The Missing Link - A Probabilistic Model of Document Content and Hypertext Connectivity

David Cohn, Burning Glass Technologies, 201 South Craig St, Suite 2W, Pittsburgh, PA 15213. david.cohn@burning-glass.com
Thomas Hofmann, Department of Computer Science, Brown University, Providence, RI 02912. th@cs.brown.edu

Abstract

We describe a joint probabilistic model of the contents and inter-connectivity of document collections such as sets of web pages or research paper archives. The model is based on a probabilistic factor decomposition and allows identifying principal topics of the collection as well as authoritative documents within those topics. Furthermore, the relationships between topics are mapped out in order to build a predictive model of link content. Among the many applications of this approach are information retrieval and search, topic identification, query disambiguation, focused web crawling, web authoring, and bibliometric analysis.

1 Introduction

No text, no paper, no book can be isolated from the all-embracing corpus of documents it is embedded in. Ideas, thoughts, and work described in a document inevitably relate to and build upon previously published material.¹ Traditionally, this interdependency has been represented by citations, which allow authors to make explicit references to related documents. More recently, a vast number of documents have been "published" electronically on the world wide web; here, interdependencies between documents take the form of hyperlinks, which allow instant access to the referenced material. We would like to have some way of modeling these interdependencies, to understand the structure implicit in the contents and connections of a given document base without resorting to manual clustering, classification, and ranking of documents.
The main goal of this paper is to present a joint probabilistic model of document content and connectivity, i.e., a parameterized stochastic process which mimics the generation of documents as part of a larger collection, and which can make accurate predictions about the existence of hyperlinks and citations. More precisely, we present an extension of our work on Probabilistic Latent Semantic Analysis (PLSA) [4, 7] and Probabilistic HITS (PHITS) [3, 8] and propose a mixture model to perform a simultaneous decomposition of the contingency tables associated with word occurrences and citations/links into "topic" factors. Such a model can be extremely useful in many applications, a few of which are:

• Identifying topics and common subjects covered by documents. Representing documents in a low-dimensional space can help understanding of relations between documents and the topics they cover. Combining evidence from terms and links yields potentially more meaningful and stable factors and better predictions.

• Identifying authoritative documents on a given topic. The authority of a document is correlated with how frequently it is cited, and by whom. Identifying topic-specific authorities is a key problem for search engines [2].

• Predictive navigation. By predicting what content might be found "behind" a link, a content/connectivity model directly supports navigation in a document collection, either through interaction with human users or for intelligent spidering.

• Web authoring support. Predictions about links based on document contents can support authoring and maintenance of hypertext documents, e.g., by (semi-)automatically improving and updating link structures.

These applications address facets of one of the most pressing challenges of the "information age": how to locate useful information in a semi-structured environment like the world wide web.

¹ Although the weakness of our memory might make us forget this at times.
Much of this difficulty, which has led to the emergence of an entire new industry, is due to the impoverished explicit structure of the web as a whole. Manually created hyperlinks and citations are limited in scope: the annotator can only add links and pointers to other documents they are aware of and have access to. Moreover, these links are static; once the annotator creates a link between documents, it is unchanging. If a different, more relevant document appears (or if the cited document disappears), the link may not get updated appropriately. These and other deficiencies make the web inherently "noisy": links between relevant documents may not exist, and existing links might sometimes be more or less arbitrary. Our model is a step towards a technology that will allow us to dynamically infer more reliable inter-document structure from the impoverished structure we observe. In the following section, we first review PLSA and PHITS. In Section 3, we show how these two models can be combined into a joint probabilistic term-citation model. Section 4 describes some of the applications of this model, along with preliminary experiments in several areas. In Section 5 we consider future directions and related research.

2 PLSA and PHITS

PLSA [7] is a statistical variant of Latent Semantic Analysis (LSA) [4] that builds a factored multinomial model based on the assumption of an underlying document generation process. The starting point of (P)LSA is the term-document matrix N of word counts, i.e., N_ij denotes how often a term (single word or phrase) t_i occurs in document d_j. In LSA, N is decomposed by an SVD and factors are identified with the left/right principal eigenvectors. In contrast, PLSA performs a probabilistic decomposition which is closely related to the non-negative matrix decomposition presented in [9].
Each factor is identified with a state z_k (1 ≤ k ≤ K) of a latent variable with associated relative frequency estimates P(t_i|z_k) for each term in the corpus. A document d_j is then represented as a convex combination of factors with mixing weights P(z_k|d_j), i.e., the predictive probabilities for terms in a particular document are constrained to be of the functional form P(t_i|d_j) = Σ_k P(t_i|z_k) P(z_k|d_j), with non-negative probabilities and two sets of normalization constraints: Σ_i P(t_i|z_k) = 1 for all k, and Σ_k P(z_k|d_j) = 1 for all j. Both the factors and the document-specific mixing weights are learned by maximizing the likelihood of the observed term frequencies. More formally, PLSA aims at maximizing L = Σ_{i,j} N_ij log Σ_k P(t_i|z_k) P(z_k|d_j). Since the factors z_k can be interpreted as states of a latent mixing variable associated with each observation (i.e., word occurrence), the Expectation-Maximization algorithm can be applied to find a local maximum of L. PLSA has been demonstrated to be effective for ad hoc information retrieval, language modeling, and clustering. Empirically, different factors usually capture distinct "topics" of a document collection; by clustering documents according to their dominant factors, useful topic-specific document clusters often emerge (using the Gaussian factors of LSA, this approach is known as "spectral clustering"). It is important to distinguish the factored model used here from standard probabilistic mixture models. In a mixture model, each object (such as a document) is usually assumed to come from one of a set of latent sources (e.g., a document is either from z_1 or z_2). Credit for the object may be distributed among several sources because of ambiguity, but the model insists that only one of the candidate sources is the true origin of the object. In contrast, a factored model assumes that each object comes from a mixture of sources; without ambiguity, it can assert that a document is half z_1 and half z_2.
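For concreteness, the EM updates for this factored model can be sketched in a few lines of NumPy. This is an illustrative, untempered sketch (the authors use tempered EM; see [7]); the function name and matrix shapes are our own conventions.

```python
import numpy as np

def plsa(N, K, iters=50, seed=0):
    """Factor a term-document count matrix N (T x D) into P(t|z) (T x K)
    and P(z|d) (K x D) by plain (untempered) EM."""
    rng = np.random.default_rng(seed)
    T, D = N.shape
    p_t_z = rng.random((T, K)); p_t_z /= p_t_z.sum(axis=0)
    p_z_d = rng.random((K, D)); p_z_d /= p_z_d.sum(axis=0)
    for _ in range(iters):
        # E-step: posterior P(z|t,d) proportional to P(t|z) P(z|d)
        joint = p_t_z[:, :, None] * p_z_d[None, :, :]   # T x K x D
        post = joint / joint.sum(axis=1, keepdims=True)
        # M-step: re-estimate from expected counts N_td * P(z|t,d)
        weighted = N[:, None, :] * post                  # T x K x D
        p_t_z = weighted.sum(axis=2); p_t_z /= p_t_z.sum(axis=0)
        p_z_d = weighted.sum(axis=0); p_z_d /= p_z_d.sum(axis=0)
    return p_t_z, p_z_d
```

Each iteration is guaranteed not to decrease the log-likelihood L above; the dense T x K x D posterior is for clarity only, and a practical implementation would exploit the sparsity of N.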
This is because the latent variables are associated with each observation and not with each document (set of observations). PHITS [3] performs a probabilistic factoring of the document citations used for bibliometric analysis. Bibliometrics attempts to identify topics in a document collection, as well as influential authors and papers on those topics, based on patterns in citation frequency. This analysis has traditionally been applied to references in printed literature, but the same techniques have proven successful in analyzing hyperlink structure on the world wide web [8]. In traditional bibliometrics, one begins with a matrix A of document-citation pairs. Entry A_ij is nonzero if and only if document d_i is cited by document d_j or, equivalently, if d_j contains a hyperlink to d_i.² The principal eigenvectors of AA' are then extracted, with each eigenvector corresponding to a "community" of roughly similar citation patterns. The coefficient of a document in one of these eigenvectors is interpreted as the "authority" of that document within the community: how likely it is to be cited within that community. A document's coefficient in the principal eigenvectors of A'A is interpreted as its "hub" value in the community: how many authoritative documents it cites within the community. In PHITS, a probabilistic model replaces the eigenvector analysis, yielding a model that has clear statistical interpretations. PHITS is mathematically identical to PLSA, with one distinction: instead of modeling the citations contained within a document (corresponding to PLSA's modeling of terms in a document), PHITS models "inlinks," the citations to a document. It substitutes a citation-source probability estimate P(c_l|z_k) for PLSA's term probability estimate. As with PLSA and spectral clustering, the principal factors of the model are interpreted as indicating the principal citation communities (and, by inference, the principal topics).
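The eigenvector analysis that PHITS replaces can be sketched with power iteration for the dominant authority and hub vectors. This is a minimal illustration under the orientation convention above (A[i, j] nonzero when d_j cites d_i); the names are ours.

```python
import numpy as np

def hits_scores(A, iters=100):
    """Dominant-eigenvector bibliometric scores from a citation matrix A.
    Authorities: principal eigenvector of A A'; hubs: of A' A."""
    auth = np.ones(A.shape[0])
    hub = np.ones(A.shape[1])
    for _ in range(iters):
        auth = A @ hub                      # cited by good hubs -> authority
        auth /= np.linalg.norm(auth)
        hub = A.T @ auth                    # cites good authorities -> hub
        hub /= np.linalg.norm(hub)
    return auth, hub
```

A document cited by many strong hubs accumulates authority, and vice versa; the iteration converges to the principal eigenvectors of AA' and A'A respectively.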
For a given factor/topic z_k, the probability that a document is cited, P(d_j|z_k), is interpreted as the document's authority with respect to that topic.

3 A Joint Probabilistic Model for Content and Connectivity

Linked and hyperlinked documents are generally composed of terms and citations; as such, both term-based PLSA and citation-based PHITS analyses are applicable. Rather than applying each separately, it is reasonable to merge the two analyses into a joint probabilistic model, explaining terms and citations in terms of a common set of underlying factors. Since both PLSA and PHITS are based on a similar decomposition, one can define the following joint model for predicting citations/links and terms in documents:

P(t_i|d_j) = Σ_k P(t_i|z_k) P(z_k|d_j),   P(c_l|d_j) = Σ_k P(c_l|z_k) P(z_k|d_j).   (1)

Notice that both decompositions share the same document-specific mixing proportions P(z_k|d_j). This couples the conditional probabilities for terms and citations: each "topic" has some probability P(c_l|z_k) of linking to document d_l as well as some probability P(t_i|z_k) of containing an occurrence of term t_i. The advantage of this joint modeling approach is that it integrates content and link information in a principled manner. Since the mixing proportions are shared, the learned decomposition must be consistent with both content and link statistics. In particular, this coupling allows the model to take evidence about link structure into account when making predictions about document content, and vice versa. Once a decomposition is learned, the model may be used to address questions like "What words are likely to be found in a document with this link structure?" or "What link structure is likely to go with this document?" by simple probabilistic inference. The relative importance one assigns to predicting terms and links will depend on the specific application.

² In fact, since multiple citations/links may exist, we treat A_ij as a count variable.
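The coupling through shared mixing weights can be made concrete in a short sketch: both decompositions are matrix products against the same P(z|d). The matrix shapes and names below are our own conventions, not the authors' implementation.

```python
import numpy as np

def joint_predict(p_t_z, p_c_z, p_z_d):
    """Joint model of Equation (1): term factors P(t|z) (T x K) and
    citation factors P(c|z) (C x K) share one set of document mixing
    weights P(z|d) (K x D)."""
    p_t_d = p_t_z @ p_z_d   # P(t_i|d_j) = sum_k P(t_i|z_k) P(z_k|d_j)
    p_c_d = p_c_z @ p_z_d   # P(c_l|d_j) = sum_k P(c_l|z_k) P(z_k|d_j)
    return p_t_d, p_c_d
```

Because P(z|d) appears in both products, evidence from a document's links constrains its term predictions and vice versa; each output column remains a proper distribution when the inputs are.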
In general, we propose maximizing the following (normalized) log-likelihood function with a relative weight α:

L = Σ_j [ α Σ_i (N_ij / Σ_{i'} N_{i'j}) log Σ_k P(t_i|z_k) P(z_k|d_j) + (1 - α) Σ_l (A_lj / Σ_{l'} A_{l'j}) log Σ_k P(c_l|z_k) P(z_k|d_j) ]   (2)

The normalization by term/citation counts ensures that each document is given the same weight in the decomposition, regardless of the number of observations associated with it. Following the EM approach, it is straightforward to derive a set of re-estimation equations. For the E-step, one gets formulae for the posterior probabilities of the latent variables associated with each observation.³

4 Experiments

In the introduction, we described many potential applications of the joint probabilistic model. Some, like classification, are simply extensions of the individual PHITS and PLSA models, relying on the increased power of the joint model to improve their performance. Others, such as intelligent web crawling, are unique to the joint model and require its simultaneous modeling of a document's contents and connections. In this section, we first describe experiments verifying that the joint model does yield improved classification compared with the individual models. We then describe a quantity called "reference flow" which can be computed from the joint model, and demonstrate its use in guiding a web crawler to pages of interest.

[Figure 1: Classification accuracy on the WebKB and Cora data sets for PHITS (α = 0), PLSA (α = 1) and the joint model (0 < α < 1).]

We used two data sets in our experiments. The WebKB data set [11] consists of approximately 6000 web pages from computer science departments, classified by school and category (student, course, faculty, etc.).

³ Our experiments used a tempered version of Equation 3 to minimize overfitting; see [7] for details.
The Cora data set [10] consists of the abstracts and references of approximately 34,000 computer science research papers; of these, we used the approximately 2000 papers categorized into one of seven subfields of machine learning.

4.1 Classification

Although the joint probabilistic model performs unsupervised learning, there are a number of ways it may be used for classification. One way is to associate each document with its dominant factor, in a form of spectral clustering. Each factor is then given the label of the dominant class among its associated documents. Test documents are judged by whether their dominant factor shares their label. Another approach to classification (but one that forgoes clustering) is a factored nearest neighbor approach. Test documents are judged against the label of their nearest neighbor, where the "nearest" neighbor is determined by the cosines of their projections in factor space. This is the method we used for our experiments. For the Cora and WebKB data, we used seven factors and six factors respectively, arbitrarily selecting the number to correspond to the number of human-derived classes. We compared the power of the joint model with that of the individual models by varying α from zero to one, with the lower and upper extremes corresponding to PHITS and PLSA, respectively. For each value of α, a randomly selected 15% of the data were reserved as a test set. The models were tempered (as per [7]) with a lower limit of β = 0.8, decreasing β by a factor of 0.9 each time the data likelihood stopped increasing. Figure 1 illustrates several results. First, the accuracy of the joint model (where α is neither 0 nor 1) is greater than that of either model in isolation, indicating that the contents and link structure of a document collection do indeed corroborate each other. Second, the increase in accuracy is robust across a wide range of mixing proportions.
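The factored nearest neighbor rule used in these experiments can be sketched directly: documents are compared by the cosine of their factor-space projections P(z|d). A minimal sketch with our own names; the real experiments of course use the learned joint decomposition.

```python
import numpy as np

def factored_nn_label(z_test, Z_train, labels):
    """Label a test document by its cosine-nearest training document in
    factor space. Z_train is (n_docs, K), rows are P(z|d); labels is a
    parallel list of class labels."""
    Zn = Z_train / np.linalg.norm(Z_train, axis=1, keepdims=True)
    zq = z_test / np.linalg.norm(z_test)
    return labels[int(np.argmax(Zn @ zq))]
```

Cosine similarity makes the rule insensitive to the overall scale of the projections, so only the mixture of topics matters.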
4.2 Reference Flow

The previous subsection demonstrated how the joint model amplifies abilities found in the individual models. But the joint model also provides features found in neither of its progenitors. A document d may be thought of as occupying a point z = (P(z_1|d), ..., P(z_K|d)) in the joint model's space of factor mixtures. The terms in d act as "signposts" describing z, and the links act as directed connections between that point and others. Together, they provide a reference flow, indicating a referential connection between one topic and another. This reference flow exists between arbitrary points in the factor space, even in the absence of documents that map directly to those points. Consider a reference from document d_i to document d_j, and two points in factor space z^m and z^n, not particularly associated with d_i or d_j. Our model allows us to compute P(d_i|z^m) and P(d_j|z^n), the probabilities that the combinations of factors at z^m and z^n are responsible for d_i and d_j respectively. Their product P(d_i|z^m) P(d_j|z^n) is then the probability that the observed link represents a reference between those two points in factor space. By integrating over all links in the corpus we can compute f_mn = Σ_{i,j} A_ij P(d_i|z^m) P(d_j|z^n), an unnormalized "reference flow" between z^m and z^n.

[Figure 2: Principal reference flow between the primary topics (student, department, faculty, course) identified in the examined subset of the WebKB archive.]

Figure 2 shows the principal reference flow between several topics in the WebKB archive.

4.3 Intelligent Web Crawling with Reference Flow

Let us suppose that we want to find new web pages on a certain topic, described by a set of words composed into a target pseudodocument d_t. We can project d_t into our model to identify the point z_t in factor space that represents that topic. Now, when we explore web pages, we want to follow links that will lead us to new documents that also project to z_t.
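The flow f_mn evaluated at the model's own factors can be sketched as a pair of matrix products. This is our own illustrative sketch: we obtain P(d|z) from P(z|d) by Bayes' rule under an assumed uniform document prior, and we take A[i, j] to count links from d_i to d_j.

```python
import numpy as np

def reference_flow(A, p_z_d):
    """Unnormalized reference flow f_mn = sum_ij A_ij P(d_i|z_m) P(d_j|z_n).
    A[i, j] counts links from d_i to d_j; p_z_d is K x D with columns P(z|d).
    P(d|z) is P(z|d) renormalized over documents (uniform document prior)."""
    p_d_z = p_z_d.T / p_z_d.sum(axis=1)   # D x K, columns are P(d|z_k)
    return p_d_z.T @ A @ p_d_z            # K x K matrix of flows f_mn
```

A large off-diagonal entry f_mn indicates that documents concentrated on topic z_m tend to link to documents concentrated on topic z_n, even if no single document sits exactly at either point.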
To do so, we can use reference flow. Consider a web page d_s (or a section of a web page⁴). Although we don't know where its links point, we do know what words it contains. We can project them as a pseudodocument to find z_s, the point in factor space the page/section occupies, prior to any information about its links. We can then use our model to compute the reference flow f_st, indicating the (unnormalized) probability that a document at z_s would contain a link to one at z_t. As a greedy solution, we could simply follow links in documents or sections that have the highest reference flow toward the target topic. Or, if computation is no barrier, we could (in theory) use reference flow as state transition probabilities and find an optimal link to follow by treating the system as a continuous-state Markov decision process.

[Figure 3: When ranked according to magnitude of reference flow to a designated target, a "true source" scores much higher than a placebo source document drawn at random.]

To test our model's utility in intelligent web crawling, we conducted experiments on the WebKB data set using the greedy solution. On each trial, a "target page" d_t was selected at random from the corpus. One "source page" d_s containing a link to the target was identified, and the reference flow f_st computed. The larger the reference flow, the stronger our model's expectation that there is a directed link from the source to the target.

⁴ Though not described here, we have had success using our model for document segmentation, following an approach similar to that of [6]. By projecting successive n-sentence windows of a document into the factored model, we can observe its trajectory through "topic space." A large jump in the factor mixture between successive windows indicates a probable topic boundary in the document.
We ranked this flow against the reference flow to the target from 100 randomly chosen "distractor" pages d_r1, d_r2, ..., d_r100. As seen in Figure 3, reference flow provides significant predictive power. Based on 2400 runs, the median rank for the "true source" was 27/100, versus a median rank of 50/100 for a "placebo" distractor chosen at random. Note that the distractors were not screened to ensure that they did not also contain links to the target; as such, some of the high-ranking distractors may also have been valid sources for the target in question.

5 Discussion and Related Work

There have been many attempts to combine link and term information on web pages, though most approaches are ad hoc and have been aimed at increasing the retrieval of authoritative documents relevant to a given query. Bharat and Henzinger [1] provide a good overview of research in that area, as well as an algorithm that computes bibliometric authority after weighting links based on the relevance of the neighboring terms. The machine learning community has also recently taken an interest in the sort of relational models studied by bibliometrics. Getoor et al. [5] describe a general framework for learning probabilistic relational models from a database, and present experiments in a variety of domains. In this paper, we have described a specific probabilistic model which attempts to explain both the contents and connections of documents in an unstructured document base. While we have demonstrated preliminary results in several application areas, this paper only scratches the surface of potential applications of a joint probabilistic document model.

References

[1] K. Bharat and M. R. Henzinger. Improved algorithms for topic distillation in hyperlinked environments. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 1998.

[2] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine.
Technical report, Computer Science Department, Stanford University, 1998.

[3] D. Cohn and H. Chang. Learning to probabilistically identify authoritative documents. In Proceedings of the 17th International Conference on Machine Learning, 2000.

[4] S. Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41:391-407, 1990.

[5] L. Getoor, N. Friedman, D. Koller, and A. Pfeffer. Learning probabilistic relational models. In S. Dzeroski and N. Lavrac, editors, Relational Data Mining. Springer-Verlag, 2001.

[6] M. Hearst. Multi-paragraph segmentation of expository text. In Proceedings of ACL, June 1994.

[7] T. Hofmann. Probabilistic latent semantic analysis. In Proceedings of the 15th Conference on Uncertainty in AI, pages 289-296, 1999.

[8] J. Kleinberg. Authoritative sources in a hyperlinked environment. In Proc. 9th ACM-SIAM Symposium on Discrete Algorithms, 1998.

[9] D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, pages 788-791, 1999.

[10] A. McCallum, K. Nigam, J. Rennie, and K. Seymore. Automating the construction of internet portals with machine learning. Information Retrieval Journal, 3:127-163, 2000.

[11] WebKB. Available electronically at http://www.cs.cmu.edu/~WebKB/.
NIPS '00

The Use of Classifiers in Sequential Inference

Vasin Punyakanok, Dan Roth
Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801
punyakan@cs.uiuc.edu, danr@cs.uiuc.edu

Abstract

We study the problem of combining the outcomes of several different classifiers in a way that provides a coherent inference that satisfies some constraints. In particular, we develop two general approaches for an important subproblem: identifying phrase structure. The first is a Markovian approach that extends standard HMMs to allow the use of a rich observation structure and of general classifiers to model state-observation dependencies. The second is an extension of constraint satisfaction formalisms. We develop efficient combination algorithms under both models and study them experimentally in the context of shallow parsing.

1 Introduction

In many situations it is necessary to make decisions that depend on the outcomes of several different classifiers in a way that provides a coherent inference that satisfies some constraints, such as the sequential nature of the data or other domain-specific constraints. Consider, for example, the problem of chunking natural language sentences, where the goal is to identify several kinds of phrases (e.g. noun phrases, verb phrases) in sentences. A task of this sort involves multiple predictions that interact in some way. For example, one way to address the problem is to utilize two classifiers for each phrase type, one of which recognizes the beginning of the phrase, and the other its end. Clearly, there are constraints over the predictions; for instance, phrases cannot overlap and there are probabilistic constraints over the order of phrases and their lengths. The above mentioned problem is an instance of a general class of problems: identifying the phrase structure in sequential data.
This paper develops two general approaches for this class of problems by utilizing general classifiers and performing inferences with their outcomes. Our formalism directly applies to natural language problems such as shallow parsing [7, 23, 5, 3, 21], computational biology problems such as identifying splice sites [8, 4, 15], and problems in information extraction [9]. Our first approach is within a Markovian framework. In this case, classifiers are functions of the observation sequence and their outcomes represent states; we study two Markov models that are used as inference procedures and differ in the type of classifiers and the details of the probabilistic modeling. The critical shortcoming of this framework is that it attempts to maximize the likelihood of the state sequence, which is not the true performance measure of interest but only a derivative of it. The second approach extends a constraint satisfaction formalism to deal with variables that are associated with costs, and shows how to use this to model the classifier combination problem. In this approach, general constraints can be incorporated flexibly, and algorithms can be developed that closely address the true global optimization criterion of interest. For both approaches we develop efficient combination algorithms that use general classifiers to yield the inference. The approaches are studied experimentally in the context of shallow parsing, the task of identifying syntactic sequences in sentences [14, 1, 11], which has been found useful in many large-scale language processing applications including information extraction and text summarization [12, 2]. Working within a concrete task allows us to compare the approaches experimentally for phrase types such as base Noun Phrases (NPs) and Subject-Verb phrases (SVs) that differ significantly in their statistical properties, including length and internal dependencies. Thus, the robustness of the approaches to deviations from their assumptions can be evaluated.
Our two main methods, projection-based Markov Models (PMM) and constraint satisfaction with classifiers (CSCL), are shown to perform very well on the task of predicting NP and SV phrases, with CSCL at least as good as any other method tried on these tasks. CSCL performs better than PMM on both tasks, more significantly so on the harder SV task. We attribute this to CSCL's ability to cope better with the length of the phrase and the long-term dependencies. Our experiments make use of the SNoW classifier [6, 24], and we provide a way to combine its scores in a probabilistic framework; we also exhibit the improvements of the standard hidden Markov model (HMM) when allowing states to depend on a richer structure of the observation via the use of classifiers.

2 Identifying Phrase Structure

The inference problem considered can be formalized as that of identifying the phrase structure of an input string. Given an input string O = ⟨o_1, o_2, ..., o_n⟩, a phrase is a substring of consecutive input symbols o_i, o_{i+1}, ..., o_j. Some external mechanism is assumed to consistently (or stochastically) annotate substrings as phrases.¹ Our goal is to come up with a mechanism that, given an input string, identifies the phrases in this string. The identification mechanism works by using classifiers that attempt to recognize, in the input string, local signals which are indicative of the existence of a phrase. We assume that the outcome of the classifier at input symbol o can be represented as a function of the local context of o in the input string, perhaps with the aid of some external information inferred from it.² Classifiers can indicate that an input symbol o is inside or outside a phrase (IO modeling), or they can indicate that an input symbol o opens or closes a phrase (OC modeling), or some combination of the two. Our work here focuses on OC modeling, which has been shown to be more robust than IO, especially with fairly long phrases [21].
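As a toy illustration of OC modeling, a naive combinator might pair predicted open positions with close positions greedily, skipping pairs that would overlap. This is our own simplification for intuition only; the paper's combinators handle the constraints far more carefully.

```python
def phrases_from_oc(opens, closes):
    """Greedily pair predicted phrase-open positions with the next
    predicted close position, rejecting overlapping phrases.
    Returns a list of (open_index, close_index) pairs."""
    phrases, last_end = [], -1
    for o in sorted(opens):
        if o <= last_end:
            continue  # would overlap the previously extracted phrase
        # first predicted close at or after this open
        c = next((c for c in sorted(closes) if c >= o), None)
        if c is None:
            break
        phrases.append((o, c))
        last_end = c
    return phrases
```

Even this crude rule enforces the no-overlap constraint mentioned above; the approaches developed in Sections 3 and 4 replace the greedy choice with probabilistic or cost-based global inference.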
In any case, the classifiers' outcomes can be combined to determine the phrases in the input string. This process, however, needs to satisfy some constraints for the resulting set of phrases to be legitimate. Several types of constraints, such as length, order, and others, can be formalized and incorporated into the approaches studied here. The goal is thus two-fold: to learn classifiers that recognize the local signals, and to combine them in a way that respects the constraints. We call the inference algorithm that combines the classifiers and outputs a coherent phrase structure a combinator. The performance of this process is measured by how accurately it retrieves the phrase structure of the input string. This is quantified in terms of recall, the percentage of phrases that are correctly identified, and precision, the percentage of identified phrases that are indeed correct phrases.

¹ We assume here a single type of phrase, and thus each input symbol is either in a phrase or outside it. All the methods can be extended to deal with several kinds of phrases in a string.

² In the case of natural language processing, if the o_i's are words in a sentence, additional information might include morphological information, part of speech tags, semantic class information from WordNet, etc. This information can be assumed to be encoded into the observed sequence.

3 Markov Modeling

An HMM is a probabilistic finite state automaton that models the probabilistic generation of sequential processes. It consists of a finite set S of states, a set O of observations, an initial state distribution P_1(s), a state-transition distribution P(s|s') (s, s' ∈ S), and an observation distribution P(o|s) (o ∈ O, s ∈ S). A sequence of observations is generated by first picking an initial state according to P_1(s); this state produces an observation according to P(o|s) and transits to a new state according to P(s|s').
This state produces the next observation, and the process goes on until it reaches a designated final state [22]. In a supervised learning task, an observation sequence O = ⟨o_1, o_2, ..., o_n⟩ is supervised by a corresponding state sequence S = ⟨s_1, s_2, ..., s_n⟩. This allows one to estimate the HMM parameters and then, given a new observation sequence, to identify the most likely corresponding state sequence. The supervision can also be supplied (see Sec. 2) using local signals from which the state sequence can be recovered. Constraints can be incorporated into the HMM by constraining the state-transition probability distribution P(s|s'). For example, set P(s|s') = 0 for all s, s' such that the transition from s' to s is not allowed.

3.1 A Hidden Markov Model Combinator

To recover the most likely state sequence in an HMM, we wish to estimate all the required probability distributions. As in Sec. 2, we assume to have local signals that indicate the state. That is, we are given classifiers with states as their outcomes. Formally, we assume that P_t(s|o) is given, where t is the time step in the sequence. In order to use this information in the HMM framework, we compute P_t(o|s) = P_t(s|o) P_t(o) / P_t(s). That is, instead of observing the conditional probability P_t(o|s) directly from training data, we compute it from the classifiers' output. Notice that in an HMM, the assumption is that the probability distributions are stationary. We can assume this for P_t(s|o), which we obtain from the classifier, but need not assume it for the other distributions, P_t(o) and P_t(s). P_t(s) can be calculated by P_t(s) = Σ_{s'∈S} P(s|s') P_{t-1}(s'), where P_1(s) and P(s|s') are the two required distributions for the HMM. We still need P_t(o), which is harder to approximate but, for each t, can be treated as a constant η_t, because the goal is to find the most likely sequence of states for the given observations, which are the same for all compared sequences.
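The scheme just described can be sketched as a Viterbi pass in which the emission term P(o_t|s) is replaced by P(s|o_t)/P_t(s), with the constant η_t dropped since it does not affect the argmax. This is an illustrative sketch under our own naming conventions (trans[s', s] = P(s|s'); p_s_o[t, s] is the classifier output P(s|o_t)), not the authors' implementation.

```python
import numpy as np

def viterbi_with_classifiers(p_s_o, trans, p1):
    """Most likely state sequence using classifier outputs P(s|o_t)
    in place of emission probabilities (eta_t dropped; the marginal
    P_t(s) is propagated as P_t(s) = sum_s' P(s|s') P_{t-1}(s'))."""
    n, S = p_s_o.shape
    eps = 1e-12
    pt = p1.copy()                          # marginal P_t(s)
    delta = np.log(p_s_o[0] + eps)          # delta_1(s) proportional to P(s|o_1)
    back = np.zeros((n, S), dtype=int)
    for t in range(1, n):
        pt = pt @ trans                     # P_t(s)
        scores = delta[:, None] + np.log(trans + eps)     # S x S over (s', s)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(p_s_o[t] + eps) - np.log(pt + eps)
    path = [int(delta.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

Working in log space avoids underflow on long sequences, and dividing by P_t(s) discounts states the model already expects to be frequent, as in the derivation above.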
With this scheme, we can still combine the classifiers' predictions by finding the most likely state sequence for an observation sequence using dynamic programming. To do so, we incorporate the classifiers' opinions in its recursive step by computing P(o_t|s) as above:

δ_t(s) = max_{s'∈S} δ_{t-1}(s') P(s|s') P(o_t|s) = max_{s'∈S} δ_{t-1}(s') P(s|s') P(s|o_t) η_t / P_t(s).

This is derived using the HMM assumptions but utilizes the classifier outputs P(s|o), allowing us to extend the notion of an observation. In Sec. 6 we estimate P(s|o) based on a whole observation sequence rather than o_t, which significantly improves the performance.

3.2 A Projection-based Markov Model Combinator

In HMMs, observations are allowed to depend only on the current state, and long-term dependencies are not modeled. Equivalently, the constraint structure is restricted by having a stationary probability distribution of a state given the previous one. We attempt to relax this by allowing the distribution of a state to depend, in addition to the previous state, on the observation. Formally, we now make the following independence assumption:

P(s_t | s_{t-1}, s_{t-2}, ..., s_1, o_t, o_{t-1}, ..., o_1) = P(s_t | s_{t-1}, o_t).

Thus, given an observation sequence O we can find the most likely state sequence S given O by maximizing

P(S|O) = Π_{t=2}^n P(s_t | s_1, ..., s_{t-1}, O) · P_1(s_1|o_1) = Π_{t=2}^n P(s_t | s_{t-1}, o_t) · P_1(s_1|o_1).

Hence, this model generalizes the standard HMM by combining the state-transition probability and the observation probability into one function. The most likely state sequence can still be recovered using the dynamic programming (Viterbi) algorithm if we modify the recursive step: δ_t(s) = max_{s'∈S} δ_{t-1}(s') P(s|s', o_t). In this model, the classifiers' decisions are incorporated in the terms P(s|s', o) and P_1(s|o). To learn these classifiers we follow the projection approach [26] and separate P(s|s', o) into many functions P_{s'}(s|o) according to the previous state s'. Hence as many as |S| classifiers, projected on the previous states, are separately trained.
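The modified recursion δ_t(s) = max_{s'} δ_{t-1}(s') P(s|s', o_t) can be sketched as follows. This is our own illustration; the function names are assumptions, and the projected classifiers are supplied as callables so that any learned model can be plugged in.

```python
import numpy as np

def pmm_viterbi(obs, states, p1_given_o, p_given_prev_o):
    """Viterbi decoding for the projection-based Markov model.

    obs            : list of observations o_1 .. o_n
    states         : list of states
    p1_given_o     : p1_given_o(s, o) ~ P_1(s|o_1)
    p_given_prev_o : p_given_prev_o(s, s_prev, o) ~ P(s|s', o_t),
                     i.e. the classifier projected on the previous state.
    Returns the most likely state sequence.
    """
    n, S = len(obs), len(states)
    delta = np.zeros((n, S))
    back = np.zeros((n, S), dtype=int)
    for j, s in enumerate(states):
        delta[0, j] = p1_given_o(s, obs[0])
    for t in range(1, n):
        for j, s in enumerate(states):
            # delta_t(s) = max_{s'} delta_{t-1}(s') P(s|s', o_t)
            scores = [delta[t - 1, i] * p_given_prev_o(s, states[i], obs[t])
                      for i in range(S)]
            back[t, j] = int(np.argmax(scores))
            delta[t, j] = scores[back[t, j]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(n - 1, 0, -1):          # trace back the best path
        path.append(int(back[t, path[-1]]))
    return [states[j] for j in reversed(path)]
```

Note that the only change relative to standard Viterbi decoding is that the transition and observation factors are merged into the single call p_given_prev_o.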
(Therefore the name "Projection-based Markov Model (PMM)".) Since these are simpler classifiers we hope that the performance will improve. As before, the question of what constitutes an observation is an issue. Sec. 6 exhibits the contribution of estimating P_{s'}(s|o) using a wider window in the observation sequence.

3.3 Related Work

Several attempts to combine classifiers, mostly neural networks, into HMMs have been made in speech recognition work in the last decade [20]. A recent work [19] is similar to our PMM but uses maximum entropy classifiers. In both cases, the attempt to combine classifiers with Markov models is motivated by an attempt to improve the existing Markov models; the belief is that this would yield better generalization than the pure observation probability estimation from the training data. Our motivation is different. The starting point is the existence of general classifiers that provide some local information on the input sequence, along with constraints on their outcomes; our goal is to use the classifiers to infer the phrase structure of the sequence in a way that satisfies the constraints. Using Markov models is only one possibility and, as mentioned earlier, not one that optimizes the real performance measure of interest. Technically, another novelty worth mentioning is that we use a wider range of observations instead of a single observation to predict a state. This certainly violates the assumption underlying HMMs but improves the performance.

4 Constraint Satisfaction with Classifiers

This section describes a different model that is based on an extension of the Boolean constraint satisfaction (CSP) formalism [17] to handle variables that are the outcome of classifiers. As before, we assume an observed string O = <o_1, o_2, ..., o_n> and local classifiers that, without loss of generality, take two distinct values, one indicating opening a phrase and a second indicating closing it (OC modeling).
The classifiers provide their output in terms of the probabilities P(o) and P(c) of opening and closing a phrase, given the observation. We extend the CSP formalism to deal with probabilistic variables (or, more generally, variables with cost) as follows. Let V be the set of Boolean variables associated with the problem, |V| = n. The constraints are encoded as clauses and, as in standard CSP modeling, the Boolean CSP becomes a CNF (conjunctive normal form) formula f. Our problem, however, is not simply to find an assignment τ : V → {0,1} that satisfies f, but rather the following optimization problem: we associate a cost function c : V → R with each variable and try to find a solution τ of f of minimum cost, c(τ) = Σ_{i=1}^n τ(v_i) c(v_i). One efficient way to use this general scheme is by encoding phrases as variables. Let E be the set of all possible phrases. Then all the non-overlapping constraints can be encoded in

∧_{e_i overlaps e_j} (¬e_i ∨ ¬e_j).

This yields a quadratic number of variables, and the constraints are binary, encoding the restriction that phrases do not overlap. A satisfying assignment for the resulting 2-CNF formula can therefore be computed in polynomial time, but the corresponding optimization problem is still NP-hard [13]. For the specific case of phrase structure, however, we can find the optimal solution in linear time. The solution to the optimization problem corresponds to a shortest path in a directed acyclic graph constructed on the observation symbols, with legitimate phrases (the variables of the CSP) as its edges and their costs as the edges' weights. The construction of the graph takes quadratic time and corresponds to constructing the 2-CNF formula above. It is not hard to see (details omitted) that each path in this graph corresponds to a satisfying assignment and the shortest path corresponds to the optimal solution. The time complexity of this algorithm is linear in the size of the graph.
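The shortest-path construction can be sketched as follows (our own illustration; the function name and the representation of candidate phrases are assumptions). Nodes are token boundaries 0..n; a zero-cost edge (i, i+1) skips a token, and each candidate phrase contributes an edge spanning it with its cost as weight, so no path can use two overlapping phrases.

```python
def best_phrases(n, phrases):
    """Select a minimum-cost set of non-overlapping phrases via a
    shortest path in a DAG over token boundaries 0..n.

    n       : number of tokens in the input string
    phrases : list of (start, end, cost) candidate phrases with
              inclusive token indices; cost < 0 for phrases worth
              selecting (e.g. -P(o)P(c) from the open/close classifiers).
    Returns (total_cost, chosen) where chosen lists (start, end) pairs.
    """
    INF = float("inf")
    # edges[i] holds (next_node, cost, phrase_or_None)
    edges = [[(i + 1, 0.0, None)] for i in range(n)] + [[]]
    for (s, e, c) in phrases:
        edges[s].append((e + 1, c, (s, e)))
    dist = [INF] * (n + 1)
    prev = [None] * (n + 1)
    dist[0] = 0.0
    for i in range(n):                      # nodes in topological order
        if dist[i] == INF:
            continue
        for (j, c, ph) in edges[i]:
            if dist[i] + c < dist[j]:       # relax the forward edge
                dist[j] = dist[i] + c
                prev[j] = (i, ph)
    chosen, j = [], n
    while j > 0:                            # recover the selected phrases
        i, ph = prev[j]
        if ph is not None:
            chosen.append(ph)
        j = i
    return dist[n], list(reversed(chosen))
```

Since all edges point forward, a single pass in topological order suffices, giving the linear-in-graph-size complexity stated above.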
The main difficulty here is to determine the cost c as a function of the confidence given by the classifiers. Our experiments revealed, though, that the algorithm is robust to reasonable modifications in the cost function. A natural cost function is to use the classifier probabilities P(o) and P(c) and define, for a phrase e = (o, c), c(e) = 1 - P(o)P(c). The interpretation is that the error in selecting e is the error in selecting either o or c, and allowing those to overlap^3. The constant in 1 - P(o)P(c) biases the minimization to prefer selecting few phrases, so instead we minimize -P(o)P(c).

5 Shallow Parsing

We use shallow parsing tasks in order to evaluate our approaches. Shallow parsing involves the identification of phrases or of words that participate in a syntactic relationship. The observation that shallow syntactic information can be extracted using local information - by examining the pattern itself, its nearby context and the local part-of-speech information - has motivated the use of learning methods to recognize these patterns [7, 23, 3, 5]. In this work we study the identification of two types of phrases, base Noun Phrases (NP) and Subject-Verb (SV) patterns. We chose these since they differ significantly in their structural and statistical properties, and this allows us to study the robustness of our methods to several assumptions. As in previous work on this problem, this evaluation is concerned with identifying one-layer NP and SV phrases, with no embedded phrases. We use the OC modeling and learn two classifiers: one predicting whether there should be an open in location t or not, and the other whether there should be a close in location t or not. For technical reasons the cases ¬o and ¬c are separated according to whether we are inside or outside a phrase. Consequently, each classifier may output three possible outcomes O, nOi, nOo (open, not open inside, not open outside) and C, nCi, nCo, respectively.
The state-transition diagram in Figure 1 captures the order constraints. Our modeling of the problem is a modification of our earlier work on this topic, which has been found to be quite successful compared to other learning methods attempted on this problem [21].

Figure 1: State-transition diagram for the phrase recognition problem.

5.1 Classification

The classifier we use to learn the states as a function of the observation is SNoW [24, 6], a multi-class classifier that is specifically tailored for large scale learning tasks. The SNoW learning architecture learns a sparse network of linear functions, in which the targets (states, in this case) are represented as linear functions over a common feature space. SNoW has already been used successfully for a variety of tasks in natural language and visual processing [10, 25]. Typically, SNoW is used as a classifier, and predicts using a winner-take-all mechanism over the activation values of the target classes. The activation value is computed using a sigmoid function over the linear sum. In the current study we normalize the activation levels of all targets to sum to 1 and output the outcomes for all targets (states). We verified experimentally on the training data that the output for each state is indeed a distribution function and can be used in further processing as P(s|o) (details omitted).

3 It is also possible to account for the classifiers' suggestions inside each phrase; details omitted.

6 Experiments

We experimented both with NPs and SVs and we show results for two different representations of the observations (that is, different feature sets for the classifiers) - part of speech (POS) information only, and POS with additional lexical information (words). The result of interest is F_β = (β² + 1) · Recall · Precision / (β² · Precision + Recall) (here β = 1). The data sets used are the standard data sets for this problem [23, 3, 21], taken from the Wall Street Journal corpus in the Penn Treebank [18].
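The F_β formula above translates directly into code; the sketch below (function name ours) is the β = 1 harmonic mean of precision and recall in the general weighted form.

```python
def f_beta(precision, recall, beta=1.0):
    """F_beta = (beta^2 + 1) * Recall * Precision / (beta^2 * Precision + Recall).

    With beta = 1 this is the usual F1 score, the harmonic mean of
    precision and recall; beta > 1 weights recall more heavily.
    """
    if precision == 0 and recall == 0:
        return 0.0   # avoid division by zero when nothing is identified
    return (beta ** 2 + 1) * recall * precision / (beta ** 2 * precision + recall)
```

For example, a system with precision 0.9 and recall 0.8 scores roughly 0.85 at β = 1.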
For NP, the training and test corpora were prepared from sections 15-18 and section 20, respectively; the SV phrase corpus was prepared from sections 1-9 for training and section 0 for testing. For each model we study three different classifiers. The simple classifier corresponds to the standard HMM in which P(o|s) is estimated directly from the data. When the observations are in terms of lexical items, the data is too sparse to yield robust estimates and these entries were left empty. The NB (naive Bayes) and SNoW classifiers use the same feature set, conjunctions of size 3 of POS tags (POS and words, resp.) in a window of size 6.

Table 1: Results (F_β=1) of different methods on NP and SV recognition

                              NP                       SV
Model  Classifier   POS only   POS+words    POS only   POS+words
HMM    SNoW           90.64      92.89        64.15      77.54
       NB             90.50      92.26        75.40      78.43
       Simple         87.83        -          64.85        -
PMM    SNoW           90.61      92.98        74.98      86.07
       NB             90.22      91.98        74.80      84.80
       Simple         61.44        -          40.18        -
CSCL   SNoW           90.87      92.88        85.36      90.09
       NB             90.49      91.95        80.63      88.28
       Simple         54.42        -          59.27        -

The first important observation is that the SV identification task is significantly more difficult than the NP task. This is consistent across all models and feature sets. When comparing between different models and feature sets, it is clear that the simple HMM formalism is not competitive with the other two models. What is interesting here is the very significant sensitivity to the feature base of the classifiers used, despite the violation of the probabilistic assumptions. For the easier NP task, the HMM model is competitive with the others when the classifiers used are NB or SNoW. In particular, the significant improvements both probabilistic methods achieve when their input is given by SNoW confirm the claim that the output of SNoW can be used reliably as a probabilistic classifier. PMM and CSCL perform very well on predicting NP and SV phrases, with CSCL at least as good as any other method tried on these tasks.
Both for NPs and SVs, CSCL performs better than the others, more significantly on the harder SV task. We attribute this to CSCL's ability to cope better with the length of the phrase and the long-term dependencies.

7 Conclusion

We have addressed the problem of combining the outcomes of several different classifiers in a way that provides a coherent inference that satisfies some constraints. This can be viewed as a concrete instantiation of the Learning to Reason framework [16]. The focus here is on an important subproblem, the identification of phrase structure. We presented two approaches: a probabilistic framework that extends HMMs in two ways, and an approach that is based on an extension of the CSP formalism. In both cases we developed efficient combination algorithms and studied them empirically. It seems that the CSP formalism can support the desired performance measure as well as complex constraints and dependencies more flexibly than the Markovian approach. This is supported by the experimental results that show that CSCL yields better results, in particular for the more complex case of SV phrases. As a side effect, this work exhibits the use of general classifiers within a probabilistic framework. Future work includes extensions to deal with more general constraints by exploiting more general probabilistic structures and generalizing the CSP approach.

Acknowledgments

This research is supported by NSF grants IIS-9801638 and IIS-9984168.

References

[1] S. P. Abney. Parsing by chunks. In R. C. Berwick, S. P. Abney, and C. Tenny, editors, Principle-Based Parsing: Computation and Psycholinguistics, pages 257-278. Kluwer, Dordrecht, 1991.
[2] D. Appelt, J. Hobbs, J. Bear, D. Israel, and M. Tyson. FASTUS: A finite-state processor for information extraction from real-world text. In Proc. of IJCAI, 1993.
[3] S. Argamon, I. Dagan, and Y. Krymolowski. A memory-based approach to learning shallow natural language patterns.
Journal of Experimental and Theoretical Artificial Intelligence, special issue on memory-based learning, 10:1-22, 1999.
[4] C. Burge and S. Karlin. Finding the genes in genomic DNA. Current Opinion in Structural Biology, 8:346-354, 1998.
[5] C. Cardie and D. Pierce. Error-driven pruning of treebank grammars for base noun phrase identification. In Proceedings of ACL-98, pages 218-224, 1998.
[6] A. Carlson, C. Cumby, J. Rosen, and D. Roth. The SNoW learning architecture. Technical Report UIUCDCS-R-99-2101, UIUC Computer Science Department, May 1999.
[7] K. W. Church. A stochastic parts program and noun phrase parser for unrestricted text. In Proc. of ACL Conference on Applied Natural Language Processing, 1988.
[8] J. W. Fickett. The gene identification problem: An overview for developers. Computers and Chemistry, 20:103-118, 1996.
[9] D. Freitag and A. McCallum. Information extraction using HMMs and shrinkage. In Papers from the AAAI-99 Workshop on Machine Learning for Information Extraction, pages 31-36, 1999.
[10] A. R. Golding and D. Roth. A Winnow-based approach to context-sensitive spelling correction. Machine Learning, 34(1-3):107-130, 1999.
[11] G. Grefenstette. Evaluation techniques for automatic semantic extraction: comparing semantic and window based approaches. In ACL'93 Workshop on the Acquisition of Lexical Knowledge from Text, 1993.
[12] R. Grishman. The NYU system for MUC-6 or where's the syntax? In B. Sundheim, editor, Proceedings of the Sixth Message Understanding Conference. Morgan Kaufmann Publishers, 1995.
[13] D. Gusfield and L. Pitt. A bounded approximation for the minimum cost 2-SAT problem. Algorithmica, 8:103-117, 1992.
[14] Z. S. Harris. Co-occurrence and transformation in linguistic structure. Language, 33(3):283-340, 1957.
[15] D. Haussler. Computational genefinding. Trends in Biochemical Sciences, Supplementary Guide to Bioinformatics, pages 12-15, 1998.
[16] R. Khardon and D. Roth. Learning to reason. Journal of the ACM, 44(5):697-725, September 1997.
[17] A. Mackworth. Constraint satisfaction. In S. C. Shapiro, editor, Encyclopedia of Artificial Intelligence, pages 285-293. Volume 1, second edition, 1992.
[18] M. P. Marcus, B. Santorini, and M. Marcinkiewicz. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, June 1993.
[19] A. McCallum, D. Freitag, and F. Pereira. Maximum entropy Markov models for information extraction and segmentation. In Proceedings of ICML-2000, 2000. To appear.
[20] N. Morgan and H. Bourlard. Continuous speech recognition. IEEE Signal Processing Magazine, 12(3):24-42, 1995.
[21] M. Munoz, V. Punyakanok, D. Roth, and D. Zimak. A learning approach to shallow parsing. In EMNLP-VLC'99, 1999.
[22] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-285, 1989.
[23] L. A. Ramshaw and M. P. Marcus. Text chunking using transformation-based learning. In Proceedings of the Third Annual Workshop on Very Large Corpora, 1995.
[24] D. Roth. Learning to resolve natural language ambiguities: A unified approach. In Proceedings of the National Conference on Artificial Intelligence, pages 806-813, 1998.
[25] D. Roth, M.-H. Yang, and N. Ahuja. Learning to recognize objects. In CVPR'00, The IEEE Conference on Computer Vision and Pattern Recognition, pages 724-731, 2000.
[26] L. G. Valiant. Projection learning. In Proceedings of the Conference on Computational Learning Theory, pages 287-293, 1998.
Finding the Key to a Synapse

Thomas Natschläger & Wolfgang Maass
Institute for Theoretical Computer Science
Technische Universität Graz, Austria
{tnatschl, maass}@igi.tu-graz.ac.at

Abstract

Experimental data have shown that synapses are heterogeneous: different synapses respond with different sequences of amplitudes of postsynaptic responses to the same spike train. Neither the role of synaptic dynamics itself nor the role of the heterogeneity of synaptic dynamics for computations in neural circuits is well understood. We present in this article methods that make it feasible to compute for a given synapse with known synaptic parameters the spike train that is optimally fitted to the synapse, for example in the sense that it produces the largest sum of postsynaptic responses. To our surprise we find that most of these optimally fitted spike trains match common firing patterns of specific types of neurons that are discussed in the literature.

1 Introduction

A large number of experimental studies have shown that biological synapses have an inherent dynamics, which controls how the pattern of amplitudes of postsynaptic responses depends on the temporal pattern of the incoming spike train. Various quantitative models have been proposed involving a small number of characteristic parameters, that allow us to predict the response of a given synapse to a given spike train once proper values for these characteristic synaptic parameters have been found. The analysis of this article is based on the model of [1], where three parameters U, F, D control the dynamics of a synapse and a fourth parameter A - which corresponds to the synaptic "weight" in static synapse models - scales the absolute sizes of the postsynaptic responses. The resulting model predicts the amplitude A_k for the kth spike in a spike train with interspike intervals (ISIs) Δ_1, Δ_2, ...,
Δ_{k-1} through the equations^1

A_k = A · u_k · R_k
u_k = U + u_{k-1}(1 - U) exp(-Δ_{k-1}/F)                         (1)
R_k = 1 + (R_{k-1} - u_{k-1}R_{k-1} - 1) exp(-Δ_{k-1}/D)

which involve two hidden dynamic variables u ∈ [0,1] and R ∈ [0,1] with the initial conditions u_1 = U and R_1 = 1 for the first spike. These dynamic variables evolve in dependence of the synaptic parameters U, F, D and the interspike intervals of the incoming spike train.^2

1 To be precise: the term u_{k-1}R_{k-1} in Eq. (1) was erroneously replaced by u_k R_{k-1} in the corresponding Eq. (2) of [1]. The model that they actually fitted to their data is the model considered in this article.

Figure 1: Synaptic heterogeneity. A The parameters U, D, and F can be determined for biological synapses. Shown is the distribution of values for inhibitory synapses investigated in [2], which can be grouped into three major classes: facilitating (F1), depressing (F2) and recovering (F3). B Synapses produce quite different outputs for the same input for different values of the parameters U, D, and F. Shown are the amplitudes u_k · R_k (height of vertical bar) of the postsynaptic response of an F1-type and an F2-type synapse to an irregular input spike train. The parameters for synapses F1 and F2 are the mean values for the synapse types F1 and F2 reported in [2]: (U, D, F) = (0.16, 45 msec, 376 msec) for F1, and (0.25, 706 msec, 21 msec) for F2.

It is reported in [2] that the synaptic parameters U, F, D are quite heterogeneous, even within a single neural circuit (see Fig. 1A). Note that the time constants D and F are in the range of a few hundred msec. The synapses investigated in [2] can be grouped into three major classes: facilitating (F1), depressing (F2) and recovering (F3). Fig. 1B compares the output of a typical F1-type and a typical F2-type synapse in response to a typical irregular spike train.
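Equations (1) translate directly into code. The following sketch (function name and interface are ours, not from the paper) iterates the two hidden variables over a list of ISIs and returns the resulting amplitude sequence:

```python
import math

def synaptic_responses(isis, U, D, F, A=1.0):
    """Amplitudes A_k = A * u_k * R_k of the synapse model of Eq. (1).

    isis    : interspike intervals Delta_1 .. Delta_{N-1}
              (same time unit as D and F)
    U, D, F : synaptic parameters (utilization, depression and
              facilitation time constants)
    A       : synaptic weight, scales all amplitudes
    Returns the list [A_1, ..., A_N] for the N spikes.
    """
    u, R = U, 1.0                      # initial conditions u_1 = U, R_1 = 1
    amps = [A * u * R]
    for dt in isis:
        u_next = U + u * (1.0 - U) * math.exp(-dt / F)           # facilitation
        R_next = 1.0 + (R - u * R - 1.0) * math.exp(-dt / D)     # depression
        u, R = u_next, R_next
        amps.append(A * u * R)
    return amps
```

Two sanity checks follow from the equations: the first amplitude is always A·U, and after a very long pause u and R relax back to U and 1, so the synapse recovers its initial response.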
One can see that the same input spike train yields markedly different outputs at these two synapses. In this article we address the question which temporal pattern of a spike train is optimally fitted to a given synapse characterized by the three parameters U, F, D in a certain sense. One possible choice is to look for the temporal pattern of a spike train which produces the largest integral of synaptic current. Note that in the case where the dendritic integration is approximately linear, the integral of synaptic current is proportional to the sum Σ_{k=1}^N A · u_k · R_k of postsynaptic responses. We would like to stress that the computational methods we will present are not restricted to any particular choice of the optimality criterion. For example, one can use them also to compute the spike train which produces the largest peak of the postsynaptic membrane voltage. However, in the following we will focus on the question which temporal pattern of a spike train produces the largest sum Σ_{k=1}^N A · u_k · R_k of postsynaptic responses (or, equivalently, the largest integral of postsynaptic current). More precisely, we fix a time interval T, a minimum value Δ_min for ISIs, a natural number N, and synaptic parameters U, F, D. We then look for the spike train with N spikes during T and ISIs ≥ Δ_min that maximizes Σ_{k=1}^N A · u_k · R_k. Hence we seek a solution, that is, a sequence of ISIs Δ_1, Δ_2, ..., Δ_{N-1}, to the optimization problem

maximize Σ_{k=1}^N A · u_k · R_k   subject to   Σ_{k=1}^{N-1} Δ_k ≤ T and Δ_min ≤ Δ_k, 1 ≤ k < N.   (2)

In Section 2 of this article we present an algorithmic approach based on dynamic programming that is guaranteed to find the optimal solution of this problem (up to discretization errors), and exhibit for major types of synapses temporal patterns of spike trains that are optimally fitted to these synapses.

2 It should be noted that this deterministic model predicts the cumulative response of a population of stochastic release sites that make up a synaptic connection.
In Section 3 we present a faster heuristic method for computing optimally fitted spike trains, and apply it to analyze how their temporal pattern depends on the number N of allowed spikes during the time interval T, i.e., on the firing rate f = N/T. Furthermore, we analyze in Section 3 how changes in the synaptic parameters U, F, D affect the temporal pattern of the optimally fitted spike train.

2 Computing Optimal Spike Trains for Common Types of Synapses

Dynamic Programming. For T = 1000 msec and N = 10 there are about 2^100 spike trains among which one wants to find the optimally fitted one. We show that a computationally feasible solution to this complex optimization problem can be achieved via dynamic programming. We refer to [3] for the mathematical background of this technique, which also underlies the computation of optimal policies in reinforcement learning. We consider the discrete time dynamic system described by the equations

x_1 = (U, 1, 0)   and   x_{k+1} = g(x_k, a_k)   for k = 1, ..., N - 1   (3)

where x_k describes the state of the system at step k, and a_k is the "control" or "action" taken at step k. In our case x_k is the triple (u_k, R_k, t_k) consisting of the values of the dynamic variables u and R used to calculate the amplitude A · u_k · R_k of the kth postsynaptic response, and the time t_k of the arrival of the kth spike at the synapse. The "action" a_k is the length Δ_k ∈ [Δ_min, T - t_k] of the kth ISI in the spike train that we construct, where Δ_min is the smallest possible size of an ISI (we have set Δ_min = 5 msec in our computations). As the function g in Eq. (3) we take the function which maps (u_k, R_k, t_k) and Δ_k via Eq. (1) onto (u_{k+1}, R_{k+1}, t_{k+1}) for t_{k+1} = t_k + Δ_k. The "reward" for the kth spike is A · u_k · R_k, i.e., the amplitude of the postsynaptic response for the kth spike. Hence maximizing the total reward J(x_1) = Σ_{k=1}^N A · u_k · R_k is equivalent to solving the maximization problem (2).
The maximal possible value of J_1(x_1) can be computed exactly via the equations

J_N(x_N) = A · u_N · R_N
J_k(x_k) = max_{Δ ∈ [Δ_min, T - t_k]} ( A · u_k · R_k + J_{k+1}(g(x_k, Δ)) )   (4)

backwards from k = N - 1 to k = 1. Thus the optimal sequence a_1, ..., a_{N-1} of "actions" is the sequence Δ_1, ..., Δ_{N-1} of ISIs that achieves the maximal possible value of Σ_{k=1}^N A · u_k · R_k. Note that the evaluation of J_k(x_k) for a single value of x_k requires the evaluation of J_{k+1}(x_{k+1}) for many different values of x_{k+1}.^3

The "Key" to a Synapse. We have applied the dynamic programming approach to three major types of synapses reported in [2]. The results are summarized in Fig. 2 to Fig. 5. We refer informally to the temporal pattern of N spikes that maximizes the response of a particular synapse as the "key" to this synapse. It is shown in Fig. 3 that the "keys" for the inhibitory synapses F1 and F2 are rather specific in the sense that they exhibit a substantially smaller postsynaptic response on any other of the major types of inhibitory synapses reported in [2]. The specificity of a "key" to a synapse is most pronounced for spiking frequencies f below 20 Hz. One may speculate that due to this feature a neuron can activate, even without changing its firing rate, a particular subpopulation of its target neurons by generating a series of action potentials with a suitable temporal pattern, see

3 When one solves Eq. (4) on a computer, one has to replace the continuous state variable x_k by a discrete variable, and round the result of g to the nearest value of the corresponding discrete variable. For more details about the discretization of the model we refer the reader to [4].

Figure 2: Spike trains that maximize the sum of postsynaptic responses for three common types of synapses (T = 0.8 sec, N = 15 spikes).
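The backward recursion (4) can be sketched in a few lines. The code below is our own illustration, not the authors' implementation: it uses a deliberately coarse ISI grid and state discretization (footnote 3) so that it runs quickly, sets A = 1, and the function name and interface are assumptions.

```python
import math
from functools import lru_cache

def optimal_spike_train(U, D, F, T, N, dt=5.0, grid=1000):
    """Discretized dynamic programming for problem (2): find the ISI
    sequence maximizing sum_k u_k * R_k (A = 1) with N spikes in [0, T]
    and ISIs that are multiples of dt >= Delta_min = dt.

    The continuous state (u, R) is rounded onto a grid x grid lattice,
    as in footnote 3. Returns (best_value, isis).
    """
    steps = int(T / dt)

    def advance(u, R, delta):
        # one application of Eq. (1) for an ISI of length delta
        u2 = U + u * (1.0 - U) * math.exp(-delta / F)
        R2 = 1.0 + (R - u * R - 1.0) * math.exp(-delta / D)
        return u2, R2

    @lru_cache(maxsize=None)
    def J(k, ui, Ri, t):
        # J_k evaluated at the discretized state (ui/grid, Ri/grid, t*dt)
        u, R = ui / grid, Ri / grid
        reward = u * R
        if k == N:
            return reward, ()
        best, best_isis = None, ()
        # leave room for the remaining N-k-1 ISIs of at least dt each
        for step in range(1, steps - t - (N - k - 1) + 1):
            u2, R2 = advance(u, R, step * dt)
            val, isis = J(k + 1, round(u2 * grid), round(R2 * grid), t + step)
            if best is None or reward + val > best:
                best, best_isis = reward + val, (step * dt,) + isis
        if best is None:
            return reward, ()
        return best, best_isis

    value, isis = J(1, round(U * grid), grid, 0)
    return value, list(isis)
```

The memoization over discretized states is what keeps the search tractable compared to enumerating all spike trains explicitly; a finer grid and smaller dt trade running time for accuracy.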
The parameters for synapses F1, F2, and F3 are the mean values for the synapse types F1, F2 and F3 reported in [2]: (U, D, F) = (0.16, 45 msec, 376 msec) for F1, (0.25, 706 msec, 21 msec) for F2, and (0.32, 144 msec, 62 msec) for F3.

Figure 3: Specificity of optimal spike trains. The optimal spike trains for synapses F1 and F2 - the "keys" to the synapses F1 and F2 obtained for T = 0.8 sec and N = 15 spikes - are tested on the synapses F1 and F2. If the "key" to synapse F1 (F2) is tested on the synapse F1 (F2), this synapse produces the maximal (100%) postsynaptic response. If on the other hand the "key" to synapse F1 (F2) is tested on synapse F2 (F1), this synapse produces significantly less postsynaptic response.

Fig. 4. Recent experiments [5, 6] show that neuromodulators can control the firing mode of cortical neurons. In [5] it is shown that bursting neurons may switch to regular firing if norepinephrine is applied. Together with the specificity of synapses to certain temporal patterns, these findings point to one possible mechanism how neuromodulators can change the effective connectivity of a neural circuit.

Relation to discharge patterns. A noteworthy aspect of the "keys" shown in Fig. 2 (and in Fig. 6 and Fig. 7) is that they correspond to common firing patterns ("accommodating", "non-accommodating", "stuttering", "bursting" and "regular firing") of neocortical interneurons reported under controlled conditions in vitro [2, 5] and in vivo [7]. For example, the temporal patterns of the "keys" to the synapses F1, F2, and F3 are similar to the discharge patterns of "accommodating" [2], "bursting" [5, 7], and "stuttering" [2] cells, respectively.

What is the role of the parameter A? Another interesting effect arises if one compares the optimal values of the sum Σ_{k=1}^N u_k · R_k (i.e.
A = 1) for synapses F1, F2, and F3 (see Fig. 5A) with the maximal values of Σ_{k=1}^N A · u_k · R_k (see Fig. 5B), where we have set

Figure 4: Preferential addressing of postsynaptic targets. Due to the specificity of a "key" to a synapse, a presynaptic neuron may address (i.e., evoke a stronger response at) either neuron A or B, depending on the temporal pattern of the spike train (with the same frequency f = N/T) it produces (T = 0.8 sec and N = 15 in this example).

Figure 5: A Absolute values of the sums Σ_{k=1}^N u_k · R_k if the key to synapse F_i is applied to synapse F_i, i = 1, 2, 3. B Same as panel A except that the value of Σ_{k=1}^N A · u_k · R_k is plotted. For A we used the value of G_max (in nS) reported in [2]. The quotient max/min is 1.3, compared to 2.13 in panel A.

A equal to the value of G_max reported in [2]. Whereas the values of G_max vary strongly among different synapse types (see Fig. 5B), the resulting maximal response of a synapse to its proper "key" is almost the same for each synapse. Hence, one may speculate that the system is designed in such a way that each synapse should have an equal influence on the postsynaptic neuron when it receives its optimal spike train. However, this effect is most evident for a spiking frequency f = N/T of 10 Hz and vanishes for higher frequencies.

3 Exploring the Parameter Space

Sequential Quadratic Programming. The numerical approach for approximately computing optimal spike trains that was used in Section 2 is sufficiently fast so that an average PC can carry out any of the computations whose results were reported in Fig. 2 within a few hours.
To be able to address computationally more expensive issues we used a nonlinear optimization algorithm known as "sequential quadratic programming" (SQP)^4, which is the state of the art approach for heuristically solving constrained optimization problems such as (2). We refer the reader to [8] for the mathematical background of this technique and to [4] for more details about the application of SQP for approximately computing optimal spike trains.

Optimal Spike Trains for Different Firing Rates. First we used SQP to explore the effect of the spike frequency f = N/T on the temporal pattern of the optimal spike train. For the synapses F1, F2, and F3 we computed the optimal spike trains for frequencies

4 We used the implementation (function constr) which is contained in the MATLAB Optimization Toolbox (see http://www.mathworks.com/products/optimization/).

Figure 6: Dependence of the optimal spike train of the synapses F1, F2, and F3 on the spike frequency f = N/T (T = 1 sec, N = 15, ..., 40).
0.60 0.50 0.45 I I I I I I I I I I I I I I I I II 0.40 I I I I I I I I I I I I I I I I II 0.35 ::) 0.30 I I I I I I I I I I I I I I I I II II II II II II II II I II 0.25 II II II II II II II III 0.20 I III III III III 1111 0.15 1111 1111 1111 0.10 11111 11111 11111 i i i i 0 0.25 0 .5 0.75 time [sec] Figure 7: Dependence of the optimal spike train on the synaptic parameter U. It is shown how the optimal spike train changes if the parameter U is varied. The other two parameters are set to the value corresponding to synapse F3: D = 144 msec and F = 62 msec. The black bar to the left marks the range of values (mean ± std) reported in [2] for the parameter U. To the right of each spike train we have plotted the corresponding value of J = Ef=i ukRk (gray bars). ranging from 15 Hz to 40 Hz. The results are summarized in Fig. 6. For synapses Fi and F2 the characteristic spike pattern (Fi ... accommodating, F2 ... stuttering) is the same for all frequencies. In contrast, the optimal spike train for synapse F3 has a phase transition from "stuttering" to "non-accommodating" at about 20 Hz. The Impact of Individual Synaptic Parameters We will now address the question how the optimal spike train depends on the individual synaptic parameters U, F, and D. The results for the case of F3-type synapses and the parameter U are summarized in Fig. 7. For results with regard to other parameters and synapse types we refer to [4]. We have marked in Fig. 7 with a black bar the range of U for F3-type synapses reported in [2]. It can be seen that within this parameter range we find "regu]ar" and "bursting" spike patterns. Note that the sum of postsynaptic responses J (gray horizontal bars in Fig. 7) is not proportional to U. While U increases from 0.1 to 0.6 (6 fold change) J only increases by a factor of 2. 
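The quantity J = Σ_k u_k·R_k plotted in Fig. 7 is easy to evaluate for any candidate spike train. The Python sketch below uses the standard facilitation/depression recursions of the dynamic synapse model of [1, 2]; the initial conditions u_1 = U, R_1 = 1, the mid-range value U = 0.5, and the regular 15-spike test train are illustrative assumptions, not details taken from the text.

```python
import numpy as np

def synaptic_responses(spike_times, U, D, F):
    """Per-spike response magnitudes u_k * R_k of a dynamic synapse.

    Facilitation/depression recursions (Markram-Tsodyks form, assumed here):
        u_1 = U, R_1 = 1, and for inter-spike interval dt:
            u_{k+1} = U + u_k * (1 - U) * exp(-dt / F)
            R_{k+1} = 1 + (R_k - u_k * R_k - 1) * exp(-dt / D)
    """
    u, R = U, 1.0
    responses = [u * R]
    for dt in np.diff(spike_times):
        # right-hand sides use the previous (u_k, R_k), as in the recursion
        u, R = (U + u * (1 - U) * np.exp(-dt / F),
                1 + (R - u * R - 1) * np.exp(-dt / D))
        responses.append(u * R)
    return np.array(responses)

# J = sum_k u_k R_k for a regular 15-spike train over 0.8 s, with the
# F3-type time constants from the text (D = 144 msec, F = 62 msec)
train = np.linspace(0.0, 0.8, 15)
resp = synaptic_responses(train, U=0.5, D=0.144, F=0.062)
J = resp.sum()
```

With these F3-type parameters the synapse depresses, so successive responses shrink toward a steady-state value and J is dominated by the first few spikes.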
This weak dependence of J on U seems interesting, since the parameter U is closely related to the initial release probability of a synapse, and it is a common assumption that the "strength" of a synapse is proportional to its initial release probability.

4 Discussion

We have presented two complementary computational approaches for computing spike trains that optimize a given response criterion for a given synapse. One of these methods is based on dynamic programming (similar to that used in reinforcement learning), the other on sequential quadratic programming. These computational methods are not restricted to any particular choice of the optimality criterion or of the synaptic model. In [4], applications of these methods to other optimality criteria, e.g. maximizing the specificity, are discussed. It turns out that the spike trains that maximize the response of F1-, F2- and F3-type synapses (see Fig. 1) are well-known firing patterns such as "accommodating", "bursting" and "regular firing" of specific neuron types. Furthermore, for F1- and F3-type synapses the optimal spike train agrees with the most often found firing pattern of presynaptic neurons reported in [2], whereas for F2-type synapses there is no such agreement; see [4]. This observation provides a first glimpse at a possible functional role of the specific combinations of synapse types and neuron types that was recently found in [2]. Another noteworthy aspect of the optimal spike trains is their specificity for a given synapse (see Fig. 3): suitable temporal firing patterns preferentially activate specific types of synapses. One potential functional role of such specificity to temporal firing patterns is the possibility of preferential addressing of postsynaptic target neurons (see Fig. 4). Note that there is experimental evidence that cortical neurons can switch their intrinsic firing behavior from "bursting" to "regular" depending on neuromodulator-mediated inputs [5, 6].
These findings provide support for the idea of preferential addressing of postsynaptic targets implemented by the interplay of dynamic synapses and the intrinsic firing behavior of the presynaptic neuron. Furthermore, our analysis provides a platform for a deeper understanding of the specific role of different synaptic parameters, because with the help of the computational techniques that we have introduced one can now see directly how the temporal structure of the optimal spike train for a synapse depends on the individual synaptic parameters. We believe that this inverse analysis is essential for understanding the computational role of neural circuits.

References

[1] H. Markram, Y. Wang, and M. Tsodyks. Differential signaling via the same axon of neocortical pyramidal neurons. Proc. Natl. Acad. Sci., 95:5323–5328, 1998.
[2] A. Gupta, Y. Wang, and H. Markram. Organizing principles for a diversity of GABAergic interneurons and synapses in the neocortex. Science, 287:273–278, 2000.
[3] D. P. Bertsekas. Dynamic Programming and Optimal Control, Volume 1. Athena Scientific, Belmont, Massachusetts, 1995.
[4] T. Natschläger and W. Maass. Computing the optimally fitted spike train for a synapse. Submitted for publication; electronically available via http://www.igi.TUGraz.at/igi/tnatschl/psfiles/synkey-journal.ps.gz, 2000.
[5] Z. Wang and D. A. McCormick. Control of firing mode of corticotectal and corticopontine layer V burst-generating neurons by norepinephrine. Journal of Neuroscience, 13(5):2199–2216, 1993.
[6] J. C. Brumberg, L. G. Nowak, and D. A. McCormick. Ionic mechanisms underlying repetitive high frequency burst firing in supragranular cortical neurons. Journal of Neuroscience, 20(1):4829–4843, 2000.
[7] M. Steriade, I. Timofeev, N. Dürmüller, and F. Grenier. Dynamic properties of corticothalamic neurons and local cortical interneurons generating fast rhythmic (30–40 Hz) spike bursts. Journal of Neurophysiology, 79:483–490, 1998.
[8] M. J. D. Powell. Variable metric methods for constrained optimization. In A. Bachem, M. Grotschel, and B. Korte, editors, Mathematical Programming: The State of the Art, pages 288–311. Springer-Verlag, 1983.
2000
The Unscented Particle Filter

Rudolph van der Merwe
Oregon Graduate Institute, Electrical and Computer Engineering
P.O. Box 91000, Portland, OR 97006, USA
rvdmerwe@ece.ogi.edu

Nando de Freitas
UC Berkeley, Computer Science
387 Soda Hall, Berkeley, CA 94720-1776, USA
jfgf@cs.berkeley.edu

Arnaud Doucet
Cambridge University Engineering Department
Cambridge CB2 1PZ, England
ad2@eng.cam.ac.uk

Eric Wan
Oregon Graduate Institute, Electrical and Computer Engineering
P.O. Box 91000, Portland, OR 97006, USA
ericwan@ece.ogi.edu

Abstract

In this paper, we propose a new particle filter based on sequential importance sampling. The algorithm uses a bank of unscented filters to obtain the importance proposal distribution. This proposal has two very "nice" properties. Firstly, it makes efficient use of the latest available information and, secondly, it can have heavy tails. As a result, we find that the algorithm outperforms standard particle filtering and other nonlinear filtering methods very substantially. This experimental finding is in agreement with the theoretical convergence proof for the algorithm. The algorithm also includes resampling and (possibly) Markov chain Monte Carlo (MCMC) steps.

1 Introduction

Filtering is the problem of estimating the states (parameters or hidden variables) of a system as a set of observations becomes available on-line. This problem is of paramount importance in many fields of science, engineering and finance. To solve it, one begins by modelling the evolution of the system and the noise in the measurements. The resulting models typically exhibit complex nonlinearities and non-Gaussian distributions, thus precluding analytical solution. The best known algorithm to solve the problem of non-Gaussian, nonlinear filtering (filtering for short) is the extended Kalman filter (Anderson and Moore 1979). This filter is based upon the principle of linearising the measurements and evolution models using Taylor series expansions.
The series approximations in the EKF algorithm can, however, lead to poor representations of the nonlinear functions and probability distributions of interest. As a result, this filter can diverge. Recently, Julier and Uhlmann (Julier and Uhlmann 1997) have introduced a filter founded on the intuition that it is easier to approximate a Gaussian distribution than it is to approximate arbitrary nonlinear functions. They named this filter the unscented Kalman filter (UKF). They have shown that the UKF leads to more accurate results than the EKF and that, in particular, it generates much better estimates of the covariance of the states (the EKF seems to underestimate this quantity). The UKF has, however, the limitation that it does not apply to general non-Gaussian distributions. Another popular solution strategy for the general filtering problem is to use sequential Monte Carlo methods, also known as particle filters (PFs): see for example (Doucet, Godsill and Andrieu 2000, Doucet, de Freitas and Gordon 2001, Gordon, Salmond and Smith 1993). These methods allow for a complete representation of the posterior distribution of the states, so that any statistical estimates, such as the mean, modes, kurtosis and variance, can be easily computed. They can, therefore, deal with any nonlinearities or distributions. PFs rely on importance sampling and, as a result, require the design of proposal distributions that can approximate the posterior distribution reasonably well. In general, it is hard to design such proposals. The most common strategy is to sample from the probabilistic model of the states' evolution (the transition prior). This strategy can, however, fail if the new measurements appear in the tail of the prior or if the likelihood is too peaked in comparison to the prior.
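The intuition above — that it is easier to approximate a distribution than an arbitrary nonlinearity — is captured by the unscented transform at the core of the UKF. The following minimal Python sketch is illustrative only; the scaling parameter κ = 2 and the toy nonlinearity x² are our choices, not taken from the papers cited.

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=2.0):
    """Propagate a Gaussian (mean, cov) through a nonlinearity f via sigma points."""
    n = mean.shape[0]
    sqrt_cov = np.linalg.cholesky((n + kappa) * cov)
    # 2n + 1 deterministically chosen sigma points
    sigmas = [mean] + [mean + sqrt_cov[:, i] for i in range(n)] \
                    + [mean - sqrt_cov[:, i] for i in range(n)]
    weights = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))
    ys = np.array([f(s) for s in sigmas])
    y_mean = weights @ ys
    y_cov = sum(w * np.outer(y - y_mean, y - y_mean) for w, y in zip(weights, ys))
    return y_mean, y_cov

m, P = np.zeros(1), np.eye(1)
y_mean, y_cov = unscented_transform(m, P, lambda x: x ** 2)
```

For x ~ N(0, 1) and f(x) = x², the sigma-point estimates recover the exact mean E[x²] = 1 and variance Var[x²] = 2, whereas an EKF-style first-order linearisation about the mean would predict mean 0 and variance 0 — an extreme case of the covariance underestimation mentioned above.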
This situation does indeed arise in several areas of engineering and finance, where one can encounter sensors that are very accurate (peaked likelihoods) or data that undergoes sudden changes (nonstationarities): see for example (Pitt and Shephard 1999, Thrun 2000). To overcome this problem, several techniques based on linearisation have been proposed in the literature (de Freitas 1999, de Freitas, Niranjan, Gee and Doucet 2000, Doucet et al. 2000, Pitt and Shephard 1999). For example, in (de Freitas et al. 2000), the EKF Gaussian approximation is used as the proposal distribution for a PF. In this paper, we follow the same approach, but replace the EKF proposal by a UKF proposal. The resulting filter should perform better not only because the UKF is more accurate, but because it also allows one to control the rate at which the tails of the proposal distribution go to zero. It thus becomes possible to adopt heavier-tailed distributions as proposals and, consequently, to obtain better importance samplers (Gelman, Carlin, Stern and Rubin 1995). Readers are encouraged to consult our technical report for further results and implementation details (van der Merwe, Doucet, de Freitas and Wan 2000).¹

2 Dynamic State Space Model

We apply our algorithm to general state space models consisting of a transition equation p(x_t | x_{t-1}) and a measurement equation p(y_t | x_t). That is, the states follow a Markov process and the observations are assumed to be independent given the states. For example, if we are interested in nonlinear, non-Gaussian regression, the model can be expressed as follows:

x_t = f(x_{t-1}, v_{t-1})
y_t = h(u_t, x_t, n_t)

where u_t ∈ R^{n_u} denotes the input data at time t, x_t ∈ R^{n_x} denotes the states (or parameters) of the model, y_t ∈ R^{n_y} the observations, v_t ∈ R^{n_v} the process noise and n_t ∈ R^{n_n} the measurement noise. The mappings f : R^{n_x} × R^{n_v} → R^{n_x} and h : (R^{n_x} × R^{n_u}) × R^{n_n} → R^{n_y} represent the deterministic process and measurement models.
To complete the specification of the model, the prior distribution (at t = 0) is denoted by p(x_0). Our goal will be to approximate the posterior distribution p(x_{0:t} | y_{1:t}) and one of its marginals, the filtering density p(x_t | y_{1:t}), where y_{1:t} = {y_1, y_2, ..., y_t}. By computing the filtering density recursively, we do not need to keep track of the complete history of the states.

¹The TR and software are available at http://www.cs.berkeley.edu/~jfgf.

3 Particle Filtering

Particle filters allow us to approximate the posterior distribution p(x_{0:t} | y_{1:t}) using a set of N weighted samples (particles) {x_{0:t}^{(i)}, i = 1, ..., N}, which are drawn from an importance proposal distribution q(x_{0:t} | y_{1:t}). These samples are propagated in time as shown in Figure 1. In doing so, it becomes possible to map intractable integration problems (such as computing expectations and marginal distributions) to easy summations. This is done in a rigorous setting that ensures convergence according to the strong law of large numbers:

(1/N) Σ_{i=1}^N f_t(x_{0:t}^{(i)})  →  E_{p(x_{0:t}|y_{1:t})}[f_t(x_{0:t})]  almost surely as N → ∞,

where f_t : R^{n_x(t+1)} → R^{n_{f_t}} is some function of interest. For example, it could be the conditional mean, in which case f_t(x_{0:t}) = x_{0:t}, or the conditional covariance of x_t, with f_t(x_{0:t}) = x_t x_t' − E_{p(x_t|y_{1:t})}[x_t] E_{p(x_t|y_{1:t})}[x_t]'.

Figure 1: In this example, a particle filter starts at time t−1 with an unweighted measure {x_{t−1}^{(i)}, N^{−1}}, which provides an approximation of p(x_{t−1} | y_{1:t−2}). For each particle we compute the importance weights using the information at time t−1. This results in the weighted measure {x_{t−1}^{(i)}, w_{t−1}^{(i)}}, which yields an approximation of p(x_{t−1} | y_{1:t−1}). Subsequently, a resampling step selects only the "fittest" particles to obtain the unweighted measure {x_{t−1}^{(i)}, N^{−1}}, which is still an approximation of p(x_{t−1} | y_{1:t−1}). Finally, the sampling (prediction) step introduces variety, resulting in the measure {x_t^{(i)}, N^{−1}}.
A generic PF algorithm involves the following steps.

Generic PF

1. Sequential importance sampling step
   • For i = 1, ..., N, sample x̂_t^{(i)} ~ q(x_t | x_{0:t−1}^{(i)}, y_{1:t}) and update the trajectories x̂_{0:t}^{(i)} = (x̂_t^{(i)}, x_{0:t−1}^{(i)}).
   • For i = 1, ..., N, evaluate the importance weights up to a normalizing constant:

     w_t^{(i)} = p(x̂_{0:t}^{(i)} | y_{1:t}) / [ q(x̂_t^{(i)} | x_{0:t−1}^{(i)}, y_{1:t}) p(x_{0:t−1}^{(i)} | y_{1:t−1}) ]

   • For i = 1, ..., N, normalize the weights: w̃_t^{(i)} = w_t^{(i)} [Σ_{j=1}^N w_t^{(j)}]^{−1}.

2. Selection step
   • Multiply/suppress samples x̂_{0:t}^{(i)} with high/low importance weights w̃_t^{(i)}, respectively, to obtain N random samples x_{0:t}^{(i)} approximately distributed according to p(x_{0:t}^{(i)} | y_{1:t}).

3. MCMC step
   • Apply a Markov transition kernel with invariant distribution given by p(x_{0:t}^{(i)} | y_{1:t}) to obtain (x_{0:t}^{(i)}).

In the above algorithm, we can restrict ourselves to importance functions of the form q(x_{0:t} | y_{1:t}) = q(x_0) Π_{k=1}^t q(x_k | y_{1:k}, x_{1:k−1}) to obtain a recursive formula for the importance weights:

w_t ∝ p(y_t | y_{1:t−1}, x_{0:t}) p(x_t | x_{t−1}) / q(x_t | y_{1:t}, x_{1:t−1}).

There are infinitely many possible choices for q(x_{0:t} | y_{1:t}), the only condition being that its support must include that of p(x_{0:t} | y_{1:t}). The simplest choice is to just sample from the prior, p(x_t | x_{t−1}), in which case the importance weight is equal to the likelihood, p(y_t | y_{1:t−1}, x_{0:t}). This is the most widely used distribution, since it is simple to compute, but it can be inefficient, since it ignores the most recent evidence, y_t. The selection (resampling) step is used to eliminate the particles having low importance weights and to multiply particles having high importance weights (Gordon et al. 1993). This is done by mapping the weighted measure {x_t^{(i)}, w̃_t^{(i)}} to an unweighted measure {x_t^{(i)}, N^{−1}} that provides an approximation of p(x_t | y_{1:t}). After the selection scheme at time t, we obtain N particles distributed marginally approximately according to p(x_{0:t} | y_{1:t}).
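The steps above, with the transition prior as proposal (so the weight reduces to the likelihood) and multinomial resampling at every step, fit in a few lines. The linear-Gaussian toy model, its parameters, and the particle count in this Python sketch are illustrative assumptions, not the model used in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(ys, n_particles=500, q=0.1, r=0.5):
    """Generic PF with the transition prior as proposal and resampling at
    every step.  Toy model (assumed here for illustration):
        x_t = 0.9 x_{t-1} + v_t,  v_t ~ N(0, q)
        y_t = x_t + n_t,          n_t ~ N(0, r)
    """
    x = rng.normal(0.0, 1.0, n_particles)          # samples from p(x_0)
    estimates = []
    for y in ys:
        # sequential importance sampling: sample from the prior p(x_t | x_{t-1})
        x = 0.9 * x + rng.normal(0.0, np.sqrt(q), n_particles)
        logw = -0.5 * (y - x) ** 2 / r             # weight = likelihood p(y_t | x_t)
        w = np.exp(logw - logw.max())
        w /= w.sum()                               # normalize the weights
        estimates.append(w @ x)                    # posterior-mean estimate of x_t
        x = rng.choice(x, size=n_particles, p=w)   # selection (multinomial resampling)
    return np.array(estimates)

# simulate data from the same toy model, then filter it
T, q, r = 50, 0.1, 0.5
x_true, ys, x = np.zeros(T), np.zeros(T), 0.0
for t in range(T):
    x = 0.9 * x + rng.normal(0.0, np.sqrt(q))
    x_true[t] = x
    ys[t] = x + rng.normal(0.0, np.sqrt(r))
est = bootstrap_pf(ys)
```

On this toy model the filtered posterior means track the hidden states much more closely than the raw observations do; the resampled, unweighted particle set at each step corresponds to the measure {x_t^{(i)}, N^{−1}} described above.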
One can, therefore, apply a Markov kernel (for example, a Metropolis or Gibbs kernel) to each particle and the resulting distribution will still be p(x_{0:t} | y_{1:t}). This step usually allows us to obtain better results and to treat more complex models (de Freitas 1999).

4 The Unscented Particle Filter

As mentioned earlier, using the transition prior as proposal distribution can be inefficient. As illustrated in Figure 2, if we fail to use the latest available information to propose new values for the states, only a few particles might survive. It is therefore of paramount importance to move the particles towards the regions of high likelihood. To achieve this, we propose to use the unscented filter as proposal distribution. This simply requires that we propagate the sufficient statistics of the UKF for each particle. For exact details, please refer to our technical report (van der Merwe et al. 2000).

Figure 2: The UKF proposal distribution allows us to move the samples in the prior to regions of high likelihood. This is of paramount importance if the likelihood happens to lie in one of the tails of the prior distribution, or if it is too narrow (low measurement error).

5 Theoretical Convergence

Let B(R^n) be the space of bounded, Borel measurable functions on R^n. We denote ||f|| ≜ sup_{x ∈ R^n} |f(x)|. The following theorem is a straightforward extension of previous results in (Crisan and Doucet 2000).

Theorem 1. If the importance weight

w_t ∝ p(y_t | x_t) p(x_t | x_{t−1}) / q(x_t | x_{0:t−1}, y_{1:t})    (1)

is upper bounded for any (x_{t−1}, y_t), then, for all t ≥ 0, there exists c_t independent of N, such that for any f_t ∈ B(R^{n_x(t+1)}),

E[ ( (1/N) Σ_{i=1}^N f_t(x_{0:t}^{(i)}) − E_{p(x_{0:t}|y_{1:t})}[f_t(x_{0:t})] )² ]  ≤  c_t ||f_t||² / N.    (2)

The expectation in equation 2 is with respect to the randomness introduced by the particle filtering algorithm.
This convergence result shows that, under very loose assumptions, convergence of the (unscented) particle filter is ensured, and that the convergence rate of the method is independent of the dimension of the state space. The only crucial assumption is that w_t be upper bounded, that is, that the proposal distribution q(x_t | x_{0:t−1}, y_{1:t}) have heavier tails than p(y_t | x_t) p(x_t | x_{t−1}). Considering this theoretical result, it is not surprising that the UKF (which has heavier tails than the EKF) can yield better estimates.

6 Demonstration

For this experiment, a time series is generated by the following process model:

x_{t+1} = 1 + sin(ωπt) + φx_t + v_t,

where v_t is a Gamma(3, 2) random variable modeling the process noise, and ω = 4e−2 and φ = 0.5 are scalar parameters. A non-stationary observation model, which switches functional form at t = 30 (one regime for t ≤ 30, another for t > 30), is used. The observation noise, n_t, is drawn from a zero-mean Gaussian distribution. Given only the noisy observations, y_t, a few different filters were used to estimate the underlying clean state sequence x_t for t = 1, ..., 60. The experiment was repeated 100 times with random re-initialization for each run. All of the particle filters used 200 particles. Table 1 summarizes the performance of the different filters; it shows the means and variances of the mean-square-error (MSE) of the state estimates. Note that MCMC could improve results in other situations.

  Algorithm                                                      MSE mean   MSE var
  Extended Kalman Filter (EKF)                                   0.374      0.015
  Unscented Kalman Filter (UKF)                                  0.280      0.012
  Particle Filter: generic                                       0.424      0.053
  Particle Filter: MCMC move step                                0.417      0.055
  Particle Filter: EKF proposal                                  0.310      0.016
  Particle Filter: EKF proposal and MCMC move step               0.307      0.015
  Particle Filter: UKF proposal ("Unscented Particle Filter")    0.070      0.006
  Particle Filter: UKF proposal and MCMC move step               0.074      0.008

Table 1: Mean and variance of the MSE calculated over 100 independent runs.

Figure 3 compares the estimates generated from a single run of the different particle filters.
The superior performance of the unscented particle filter is clearly evident.

Figure 3: Plot of the state estimates generated by different filters.

Figure 4 shows the estimates of the state covariance generated by a stand-alone EKF and UKF for this problem. Notice how the EKF's estimates are consistently smaller than those generated by the UKF. This property makes the UKF better suited than the EKF for proposal distribution generation within the particle filter framework.

Figure 4: EKF and UKF estimates of state covariance.

7 Conclusions

We proposed a new particle filter that uses unscented filters as proposal distributions. The convergence proof and empirical evidence clearly demonstrate that this algorithm can lead to substantial improvements over other nonlinear filtering algorithms. The algorithm is well suited for engineering applications, where the sensors are very accurate but nonlinear, and for financial time series, where outliers and heavy-tailed distributions play a significant role in the analysis of the data. For further details and experiments, please refer to our report (van der Merwe et al. 2000).

References

Anderson, B. D. and Moore, J. B. (1979). Optimal Filtering, Prentice-Hall, New Jersey.
Crisan, D. and Doucet, A. (2000). Convergence of generalized particle filters, Technical Report CUED/F-INFENG/TR 381, Cambridge University Engineering Department.
de Freitas, J. F. G. (1999). Bayesian Methods for Neural Networks, PhD thesis, Department of Engineering, Cambridge University, Cambridge, UK.
de Freitas, J. F. G., Niranjan, M., Gee, A. H. and Doucet, A. (2000). Sequential Monte Carlo methods to train neural network models, Neural Computation 12(4): 955–993.
Doucet, A., de Freitas, J. F. G. and Gordon, N. J. (eds) (2001). Sequential Monte Carlo Methods in Practice, Springer-Verlag.
Doucet, A., Godsill, S. and Andrieu, C. (2000). On sequential Monte Carlo sampling methods for Bayesian filtering, Statistics and Computing 10(3): 197–208.
Gelman, A., Carlin, J. B., Stern, H. S. and Rubin, D. B. (1995). Bayesian Data Analysis, Chapman and Hall.
Gordon, N. J., Salmond, D. J. and Smith, A. F. M. (1993). Novel approach to nonlinear/non-Gaussian Bayesian state estimation, IEE Proceedings-F 140(2): 107–113.
Julier, S. J. and Uhlmann, J. K. (1997). A new extension of the Kalman filter to nonlinear systems, Proc. of AeroSense: The 11th International Symposium on Aerospace/Defence Sensing, Simulation and Controls, Orlando, Florida, Vol. Multi Sensor Fusion, Tracking and Resource Management II.
Pitt, M. K. and Shephard, N. (1999). Filtering via simulation: Auxiliary particle filters, Journal of the American Statistical Association 94(446): 590–599.
Thrun, S. (2000). Monte Carlo POMDPs, in S. Solla, T. Leen and K.-R. Müller (eds), Advances in Neural Information Processing Systems 12, MIT Press, pp. 1064–1070.
van der Merwe, R., Doucet, A., de Freitas, J. F. G. and Wan, E. (2000). The unscented particle filter, Technical Report CUED/F-INFENG/TR 380, Cambridge University Engineering Department.
2000
Algorithms for Non-negative Matrix Factorization

Daniel D. Lee*
*Bell Laboratories, Lucent Technologies
Murray Hill, NJ 07974

H. Sebastian Seung*†
†Dept. of Brain and Cog. Sci.
Massachusetts Institute of Technology
Cambridge, MA 02138

Abstract

Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence.

1 Introduction

Unsupervised learning algorithms such as principal components analysis and vector quantization can be understood as factorizing a data matrix subject to different constraints. Depending upon the constraints utilized, the resulting factors can be shown to have very different representational properties. Principal components analysis enforces only a weak orthogonality constraint, resulting in a very distributed representation that uses cancellations to generate variability [1, 2]. On the other hand, vector quantization uses a hard winner-take-all constraint that results in clustering the data into mutually exclusive prototypes [3]. We have previously shown that nonnegativity is a useful constraint for matrix factorization that can learn a parts representation of the data [4, 5]. The nonnegative basis vectors that are learned are used in distributed, yet still sparse combinations to generate expressiveness in the reconstructions [6, 7].
In this submission, we analyze in detail two numerical algorithms for learning the optimal nonnegative factors from data.

2 Non-negative matrix factorization

We formally consider algorithms for solving the following problem:

Non-negative matrix factorization (NMF): Given a non-negative matrix V, find non-negative matrix factors W and H such that

V ≈ WH.    (1)

NMF can be applied to the statistical analysis of multivariate data in the following manner. Given a set of multivariate n-dimensional data vectors, the vectors are placed in the columns of an n × m matrix V, where m is the number of examples in the data set. This matrix is then approximately factorized into an n × r matrix W and an r × m matrix H. Usually r is chosen to be smaller than n or m, so that W and H are smaller than the original matrix V. This results in a compressed version of the original data matrix.

What is the significance of the approximation in Eq. (1)? It can be rewritten column by column as v ≈ Wh, where v and h are the corresponding columns of V and H. In other words, each data vector v is approximated by a linear combination of the columns of W, weighted by the components of h. Therefore W can be regarded as containing a basis that is optimized for the linear approximation of the data in V. Since relatively few basis vectors are used to represent many data vectors, good approximation can only be achieved if the basis vectors discover structure that is latent in the data.

The present submission is not about applications of NMF, but focuses instead on the technical aspects of finding non-negative matrix factorizations. Of course, other types of matrix factorizations have been extensively studied in numerical linear algebra, but the nonnegativity constraint makes much of this previous work inapplicable to the present case [8]. Here we discuss two algorithms for NMF based on iterative updates of W and H.
Because these algorithms are easy to implement and their convergence properties are guaranteed, we have found them very useful in practical applications. Other algorithms may possibly be more efficient in overall computation time, but are more difficult to implement and may not generalize to different cost functions. Algorithms similar to ours, where only one of the factors is adapted, have previously been used for the deconvolution of emission tomography and astronomical images [9, 10, 11, 12].

At each iteration of our algorithms, the new value of W or H is found by multiplying the current value by some factor that depends on the quality of the approximation in Eq. (1). We prove that the quality of the approximation improves monotonically with the application of these multiplicative update rules. In practice, this means that repeated iteration of the update rules is guaranteed to converge to a locally optimal matrix factorization.

3 Cost functions

To find an approximate factorization V ≈ WH, we first need to define cost functions that quantify the quality of the approximation. Such a cost function can be constructed using some measure of distance between two non-negative matrices A and B. One useful measure is simply the square of the Euclidean distance between A and B [13],

||A − B||² = Σ_ij (A_ij − B_ij)².    (2)

This is lower bounded by zero, and clearly vanishes if and only if A = B. Another useful measure is

D(A||B) = Σ_ij ( A_ij log(A_ij / B_ij) − A_ij + B_ij ).    (3)

Like the Euclidean distance this is also lower bounded by zero, and vanishes if and only if A = B. But it cannot be called a "distance", because it is not symmetric in A and B, so we will refer to it as the "divergence" of A from B. It reduces to the Kullback-Leibler divergence, or relative entropy, when Σ_ij A_ij = Σ_ij B_ij = 1, so that A and B can be regarded as normalized probability distributions.
We now consider two alternative formulations of NMF as optimization problems:

Problem 1: Minimize ||V − WH||² with respect to W and H, subject to the constraints W, H ≥ 0.

Problem 2: Minimize D(V||WH) with respect to W and H, subject to the constraints W, H ≥ 0.

Although the functions ||V − WH||² and D(V||WH) are convex in W only or H only, they are not convex in both variables together. Therefore it is unrealistic to expect an algorithm to solve Problems 1 and 2 in the sense of finding global minima. However, there are many techniques from numerical optimization that can be applied to find local minima. Gradient descent is perhaps the simplest technique to implement, but convergence can be slow. Other methods such as conjugate gradient have faster convergence, at least in the vicinity of local minima, but are more complicated to implement than gradient descent [8]. Gradient based methods also have the disadvantage of being very sensitive to the choice of step size, which can be very inconvenient for large applications.

4 Multiplicative update rules

We have found that the following "multiplicative update rules" are a good compromise between speed and ease of implementation for solving Problems 1 and 2.

Theorem 1: The Euclidean distance ||V − WH|| is nonincreasing under the update rules

H_{aμ} ← H_{aμ} (WᵀV)_{aμ} / (WᵀWH)_{aμ},    W_{ia} ← W_{ia} (VHᵀ)_{ia} / (WHHᵀ)_{ia}.    (4)

The Euclidean distance is invariant under these updates if and only if W and H are at a stationary point of the distance.

Theorem 2: The divergence D(V||WH) is nonincreasing under the update rules

H_{aμ} ← H_{aμ} (Σ_i W_{ia} V_{iμ}/(WH)_{iμ}) / (Σ_k W_{ka}),    W_{ia} ← W_{ia} (Σ_μ H_{aμ} V_{iμ}/(WH)_{iμ}) / (Σ_ν H_{aν}).    (5)

The divergence is invariant under these updates if and only if W and H are at a stationary point of the divergence.

Proofs of these theorems are given in a later section. For now, we note that each update consists of multiplication by a factor.
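In matrix form, each rule of Theorems 1 and 2 is a single elementwise multiplication. The NumPy sketch below applies both to random data; the problem sizes, the random initialization, and the small eps guarding against division by zero are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 1e-12  # illustrative guard against division by zero

def euclidean_step(V, W, H):
    """One update of Theorem 1: nonincreasing ||V - WH||^2."""
    H = H * (W.T @ V) / (W.T @ W @ H + eps)
    W = W * (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def divergence_step(V, W, H):
    """One update of Theorem 2: nonincreasing D(V || WH)."""
    H = H * (W.T @ (V / (W @ H + eps))) / W.sum(axis=0)[:, None]
    W = W * ((V / (W @ H + eps)) @ H.T) / H.sum(axis=1)[None, :]
    return W, H

def D(A, B):
    """Generalized KL divergence of Eq. (3)."""
    return np.sum(A * np.log(A / B) - A + B)

n, m, r = 8, 12, 3
V = rng.random((n, m)) + 0.1
W0, H0 = rng.random((n, r)) + 0.1, rng.random((r, m)) + 0.1

W, H = W0.copy(), H0.copy()
sq_err = [np.sum((V - W @ H) ** 2)]
for _ in range(50):
    W, H = euclidean_step(V, W, H)
    sq_err.append(np.sum((V - W @ H) ** 2))

W, H = W0.copy(), H0.copy()
div = [D(V, W @ H)]
for _ in range(50):
    W, H = divergence_step(V, W, H)
    div.append(D(V, W @ H))
```

Both cost sequences decrease monotonically, as the theorems guarantee; the eps term only prevents numerical division by zero.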
In particular, it is straightforward to see that this multiplicative factor is unity when V = WH, so that perfect reconstruction is necessarily a fixed point of the update rules.

5 Multiplicative versus additive update rules

It is useful to contrast these multiplicative updates with those arising from gradient descent [14]. In particular, a simple additive update for H that reduces the squared distance can be written as

H_{aμ} ← H_{aμ} + η_{aμ} [ (WᵀV)_{aμ} − (WᵀWH)_{aμ} ].    (6)

If the η_{aμ} are all set equal to some small positive number, this is equivalent to conventional gradient descent. As long as this number is sufficiently small, the update should reduce ||V − WH||. Now if we diagonally rescale the variables and set

η_{aμ} = H_{aμ} / (WᵀWH)_{aμ},    (7)

then we obtain the update rule for H that is given in Theorem 1. Note that this rescaling results in a multiplicative factor with the positive component of the gradient in the denominator and the absolute value of the negative component in the numerator of the factor.

For the divergence, diagonally rescaled gradient descent takes the form

H_{aμ} ← H_{aμ} + η_{aμ} [ Σ_i W_{ia} V_{iμ}/(WH)_{iμ} − Σ_i W_{ia} ].    (8)

Again, if the η_{aμ} are small and positive, this update should reduce D(V||WH). If we now set

η_{aμ} = H_{aμ} / Σ_i W_{ia},    (9)

then we obtain the update rule for H that is given in Theorem 2. This rescaling can also be interpreted as a multiplicative rule with the positive component of the gradient in the denominator and the negative component as the numerator of the multiplicative factor.

Since our choices for η_{aμ} are not small, it may seem that there is no guarantee that such a rescaled gradient descent should cause the cost function to decrease. Surprisingly, this is indeed the case, as shown in the next section.

6 Proofs of convergence

To prove Theorems 1 and 2, we will make use of an auxiliary function similar to that used in the Expectation-Maximization algorithm [15, 16].
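The rescaling claim of Section 5 is easy to check numerically before turning to the proofs: with the step size of Eq. (7), the additive update (6) coincides elementwise with the multiplicative rule of Theorem 1. The matrix shapes and random data in this sketch are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, r = 6, 9, 3
V = rng.random((n, m))
W = rng.random((n, r)) + 0.1
H = rng.random((r, m)) + 0.1

# gradient of (1/2)||V - WH||^2 w.r.t. H is W^T W H - W^T V
grad_pos = W.T @ W @ H   # positive component of the gradient
grad_neg = W.T @ V       # absolute value of the negative component

eta = H / grad_pos                            # Eq. (7): diagonal rescaling
H_additive = H + eta * (grad_neg - grad_pos)  # Eq. (6): additive gradient step
H_multiplicative = H * grad_neg / grad_pos    # Theorem 1 update rule
```

Algebraically, H + (H/(WᵀWH))·((WᵀV) − (WᵀWH)) = H·(WᵀV)/(WᵀWH), so the two updates agree exactly up to floating-point rounding.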
Definition 1. G(h, h') is an auxiliary function for F(h) if the conditions

    G(h, h') ≥ F(h),    G(h, h) = F(h)    (10)

are satisfied.

The auxiliary function is a useful concept because of the following lemma, which is also graphically illustrated in Fig. 1.

Lemma 1. If G is an auxiliary function, then F is nonincreasing under the update

    h^{t+1} = argmin_h G(h, h^t)    (11)

Proof: F(h^{t+1}) ≤ G(h^{t+1}, h^t) ≤ G(h^t, h^t) = F(h^t).

Note that F(h^{t+1}) = F(h^t) only if h^t is a local minimum of G(h, h^t). If the derivatives of F exist and are continuous in a small neighborhood of h^t, this also implies that the derivatives ∇F(h^t) = 0. Thus, by iterating the update in Eq. (11) we obtain a sequence of estimates that converges to a local minimum h_min = argmin_h F(h) of the objective function:

    F(h_min) ≤ ... ≤ F(h^{t+1}) ≤ F(h^t) ≤ ... ≤ F(h^1) ≤ F(h^0)    (12)

We will show that by defining the appropriate auxiliary functions G(h, h^t) for both ||V - WH|| and D(V||WH), the update rules in Theorems 1 and 2 easily follow from Eq. (11).

Figure 1: Minimizing the auxiliary function G(h, h^t) ≥ F(h) guarantees that F(h^{t+1}) ≤ F(h^t) for h^{t+1} = argmin_h G(h, h^t).

Lemma 2. If K(h^t) is the diagonal matrix

    K_{ab}(h^t) = δ_{ab} (W^T W h^t)_a / h^t_a    (13)

then

    G(h, h^t) = F(h^t) + (h - h^t)^T ∇F(h^t) + (1/2)(h - h^t)^T K(h^t)(h - h^t)    (14)

is an auxiliary function for

    F(h) = (1/2) Σ_i (v_i - Σ_a W_{ia} h_a)^2    (15)

Proof: Since G(h, h) = F(h) is obvious, we need only show that G(h, h^t) ≥ F(h). To do this, we compare

    F(h) = F(h^t) + (h - h^t)^T ∇F(h^t) + (1/2)(h - h^t)^T (W^T W)(h - h^t)    (16)

with Eq. (14) to find that G(h, h^t) ≥ F(h) is equivalent to

    0 ≤ (h - h^t)^T [K(h^t) - W^T W] (h - h^t)    (17)

To prove positive semidefiniteness, consider the matrix

    M_{ab}(h^t) = h^t_a (K(h^t) - W^T W)_{ab} h^t_b,    (18)

which is just a rescaling of the components of K - W^T W.
Then K - W^T W is positive semidefinite if and only if M is, and

    ν^T M ν = Σ_{ab} ν_a M_{ab} ν_b    (19)
            = Σ_{ab} [h^t_a (W^T W)_{ab} h^t_b ν_a^2 - ν_a h^t_a (W^T W)_{ab} h^t_b ν_b]    (20)
            = Σ_{ab} (W^T W)_{ab} h^t_a h^t_b [(1/2)ν_a^2 + (1/2)ν_b^2 - ν_a ν_b]    (21)
            = (1/2) Σ_{ab} (W^T W)_{ab} h^t_a h^t_b (ν_a - ν_b)^2    (22)
            ≥ 0    (23)

One can also show that K - W^T W is positive semidefinite by considering the matrix K^{-1/2} W^T W K^{-1/2}. Then K^{1/2} h^t is a positive eigenvector of K^{-1/2} W^T W K^{-1/2} with unity eigenvalue, and application of the Frobenius-Perron theorem shows that Eq. (17) holds.

We can now demonstrate the convergence of Theorem 1:

Proof of Theorem 1: Replacing G(h, h^t) in Eq. (11) by Eq. (14) results in the update rule:

    h^{t+1} = h^t - K(h^t)^{-1} ∇F(h^t)    (24)

Since Eq. (14) is an auxiliary function, F is nonincreasing under this update rule, according to Lemma 1. Writing the components of this equation explicitly, we obtain

    h^{t+1}_a = h^t_a (W^T v)_a / (W^T W h^t)_a.    (25)

By reversing the roles of W and H in Lemmas 1 and 2, F can similarly be shown to be nonincreasing under the update rules for W.

We now consider the following auxiliary function for the divergence cost function:

Lemma 3. Define

    G(h, h^t) = Σ_i (v_i log v_i - v_i) + Σ_{ia} W_{ia} h_a
              - Σ_{ia} v_i [W_{ia} h^t_a / Σ_b W_{ib} h^t_b] (log W_{ia} h_a - log [W_{ia} h^t_a / Σ_b W_{ib} h^t_b])    (26, 27)

This is an auxiliary function for

    F(h) = Σ_i ( v_i log (v_i / Σ_a W_{ia} h_a) - v_i + Σ_a W_{ia} h_a )    (28)

Proof: It is straightforward to verify that G(h, h) = F(h). To show that G(h, h^t) ≥ F(h), we use convexity of the log function to derive the inequality

    -log Σ_a W_{ia} h_a ≤ -Σ_a α_a log (W_{ia} h_a / α_a)    (29)

which holds for all nonnegative α_a that sum to unity.
Setting

    α_a = W_{ia} h^t_a / Σ_b W_{ib} h^t_b    (30)

we obtain

    -log Σ_a W_{ia} h_a ≤ -Σ_a (W_{ia} h^t_a / Σ_b W_{ib} h^t_b) (log W_{ia} h_a - log [W_{ia} h^t_a / Σ_b W_{ib} h^t_b])    (31)

From this inequality it follows that F(h) ≤ G(h, h^t).

Theorem 2 then follows from the application of Lemma 1:

Proof of Theorem 2: The minimum of G(h, h^t) with respect to h is determined by setting the gradient to zero:

    dG(h, h^t)/dh_a = -Σ_i v_i (W_{ia} h^t_a / Σ_b W_{ib} h^t_b) (1/h_a) + Σ_i W_{ia} = 0    (32)

Thus, the update rule of Eq. (11) takes the form

    h^{t+1}_a = (h^t_a / Σ_k W_{ka}) Σ_i [v_i / (Σ_b W_{ib} h^t_b)] W_{ia}.    (33)

Since G is an auxiliary function, F in Eq. (28) is nonincreasing under this update. Rewritten in matrix form, this is equivalent to the update rule in Eq. (5). By reversing the roles of H and W, the update rule for W can similarly be shown to be nonincreasing.

7 Discussion

We have shown that application of the update rules in Eqs. (4) and (5) is guaranteed to find at least locally optimal solutions of Problems 1 and 2, respectively. The convergence proofs rely upon defining an appropriate auxiliary function. We are currently working to generalize these theorems to more complex constraints. The update rules themselves are extremely easy to implement computationally, and will hopefully be utilized by others for a wide variety of applications.

We acknowledge the support of Bell Laboratories. We would also like to thank Carlos Brody, Ken Clarkson, Corinna Cortes, Roland Freund, Linda Kaufman, Yann Le Cun, Sam Roweis, Larry Saul, and Margaret Wright for helpful discussions.

References

[1] Jolliffe, IT (1986). Principal Component Analysis. New York: Springer-Verlag.
[2] Turk, M & Pentland, A (1991). Eigenfaces for recognition. J. Cogn. Neurosci. 3, 71-86.
[3] Gersho, A & Gray, RM (1992). Vector Quantization and Signal Compression. Kluwer Acad. Press.
[4] Lee, DD & Seung, HS (1997). Unsupervised learning by convex and conic coding.
Proceedings of the Conference on Neural Information Processing Systems 9, 515-521.
[5] Lee, DD & Seung, HS (1999). Learning the parts of objects by non-negative matrix factorization. Nature 401, 788-791.
[6] Field, DJ (1994). What is the goal of sensory coding? Neural Comput. 6, 559-601.
[7] Foldiak, P & Young, M (1995). Sparse coding in the primate cortex. The Handbook of Brain Theory and Neural Networks, 895-898. (MIT Press, Cambridge, MA).
[8] Press, WH, Teukolsky, SA, Vetterling, WT & Flannery, BP (1993). Numerical Recipes: The Art of Scientific Computing. (Cambridge University Press, Cambridge, England).
[9] Shepp, LA & Vardi, Y (1982). Maximum likelihood reconstruction for emission tomography. IEEE Trans. MI-2, 113-122.
[10] Richardson, WH (1972). Bayesian-based iterative method of image restoration. J. Opt. Soc. Am. 62, 55-59.
[11] Lucy, LB (1974). An iterative technique for the rectification of observed distributions. Astron. J. 79, 745-754.
[12] Bouman, CA & Sauer, K (1996). A unified approach to statistical tomography using coordinate descent optimization. IEEE Trans. Image Proc. 5, 480-492.
[13] Paatero, P & Tapper, U (1997). Least squares formulation of robust non-negative factor analysis. Chemometr. Intell. Lab. 37, 23-35.
[14] Kivinen, J & Warmuth, M (1997). Additive versus exponentiated gradient updates for linear prediction. Journal of Information and Computation 132, 1-64.
[15] Dempster, AP, Laird, NM & Rubin, DB (1977). Maximum likelihood from incomplete data via the EM algorithm. J. Royal Stat. Soc. 39, 1-38.
[16] Saul, L & Pereira, F (1997). Aggregate and mixed-order Markov models for statistical language processing. In C. Cardie and R. Weischedel (eds), Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, 81-89. ACL Press.
2000
Divisive and Subtractive Mask Effects: Linking Psychophysics and Biophysics

Barbara Zenger
Division of Biology, Caltech 139-74, Pasadena, CA 91125
barbara@klab.caltech.edu

Christof Koch
Computation and Neural Systems, Caltech 139-74, Pasadena, CA 91125
koch@klab.caltech.edu

Abstract

We describe an analogy between psychophysically measured effects in contrast masking and the behavior of a simple integrate-and-fire neuron that receives time-modulated inhibition. In the psychophysical experiments, we tested observers' ability to discriminate contrasts of peripheral Gabor patches in the presence of collinear Gabor flankers. The data reveal a complex interaction pattern that we account for by assuming that flankers provide divisive inhibition to the target unit for low target contrasts, but provide subtractive inhibition to the target unit for higher target contrasts. A similar switch from divisive to subtractive inhibition is observed in an integrate-and-fire unit that receives inhibition modulated in time such that the cell spends part of the time in a high-inhibition state and part of the time in a low-inhibition state. The similarity between the effects suggests that one may cause the other. The biophysical model makes testable predictions for physiological single-cell recordings.

1 Psychophysics

Visual images of Gabor patches are thought to excite a small and specific subset of neurons in the primary visual cortex and beyond. By measuring psychophysically in humans the contrast detection and discrimination thresholds of peripheral Gabor patches, one can estimate the sensitivity of this subset of neurons. Furthermore, spatial interactions between different neuronal populations can be probed by testing the effects of additional Gabor patches (masks) on performance. Such experiments have revealed a highly configuration-specific pattern of excitatory and inhibitory spatial interactions [1, 2].
1.1 Methods

Two vertical Gabor patches with a spatial frequency of 4 cyc/deg were presented at 4 deg eccentricity left and right of fixation, and observers had to report which patch had the higher contrast (spatial 2AFC). In the "flanker condition" (see Fig. 1A), the two targets were each flanked by two collinear Gabor patches of 40% contrast, presented above and below the targets (at a distance of 0.75 deg, i.e., 3 times the spatial period of the Gabor). Observers fixated a central cross, which was visible before and during each trial, and then initiated the trial by pressing the space bar on the computer keyboard. Two circular cues appeared for 180 ms to indicate the locations of the two targets (to minimize spatial uncertainty). A blank stimulus of randomized length (500 ms ± 100 ms) was followed by an 83 ms stimulus presentation. No mask was presented. Observers indicated which target had the higher contrast ("left" or "right") by specified keys. Auditory feedback was provided. Thresholds were determined using a staircase procedure [3]. Whenever the staircase procedure showed a ceiling effect (asking to display contrasts above 100%), the data for this pedestal contrast in this condition were not considered for this observer, even if valid threshold estimates were obtained on other days, because considering only the 'good days' would have introduced a bias. Seven observers with normal or corrected-to-normal vision participated in the experiment. Each condition was repeated at least six times. The experimental procedure is in accordance with Caltech's Committee for the Protection of Human Subjects. Experiments were controlled by an O2 Silicon Graphics workstation, and stimuli were displayed on a raster monitor. Mean luminance L_m was set to 40 cd/m^2. We used color-bit stealing to increase the number of grey levels that can be displayed [4]. A gamma correction ensured linearity of the gray levels.
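A staircase of the transformed up-down family [3] can be sketched as follows. This is a generic two-down/one-up variant run against a simulated observer, not the authors' actual procedure; the step factor, trial count, and Weibull observer in the test are our assumptions:

```python
import numpy as np

def staircase_2afc(p_correct, start=0.5, step=1.25, n_trials=400, seed=0):
    """Two-down/one-up transformed staircase (Levitt 1971): the contrast is
    lowered after two consecutive correct responses and raised after each
    error, so the track hovers near the 70.7%-correct point.  Returns the
    mean of the last few reversal contrasts as the threshold estimate."""
    rng = np.random.default_rng(seed)
    c, streak, last_dir, reversals = start, 0, 0, []
    for _ in range(n_trials):
        if rng.random() < p_correct(c):     # simulated observer's response
            streak += 1
            if streak < 2:
                continue                    # one correct: no level change yet
            streak, direction = 0, -1       # two correct in a row: go down
        else:
            streak, direction = 0, +1       # error: go up
        if last_dir and direction != last_dir:
            reversals.append(c)             # direction change = reversal
        last_dir = direction
        c = c / step if direction < 0 else c * step
    return float(np.mean(reversals[-8:]))
```

Averaging the last reversals is one common way to read out the converged level; the 70.7% convergence point follows from requiring p^2 = 0.5 at equilibrium.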
To remove some of the effects of inter-observer variability from our data analysis, the entire data set of each observer was first normalized by his or her average performance across all conditions, and only then were averages and standard errors computed. The mean standard errors across all conditions and contrast levels are presented as bars in Figs. 1B and 1D.

1.2 Results

In the absence of flankers (circles, Fig. 1B), discrimination thresholds first decrease from absolute detection threshold at 8.7% with increasing pedestal contrast and then increase again. As is common in sensory psychophysics, we assume that the contrast discrimination thresholds can be derived from an underlying sigmoidal contrast-response function r(c) (see Fig. 1C, solid curve), together with the assumption that some fixed response difference Δr = 1 is required for correct discrimination [2]. In other words, for any fixed pedestal contrast c, the discrimination threshold Δc satisfies r(c + Δc) = r(c) + 1. Our underlying assumption is that at the decision stage, the level of noise in the signal is independent of the response magnitude. Neuronal noise, on the other hand, is usually well characterized by a Poisson process, that is, the noise level increases with increasing response. Little evidence exists, however, that this "early" response-dependent noise actually limits performance. It is conceivable that this early noise is relatively small, that the performance-limiting noise is added at a later processing stage, and that this noise is independent of the response magnitude. To describe the response r of the system to a single, well-isolated target as a function of its contrast c, we adopt the function suggested by Foley (1994) [2]:

    r_isolated(c) = a c^p / (c^{p-q} + c_th^{p-q})    (1)

For plausible parameters (c, c_th > 0) this function is proportional to c^p for c << c_th
Figure 1: (A) Sample stimuli without flanks and with flanks. (B) Discrimination thresholds averaged across seven observers for flanked (diamonds) and unflanked (circles) targets. (C) Contrast-response functions used for model prediction in (B). (D) Discrimination performance averaged across four observers for different flank contrasts. Lines in (B) and (D) represent the best model fit.

and is proportional to c^q for c >> c_th, consistent with a modified Weber law [5]. The contrast-response function obtained for the parameters given in the first row of Tab. 1 is shown in Fig. 1C (solid line). The corresponding discrimination thresholds (Fig. 1B; solid line) fit well the psychophysical data (open circles). What happens to the dipper function when the two targets are flanked by Gabor patches of 40% contrast? In the presence of flankers, contrast discrimination thresholds (diamonds, Fig. 1B) first decrease, then increase, then decrease again, and finally increase again, following a W-shaped function. Depending on target contrast, one can distinguish two distinctive flanker effects: for targets of 40% contrast or less, flankers impair discrimination. In the masking literature such suppressive effects are often attributed to a divisive input from the mask to the target; in other words, the flanks seem to reduce the target's gain [2]. For targets of 50% or more (four rightmost data points in Fig. 1B), contrast performance is about the same irrespective of whether flankers are present or not; at these high target contrasts, flankers apparently cease to contribute to the target's gain control.
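As an illustration of this threshold rule, the following sketch (ours) evaluates Eq. (1) and solves r(c + Δc) = r(c) + 1 by bisection. Contrasts are expressed in percent, the parameter values are the Fig. 1B/C fits reported in Table 1 below, and the bisection solver is our own choice:

```python
import numpy as np

def r_isolated(c, a=0.363, c_th=7.14, p=4.47, q=0.704):
    """Foley-type contrast-response function of Eq. (1); c in percent."""
    return a * c**p / (c**(p - q) + c_th**(p - q))

def discrimination_threshold(c, lo=1e-6, hi=100.0, n=80):
    """Smallest dc with r(c + dc) = r(c) + 1, found by bisection."""
    target = r_isolated(c) + 1.0
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if r_isolated(c + mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Sweeping the pedestal contrast c reproduces the dipper shape of Fig. 1B: thresholds first drop below the absolute detection threshold and then rise again at high pedestal contrasts.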
Following this concept, we define two model parameters to describe the effects of the flankers: the first parameter, c_0, determines the maximal target contrast at which gain control is still effective; the second parameter, b, determines the strength of the gain control. Formally written, we obtain:

    r_flanked(c) = r_isolated(c) / b    for c <= c_0  (gain control)
    r_flanked(c) = r_isolated(c) - d    for c >= c_0  (no gain control)    (2)

In the low-contrast range, the contrast-response functions with and without flankers are multiples of each other (factor b); in the high-contrast regime, the two curves are shifted vertically (offset d) with respect to each other (see Fig. 1C). The subtractive constant d is not a free parameter, but is determined by imposing that r be continuous at c = c_0, i.e., r_isolated(c_0)/b = r_isolated(c_0) - d. The parameters that best account for the average data in Figs. 1B and 1D in the least-mean-square sense were estimated using a multidimensional simplex algorithm [6].

Table 1: Best-fitting model parameters in the least-square sense.

                 no flanks                    20% flanks     40% flanks     70% flanks
                 a      c_th    p     q       b     c_0      b     c_0      b     c_0
    Fig. 1B,C    0.363  7.14%   4.47  0.704                  1.86  46.8%
    Fig. 1D      0.395  6.07%   3.78  0.704   1.69  26.4%    1.78  44.9%    2.01  64.3%

Increasing the flanker contrast leads both to an increase in the strength of gain control b and to an increase in the range c_0 in which gain control is effective. The predicted discrimination performance is shown superimposed on the data in Figs. 1B and 1D. As one can see, the model captures the behavior of the data reasonably well, considering that for each combined fit there are only four parameters to fit the unflanked data and two additional parameters for each W curve. Or, put differently, we use but two degrees of freedom to go from the unflanked to the flanked conditions.

2 Biophysics

While the above model explains the data, it remains a puzzle how the switch from divisive to subtractive inhibition is implemented neuronally.
Here, we show that time-modulated inhibition can naturally account for the observed switch, without assuming input-dependent changes in the network.

2.1 Circuit Model

Figure 2: Circuit model used for the simulations.

To simulate the behavior of individual neurons we use a variant of the leaky integrate-and-fire unit (battery E_e = 70 mV, capacitance C = 200 pF, leak conductance g_pass = 10 nS, and firing threshold V_th = 20 mV; see Fig. 2). Excitatory and inhibitory synaptic input are modeled as changes in the conductances g_e and g_i, respectively. Whenever the membrane potential V_m exceeds threshold (V_th), a spike is initiated and the membrane potential V_m is reset to V_rest = 0. No refractory period was assumed. The model was implemented on a PC using the programming language C.

2.2 Simulations

Firing rates for increasing excitation (g_e) at various levels of inhibition (g_i) are shown in Fig. 3A. For low excitatory input the cell never fires, because the input current is counter-balanced by the leakage current, thus preventing the cell from reaching its firing threshold. Once the cell does start firing, firing rates first increase very fast, but then rapidly converge against a linear function, whose slope is independent of g_i. When the inhibitory input is modulated in time and switches between a low-inhibition state (g_i = g_low) and a high-inhibition state (g_i = g_high), the results look different (Fig. 3B, dashed line).

Figure 3: Simulations of the circuit model with constant inhibition (A) or time-modulated inhibition (dashed line in (B)). This simple single-cell model matches the psychophysics remarkably well.
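A minimal re-implementation of this simulation (ours, in Python; the original model was written in C). The integration step, simulation length, 25 ms modulation period, and the assumption that the inhibitory reversal potential equals rest (pure shunting inhibition) are ours:

```python
import numpy as np

def lif_rate(ge, gi_of_t, C=200.0, g_pass=10.0, Ee=70.0, v_th=20.0,
             dt=0.01, T=1000.0):
    """Firing rate (Hz) of the leaky integrate-and-fire unit of Fig. 2.
    Conductances in nS, C in pF, voltages in mV, times in ms; inhibition
    is assumed to reverse at the resting potential (0 mV), so g_i shunts."""
    v, n_spikes = 0.0, 0
    for step in range(int(T / dt)):
        gi = gi_of_t(step * dt)
        v += dt * (ge * (Ee - v) - (g_pass + gi) * v) / C  # forward Euler
        if v >= v_th:            # threshold crossing: spike and reset
            n_spikes += 1
            v = 0.0
    return n_spikes / (T / 1000.0)

# Inhibition toggling every 25 ms between a low (0 nS) and high (20 nS) state
modulated = lambda t: 20.0 if int(t // 25.0) % 2 else 0.0
```

With this setup the modulated-inhibition rate falls between the rates of the two constant-inhibition states, as in Fig. 3B.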
The cell fires part of the time like a lowly inhibited cell, and part of the time like a highly inhibited cell, which explains why the overall firing rate resembles a weighted average of the curves for constant g_i. A comparison of the no-inhibition curve (g_i = 0) and the curve for time-modulated inhibition demonstrates that inhibition switches from a divisive mode to a subtractive mode for increasing g_e. The g_e level at which the switch occurs depends on the level of inhibition in the high-inhibition state (here g_high = 20 nS). The strength of divisive inhibition depends on the percentage of time R that the cell spends in the high-inhibition state; in the example shown as a dashed line in Fig. 3B, the cell spends on average half of the time in the high-inhibition state (thus R = 50%), and remains the rest of the time in the low-inhibition state.

3 Discussion

Both the psychophysical data and the biophysical model show a switch from divisive to subtractive inhibition. Making the connection between psychophysics and biophysics explicit requires that a number of assumptions be made: (1) the excitatory input g_e to the target unit increases with increasing target contrast; (2) increasing the flank contrast leads to an increase of g_high (to account for the fact that the transition from divisive to subtractive inhibition occurs at higher contrasts c_0); (3) the relative time spent in the g_high state (R) increases with flanker contrast (leading to the stronger divisive inhibition b that is reflected in the overall performance decrease with increasing flanker contrast). While these assumptions all seem quite plausible, there remains the question of why one would assume time-modulated inhibition in the first place.
Here we suggest three different mechanisms. First, the time-modulation might reflect inhibitory input from synchronized interneurons [7], i.e., sometimes a large number of them fire at the same time (high-inhibition state), while at other times almost none of the inhibitory cells fire (low-inhibition state). A second plausible implementation (which gives very similar results) assumes that there is only one transition and that the low- and high-inhibition states follow each other sequentially (rather than flipping back and forth as suggested in Fig. 3B). Indeed, cells in primary visual cortex often show a transient response at stimulus onset (which may reflect the low-inhibition state), followed by a smaller level of sustained response (which may reflect the high-inhibition state). In this context, R would simply reflect the time delay between the onset of excitation and inhibition (with a large R representing brief delays before inhibition sets in). Finally, low- and high-inhibition states may reflect different subtypes of neurons which receive different amounts of surround inhibition. In other words, some neurons are strongly inhibited (high-inhibition state) while others are not (low-inhibition state). The ratio of strongly inhibited units (among all units) is given by R. The mean response of all the neurons will show a divisive inhibition in the range where the inhibited neurons are shut off completely, but will show a subtractive inhibition as soon as the inhibited units start firing. To summarize on a more abstract level: any mechanism that averages firing rates of different g_i states, rather than averaging different inhibitory inputs g_i, will lead to a mechanism that shows this switch from divisive to subtractive inhibition. The remaining differences between the psychophysically estimated contrast-response functions (Fig. 1C) and the firing rates of the circuit model (Fig. 3B) seem to reflect mainly oversimplifications in the biophysical model.
Saturation at high g_e values, for instance, could be achieved by assuming refractory periods or other firing-rate adaptation mechanisms. The very steep slope directly after the switch from divisive to subtractive inhibition would disappear if the simple integrate-and-fire unit were replaced by a more realistic unit in which, due to stochastic linearization, the firing rate rises more gradually once the threshold is crossed. In any case, one does not expect a precise match between the two functions, as psychophysical performance presumably relies on a variety of different neurons with different dynamic ranges. Once the model includes many neurons, one would need to define decision strategies. We believe that such a link between a biophysical model and psychophysical data is in principle possible, but have favored here simplicity at the expense of achieving a more quantitative match. Our analysis of the circuit model shows that the psychophysical data can be explained without assuming complex interaction patterns between different neuronal units. While we have no reason to believe that the switching mechanism from divisive to subtractive inhibition will become ineffective when considering a large number of neurons, it does not require a large network. Our model suggests that the critical events happen at the level of individual neurons, and not in the network. Our model makes two clear predictions: first, the contrast-response function of single neurons should show, in the presence of flankers, a switch from divisive to subtractive inhibition (Fig. 1C and Fig. 3B). Physiological studies have measured how stimuli outside the classical receptive field affect the absolute response level of the target unit [8, 9]. Distinguishing subtractive and divisive inhibition, however, requires that, in addition, surround effects on the slope of the contrast-response functions be estimated. Such experiments have been carried out by Sengpiel et al. [10] in cat primary visual cortex.
Their extracellular recordings show that when a target grating is surrounded by a high-contrast annulus, inhibition is indeed well described by a divisive effect on the response. It remains to be seen, however, whether surround annuli whose contrast is lower than the target contrast will act subtractively. The second prediction is that inhibition is bistable, i.e., that there are distinct low- and high-inhibition states. These states may alternate in time within the same neuron, or they may be represented by different subsets of neurons.

Acknowledgments

We would like to thank Jochen Braun, Gary Holt, and Laurent Itti for helpful comments. The research was supported by NSF, NIMH and a Caltech Divisional Scholarship to BZ.

References

[1] U. Polat and D. Sagi. Lateral interactions between spatial channels: Suppression and facilitation revealed by lateral masking experiments. Vision Research, 33:993-999, 1993.
[2] John M. Foley. Human luminance pattern-vision mechanisms: masking experiments require a new model. Journal of the Optical Society of America A, 11:1710-1719, 1994.
[3] H. Levitt. Transformed up-down methods in psychoacoustics. The Journal of the Acoustical Society of America, 49:467-477, 1971.
[4] C.W. Tyler. Colour bit-stealing to enhance the luminance resolution of digital displays on a single pixel basis. Spatial Vision, 10(4):369-377, 1997.
[5] Gordon E. Legge. A power law for contrast discrimination. Vision Research, 21:457-467, 1981.
[6] W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery. Numerical Recipes in C. Cambridge University Press, 1992.
[7] W. Singer and C.M. Gray. Visual feature integration and the temporal correlation hypothesis. Annual Review of Neuroscience, 18:555-586, 1995.
[8] J.B. Levitt and J.S. Lund. Contrast dependence of contextual effects in primate visual cortex. Nature, 387:73-76, 1997.
[9] U. Polat, K. Mizobe, M.W. Pettet, T. Kasamatsu, and A.M. Norcia.
Collinear stimuli regulate visual responses depending on cell's contrast threshold. Nature, 391:580-584, 1998.
[10] F. Sengpiel, R.J. Baddeley, T.C.B. Freeman, R. Harrad, and C. Blakemore. Different mechanisms underlie three inhibitory phenomena in cat area 17. Vision Research, 38(14):2067-2080, 1998.
Factored Semi-Tied Covariance Matrices

M.J.F. Gales
Cambridge University Engineering Department
Trumpington Street, Cambridge CB2 1PZ, United Kingdom
mjfg@eng.cam.ac.uk

Abstract

A new form of covariance modelling for Gaussian mixture models and hidden Markov models is presented. This is an extension to an efficient form of covariance modelling used in speech recognition, semi-tied covariance matrices. In the standard form of semi-tied covariance matrices the covariance matrix is decomposed into a highly shared decorrelating transform and a component-specific diagonal covariance matrix. The use of a factored decorrelating transform is presented in this paper. This factoring effectively increases the number of possible transforms without increasing the number of free parameters. Maximum likelihood estimation schemes for all the model parameters are presented, including the component/transform assignment, transform and component parameters. This new model form is evaluated on a large vocabulary speech recognition task. It is shown that using this factored form of covariance modelling reduces the word error rate.

1 Introduction

A standard problem in machine learning is how to efficiently model correlations in multidimensional data. Solutions should be efficient both in terms of number of model parameters and cost of the likelihood calculation. For speech recognition this is particularly important due to the large number of Gaussian components used, typically in the tens of thousands, and the relatively large dimensionality of the data, typically 30-60. The following generative model has been used in speech recognition^1

    x(τ) = w    (1)
    o(τ) = F [ x(τ) ; v(τ) ]    (2)

where x(τ) is the underlying speech signal, F is the observation transformation matrix, w is generated by a hidden Markov model (HMM) with diagonal covariance matrix Gaussian

^1 This describes the static version of the generative model.
The more general version is described by replacing equation 1 by x(τ) = C x(τ-1) + w.

mixture model (GMM) to model each state^2, and v is usually assumed to be generated by a GMM, which is common to all HMMs. This differs from the static linear Gaussian models presented in [7] in two important ways. First, w is generated by either an HMM or GMM, rather than a simple Gaussian distribution. The second difference is that the "noise" is now restricted to the null space of the signal x(τ). This type of system can be considered to have two streams. The first stream, the n_1 dimensions associated with x(τ), is the set of discriminating, useful, dimensions. The second stream, the n_2 dimensions associated with v, is the set of non-discriminating, nuisance, dimensions. Linear discriminant analysis (LDA) and heteroscedastic LDA (HLDA) [5] are both based on this form of generative model. When the dimensionality of the nuisance dimensions is reduced to zero this generative model becomes equivalent to a semi-tied covariance matrix system [3] with a single, global, semi-tied class. This generative model has a clear advantage during recognition compared to the standard linear Gaussian models [2] in the reduction in the computational cost of the likelihood calculation. The likelihood for component m may be computed as^3

    p(o(τ); μ^(m), Σ^(m)_diag, F) = [l(τ) / |det(F)|] N((F^{-1} o(τ))_[1]; μ^(m), Σ^(m)_diag)    (3)

where μ^(m) is the n_1-dimensional mean and Σ^(m)_diag the diagonal covariance matrix of Gaussian component m. l(τ) is the nuisance-dimension likelihood, which is independent of the component being considered and only needs to be computed once for each time instance. The initial normalisation term is only required during recognition when multiple transforms are used. The dominant cost is a diagonal Gaussian computation for each component, O(n_1) per component.
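The efficient likelihood computation of Eq. (3) can be sketched as follows; this is our illustration for the case with the nuisance dimensions dropped (a pure semi-tied covariance model), and the variable names are ours:

```python
import numpy as np

def semitied_loglik(o, A, mu, var):
    """log p(o) for a component with effective covariance F diag(var) F^T,
    where A = F^{-1} is the shared decorrelating transform.  The transformed
    observation A @ o can be cached and reused by every component sharing A,
    leaving only an O(n) diagonal Gaussian evaluation per component."""
    x = A @ o
    n = x.size
    return (np.log(abs(np.linalg.det(A)))
            - 0.5 * (n * np.log(2.0 * np.pi)
                     + np.sum(np.log(var))
                     + np.sum((x - mu) ** 2 / var)))
```

Equality with the full-covariance Gaussian density follows from the change of variables o = F x, which contributes the |det(A)| Jacobian term.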
In contrast, a scheme such as factor analysis (a covariance modelling scheme from the linear Gaussian model in [7]) has a cost of O(n_1^2) per component (assuming there are n_1 factors). The disadvantage of this form of generative model is that there is no simple expectation-maximisation (EM) [1] scheme for estimating the model parameters. However, a simple iterative scheme is available [3]. For some tasks, such as speech recognition where there are many different "sounds" to be recognised, it is unlikely that a single transform is sufficient to model the data well. To reflect this there has been some work on using multiple feature-spaces [3, 2]. The standard approach for using multiple transforms is to assign each component, m, to a particular transform, F^(r_m). To simplify the description of the new scheme, only modifications to the semi-tied covariance matrix scheme, where the nuisance dimension is zero, are considered. The generative model is modified to be o(τ) = F^(r_m) x(τ), where r_m is the transform class associated with the generating component, m, at time instance τ. The assignment variable, r_m, may either be determined by an "expert", for example using phonetic context information, or it may be assigned in a maximum likelihood (ML) fashion [3].

^2 Although it is not strictly necessary to use diagonal covariance matrices, these currently dominate applications in speech recognition. w could also be generated by a simple GMM.

^3 This paper uses the following convention: capital bold letters refer to matrices, e.g. A, bold letters refer to vectors, e.g. b, and scalars are not bold, e.g. c. When referring to elements of a matrix or vector, subscripts are used: a_i is the ith row of matrix A, a_ij is the element of row i, column j of matrix A, and b_i is element i of vector b. Diagonal matrices are indicated by A_diag. Where multiple streams are used this is indicated, for example, by A_[s]; this is an n_s × n matrix (n is the dimensionality of the feature vector and n_s is the size of stream s). Where subsets of the diagonal matrices are specified the matrices are square, e.g. A_diag[s] is an n_s × n_s square diagonal matrix. A^T is the transpose of the matrix and det(A) is the determinant of the matrix.

Simply increasing the number of transforms increases the number of model parameters to be estimated, hence reducing the robustness of the estimates. There is a corresponding increase in the computational cost during recognition. In the limit there is a single transform per component, the standard full-covariance matrix case. The approach adopted in this paper is to factor the transform into multiple streams. Each component can then use a different transform for each stream. Hence, instead of using an assignment variable, an assignment vector is used. In order to maintain the efficient likelihood computation of equation 3, F^(r)-1, rather than F^(r), must be factored into rows. This is a partitioning of the feature space into a set of observation streams. In common with other factoring schemes this dramatically increases the effective number of transforms from which each component may select, without increasing the number of transform parameters. Though this paper only considers factoring semi-tied covariance matrices, the extension to the "projection" schemes presented in [2] is straightforward. This paper describes how to estimate the set of transforms and determine which subspaces a particular component should use. The next section describes how to assign components to transforms and, given this assignment, how to estimate the appropriate transforms. Some initial experiments on a large vocabulary speech recognition task are presented in the following section.

2 Factored Semi-Tied Covariance Matrices

In order to factor semi-tied covariance matrices, the inverse of the observation transformation for a component is broken into multiple streams.
The feature space of each stream is then determined by selecting from an inventory of possible transforms. Consider the case where there are S streams. The effective full covariance matrix of component m, Σ^(m), may be written as Σ^(m) = F^(z^(m)) Σ^(m)_diag F^(z^(m))T, where the form of F^(z^(m)) is restricted so that⁴

A^(z^(m)) = F^(z^(m))-1 = [ A_[1]^(z_1^(m)) ; ... ; A_[S]^(z_S^(m)) ]    (4)

(the row-blocks of the inverse transform are drawn from the stream inventories) and z^(m) is the S-dimensional assignment vector for component m. The complete set of model parameters, M, consists of the standard model parameters (the component means, variances and weights) and, additionally, the set of transforms {A_[s]^(1), ..., A_[s]^(R_s)} for each stream s (R_s is the number of transforms associated with stream s) and the assignment vector z^(m) for each component. Note that the semi-tied covariance matrix scheme is the case when S = 1. The likelihood is efficiently estimated by storing transformed observations for each stream transform, i.e. A_[s]^(r) o(τ). The model parameters are estimated using ML training on a labelled set of training data O = {o(1), ..., o(T)}. The likelihood of the training data may be written as

p(O|M) = Σ_θ Π_τ ( P(q(τ)|q(τ-1)) Σ_{m∈Θ(τ)} w^(m) p(o(τ); μ^(m), Σ^(m)_diag, A^(z^(m))) )    (5)

where θ ranges over the set of all valid state sequences according to the transcription for the data, q(τ) is the state at time τ of the current path, Θ(τ) is the set of Gaussian components belonging to state q(τ), and w^(m) is the prior of component m. Directly optimising equation 5 is a very large optimisation task, as there are typically millions of model parameters. Alternatively, as is common with standard HMM training, an EM-based approach is used. The posterior probability of a particular component, m, generating the observation at a given time instance is denoted γ_m(τ). This may simply be found using the forward-backward algorithm [6] and the old set of model parameters M. The new set of model parameters will be denoted M̂.

⁴A similar factorisation has also been proposed in [4].
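As a sketch of equation 4 (illustrative code, not from the paper), the component's inverse transform is assembled by stacking one row-block per stream according to the assignment vector, and the effective full covariance then follows as F Σ_diag F^T:

```python
import numpy as np

def effective_covariance(stream_transforms, z, sigma_diag):
    """Effective full covariance of one component under the factored form.

    stream_transforms : list over streams; stream_transforms[s][r] is the
                        (n_s, n) row-block of transform r for stream s
    z                 : assignment vector, z[s] = transform index for stream s
    sigma_diag        : (n,) component-specific diagonal variances
    """
    # A plays the role of F^{-1}: stack the selected row-block of each stream.
    A = np.vstack([stream_transforms[s][z[s]] for s in range(len(z))])
    F = np.linalg.inv(A)
    return F @ np.diag(sigma_diag) @ F.T   # Sigma = F Sigma_diag F^T
```

Because each stream selects independently, R_1 × ... × R_S effective transforms are available while only R_1 + ... + R_S row-blocks are stored, which is the parameter saving the text describes.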
The estimation of the component priors and HMM transition matrices is performed in the standard fashion [6]. Directly optimising the auxiliary function for the model parameters is computationally expensive [3] and does not allow the embedding of the assignment process. Instead a simple iterative optimisation scheme is used as follows:

1. Estimate the within-class covariance matrix for each Gaussian component in the system, W^(m), using the values of γ_m(τ). Initialise the set of assignment vectors, {z} = {z^(1), ..., z^(M)}, and the set of transforms for each stream, {A} = {A_[1]^(1), ..., A_[1]^(R_1), ..., A_[S]^(1), ..., A_[S]^(R_S)}.

2. Using the current estimates of the transforms and assignment vectors, obtain the ML estimate of the set of component-specific diagonal covariance matrices, incorporating the appropriate parameter tying as required. This set of parameters will be denoted {Σ̂} = {Σ̂_diag^(1), ..., Σ̂_diag^(M)}.

3. Estimate the new set of transforms, {Â}, using the current set of component covariance matrices {Σ̂} and assignment vectors {z}. The new auxiliary function at this stage will be written as Q(M, M̂; {Σ̂}, {z}).

4. Update the set of assignment variables for each component, {ẑ}, given the current set of model transforms, {Â}.

5. Goto (2) until convergence, or an appropriate stopping criterion is satisfied; then update {Σ̂} and the component means using the latest transforms and assignment variables.

There are three distinct optimisation problems within this task. First, the ML estimate of the set of component-specific diagonal covariance matrices is required. Second, the new set of transforms must be estimated. Finally, the new set of assignment vectors is required. The ML estimation of the component-specific variances (and means) under a transformation is a standard problem, e.g. for the semi-tied case see [3], and is not described further. The ML estimation of the transforms and assignment variables is described below.
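For the single-stream case (S = 1), step 3 has the closed-form row update quoted later in footnote 5. The sketch below is our own illustration of that coordinate update (K_i stands in for the accumulated K statistics for row i and beta for the total occupancy); each row update maximises the auxiliary function with respect to that row, so a full pass can never decrease it.

```python
import numpy as np

def semitied_row_update(A, K_list, beta):
    """One pass of the closed-form row update for a single-stream (S = 1)
    semi-tied transform:
        a_i = c_i K_i^{-1} * sqrt(beta / (c_i K_i^{-1} c_i^T))
    where c_i is the i-th row of cofactors of A (recomputed as rows change)."""
    n = A.shape[0]
    for i in range(n):
        # Row of cofactors: c_i = det(A) * (A^{-1})^T row i.
        c = np.linalg.det(A) * np.linalg.inv(A).T[i]
        cK = c @ np.linalg.inv(K_list[i])
        A[i] = cK * np.sqrt(beta / (cK @ c))
    return A
```

A simple sanity check is that the auxiliary function Q(A) = 2β log|det A| − Σ_i a_i K_i a_i^T is non-decreasing over passes, which follows from each row update being an exact coordinate maximisation.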
The transforms are estimated in an iterative fashion. The proposed scheme is derived by modifying the standard semi-tied covariance optimisation equation in [3]. A row-by-row optimisation is used. Consider row i of stream p of transform r, a_[p]i^(r); the auxiliary function may be written as (ignoring constant scalings and elements independent of a_[p]i^(r))

Q(M, M̂; {Σ̂}, {z}) = Σ_m β^(m) log( ( c_[p]i^(z^(m)) a_[p]i^(z_p^(m))T )^2 ) − Σ_{s,r,j} a_[s]j^(r) K^(srj) a_[s]j^(r)T    (6)

with

K^(srj) = Σ_{m : z_s^(m) = r} ( Σ_τ γ_m(τ) / σ_diag[s]j^(m)2 ) W^(m)

and c_[p]i^(z^(m)) is the cofactor of row i of stream p of transform A^(z^(m)). The gradient, f_[p]i^(r), obtained by differentiating the auxiliary function with respect to a_[p]i^(r), is given by⁵

f_[p]i^(r) = Σ_{m : z_p^(m) = r} 2 β^(m) c_[p]i^(z^(m)) / ( c_[p]i^(z^(m)) a_[p]i^(r)T ) − 2 a_[p]i^(r) K^(pri)    (8)

The main cost for computing the gradient is calculating the cofactors for each component. Having computed the gradient, the Hessian may also be simply calculated as

H_[p]i^(r) = − Σ_{m : z_p^(m) = r} 2 β^(m) c_[p]i^(z^(m))T c_[p]i^(z^(m)) / ( c_[p]i^(z^(m)) a_[p]i^(r)T )^2 − 2 K^(pri)    (9)

The Hessian is guaranteed to be negative definite, so the Newton direction must head towards a maximum. At the (t+1)th iteration

a_[p]i^(r)(t+1) = a_[p]i^(r)(t) − f_[p]i^(r) H_[p]i^(r)-1    (10)

where the gradient and Hessian are based on the tth parameter estimates. In practice this estimation scheme was highly stable. The assignment for stream s of component m is found using a greedy search technique based on ML estimation. Stream s of component m is assigned using

z_s^(m) = argmax_{r ∈ {1, ..., R_s}} { |det( A^(u^(srm)) )|^2 / |det( diag( A_[s]^(r) W^(m) A_[s]^(r)T ) )| }    (11)

where the hypothesised assignment vector for factored stream s, u^(srm), is given by

u_j^(srm) = r if j = s, and z_j^(m) otherwise.    (12)

⁵When the standard semi-tied system is used (i.e. S = 1) the estimation of row i has the closed-form solution

a_[1]i^(r) = c_[1]i^(r) K^(1ri)-1 sqrt( ( Σ_{m : z_1^(m) = r} β^(m) ) / ( c_[1]i^(r) K^(1ri)-1 c_[1]i^(r)T ) )    (7)

As the assignment is dependent on the cofactors, which are themselves dependent on the other stream assignments for that component, an iterative scheme is required. In practice this was found to converge rapidly.

3 Results and Discussion

An initial investigation of the use of factored semi-tied covariance matrices was carried out on a large-vocabulary speaker-independent continuous-speech recognition task. The recognition experiments were performed on the 1994 ARPA Hub 1 data (the H1 task), an unlimited vocabulary task. The results were averaged over the development and evaluation data. Note that no tuning on the "development" data was performed. The baseline system used for the recognition task was a gender-independent cross-word-triphone mixture-Gaussian tied-state HMM system. For details of the system see [8]. The total number of phones (counting silence as a separate phone) was 46, from which 6399 distinct context states were formed. The speech was parameterised into a 39-dimensional feature vector. The set of baseline experiments with semi-tied covariance matrices (S = 1) used "expert" knowledge to determine the transform classes. Two sets were used. The first was based on phone-level transforms, where all components of all states from the same phone shared the same class (phone classes). The second used an individual transform per state (state classes). In addition a global transform (global class) and a full-covariance matrix system (comp class) were tested. Two systems were examined: a four-Gaussian-components-per-state system and a twelve-Gaussian-component system. The twelve-component system is the standard system described in [8]. In both cases a diagonal covariance matrix system (labelled none) was generated in the standard HTK fashion [9].
These systems were then used to generate the initial alignments to build the semi-tied systems. An additional iteration of Baum-Welch estimation was then performed. Three forms of assignment training were compared: the previously described expert system and two ML-based schemes, standard and factored. The standard scheme used a single stream (S = 1), which is similar to the scheme described in [3]. The factored scheme used the new approach described in this paper with a separate stream for each of the elements of the feature vector (S = 39).

Table 1: System performance (word error rate, %) on the 1994 ARPA H1 task

Classes   Assignment Scheme   4-component   12-component
none      -                   10.34         9.71
global    -                   10.04         8.84
phone     expert              9.98          8.86
state     expert              9.20          8.87
comp      -                   9.22          -
phone     standard            9.73          8.62
phone     factored            9.48          8.42

The results of the baseline semi-tied covariance matrix systems are shown in Table 1. For the four-component system the full-covariance matrix system achieved approximately the same performance as that of the expert state semi-tied system. Both systems significantly (at the 95% level) outperformed the standard 12-component system (9.71%). The expert phone system shows around a 9% degradation in performance compared to the state system, but used less than a hundredth of the number of transforms (46 versus 6399). Using the standard ML assignment scheme with initial phone classes, S = 1, reduced the error rate of the phone system by around 3% over the expert system. The factored scheme, S = 39, achieved further reductions in error rate. A 5% reduction in word error rate was achieved over the expert system, which is significant at the 95% level. Table 1 also shows the performance of the twelve-component system. The use of a global semi-tied transform significantly reduced the error rate by around 9% relative. Increasing the number of transforms using the expert assignment showed no reduction in error rate.
Again, using the phone-level system and training the component-transform assignments, with either the standard or the factored scheme, reduced the word error rate. Using the factored semi-tied transforms (S = 39) significantly reduced the error rate, by around 5%, compared to the expert systems.

4 Conclusions

This paper has presented a new form of semi-tied covariance matrix, the factored semi-tied covariance matrix. The theory for estimating these transforms has been developed and implemented on a large vocabulary speech recognition task. On this task the use of these factored transforms was found to decrease the word error rate by around 5% over using a single transform, or multiple transforms, where the assignments are expertly determined. The improvement was significant at the 95% level. In future work the problems of determining the required number of transforms for each of the streams and how to determine the appropriate dimensions will be investigated.

References

[1] A P Dempster, N M Laird, and D B Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39:1-38, 1977.
[2] M J F Gales. Maximum likelihood multiple projection schemes for hidden Markov models. Technical Report CUED/F-INFENG/TR365, Cambridge University, 1999. Available via anonymous ftp from: svr-ftp.eng.cam.ac.uk.
[3] M J F Gales. Semi-tied covariance matrices for hidden Markov models. IEEE Transactions on Speech and Audio Processing, 7:272-281, 1999.
[4] N K Goel and R Gopinath. Multiple linear transforms. In Proceedings ICASSP, 2001. To appear.
[5] N Kumar. Investigation of Silicon Auditory Models and Generalization of Linear Discriminant Analysis for Improved Speech Recognition. PhD thesis, Johns Hopkins University, 1997.
[6] L R Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77, February 1989.
[7] S Roweis and Z Ghahramani. A unifying review of linear Gaussian models.
Neural Computation, 11:305-345, 1999.
[8] P C Woodland, J J Odell, V Valtchev, and S J Young. The development of the 1994 HTK large vocabulary speech recognition system. In Proceedings ARPA Workshop on Spoken Language Systems Technology, pages 104-109, 1995.
[9] S J Young, J Jansen, J Odell, D Ollason, and P Woodland. The HTK Book (for HTK Version 2.0). Cambridge University, 1996.
Noise suppression based on neurophysiologically-motivated SNR estimation for robust speech recognition

Jürgen Tchorz, Medical Physics Group, Oldenburg University, 26111 Oldenburg, Germany, tch@medi.physik.uni-oldenburg.de
Michael Kleinschmidt, Medical Physics Group, Oldenburg University, 26111 Oldenburg, Germany
Birger Kollmeier, Medical Physics Group, Oldenburg University, 26111 Oldenburg, Germany

Abstract

A novel noise suppression scheme for speech signals is proposed which is based on a neurophysiologically-motivated estimation of the local signal-to-noise ratio (SNR) in different frequency channels. For SNR estimation, the input signal is transformed into so-called Amplitude Modulation Spectrograms (AMS), which represent both spectral and temporal characteristics of the respective analysis frame, and which imitate the representation of modulation frequencies in higher stages of the mammalian auditory system. A neural network is used to analyse AMS patterns generated from noisy speech and estimates the local SNR. Noise suppression is achieved by attenuating frequency channels according to their SNR. The noise suppression algorithm is evaluated in speaker-independent digit recognition experiments and compared to noise suppression by Spectral Subtraction.

1 Introduction

One of the major problems in automatic speech recognition (ASR) systems is their lack of robustness in noise, which severely degrades their usefulness in many practical applications. Several proposals have been made to increase the robustness of ASR systems, e.g. by model compensation or more noise-robust feature extraction [1, 2]. Another method to increase robustness of ASR systems is to suppress the background noise before feature extraction. Classical approaches for single-channel noise suppression are Spectral Subtraction [3] and related schemes, e.g. [4], where the noise spectrum is usually measured in detected speech pauses and subtracted from the signal.
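As a minimal illustration of the classical idea (our own sketch, not the exact estimator of [3], which additionally includes residual noise reduction): a noise magnitude estimate, obtained during detected speech pauses, is subtracted from each frame's magnitude spectrum, and the result is floored to avoid negative magnitudes.

```python
import numpy as np

def spectral_subtraction(mag, noise_mag, floor=0.01):
    """Magnitude-domain spectral subtraction sketch.

    mag       : (bins,) magnitude spectrum of the current frame
    noise_mag : (bins,) noise magnitude estimate from detected pauses
    floor     : fraction of the original magnitude kept as a spectral floor
    """
    return np.maximum(mag - noise_mag, floor * mag)
```

The floor parameter is one common way to limit the "musical tone" artifacts that plain subtraction produces; the value here is illustrative.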
In these approaches, stationarity of the noise has to be assumed while speech is active. Furthermore, portions detected as speech pauses must not contain any speech in order to allow for correct noise measurement. At the same time, all actual speech pauses should be detected for a fast update of the noise measurement. In reality, however, these partially conflicting requirements are often not met. The noise suppression algorithm outlined in this work directly estimates the local SNR in a range of frequency channels even if speech and noise are present at the same time, i.e., no explicit detection of speech pauses and no assumptions on noise stationarity during speech activity are necessary. For SNR estimation, the input signal is transformed into spectro-temporal input features, which are neurophysiologically motivated: experiments on amplitude modulation processing in higher stages of the auditory system in mammals show that modulations are represented in "periodotopical" gradients, which are almost orthogonal to the tonotopical organization of center frequencies [5]. Thus, both spectral and temporal information is represented in two-dimensional maps. These findings were applied to signal processing in a binaural noise suppression system [6] with the introduction of so-called Amplitude Modulation Spectrograms (AMS), which contain information on both center frequencies and modulation frequencies. In the present study, the different representations of speech and noise in AMS patterns are detected by a neural network, which estimates the local SNR in each frequency channel. For noise suppression, the frequency bands are attenuated according to the estimated local SNR in the different frequency channels. The proposed noise suppression scheme is evaluated in isolated-digit recognition experiments. As recognizer, a combination of an auditory-based front end [2] and a locally-recurrent neural network [7] is used.
This combination was found to allow for more robust isolated-digit recognition rates, compared to a standard recognizer with mel-cepstral features and HMM modeling [8, 9]. Thus, the recognition experiments in this study were conducted with this particular combination to evaluate whether a further increase of robustness can be achieved with additional noise suppression.

2 The recognition system

2.1 Noise suppression

Figure 1 shows the processing steps which are performed for noise suppression. To generate AMS patterns which are used for SNR estimation, the input signal (16 kHz sampling rate) is short-term level adjusted, i.e., each 32 ms segment which is later transformed into an AMS pattern is scaled to the same root-mean-square value. The level-adjusted signal is then subdivided into overlapping segments of 4.0 ms duration with a progression of 0.25 ms for each new segment. Each segment is multiplied by a Hanning window and padded with zeros to obtain a frame of 128 samples, which is transformed with an FFT into a complex spectrum, with a spectral resolution of 125 Hz. The resulting 64 complex samples are considered as a function of time, i.e., as a band-pass filtered complex time signal. Their respective envelopes are extracted by squaring. This envelope signal is again segmented into overlapping segments of 128 samples (32 ms) with an overlap of 64 samples. Each segment is multiplied with a Hanning window and padded with zeros to obtain a frame of 256 samples. A further FFT is computed and supplies a modulation spectrum in each frequency channel, with a modulation frequency resolution of 15.6 Hz. By an appropriate summation of neighbouring FFT bins the frequency axis is transformed to a Bark scale with 15 channels, with center frequencies from 100-7300 Hz.

Figure 1: Processing stages of AMS-based noise suppression (level normalization, band-pass time signals via FFT, envelope analysis, modulation spectrogram, rescaling and log-amplitude output).

The modulation frequency spectrum is scaled logarithmically by appropriate summation, which is motivated by psychoacoustical findings about the shape of auditory modulation filters [10]. The modulation frequency spectrum is restricted to the range between 50-400 Hz and has a resolution of 15 channels. Thus, the fundamental frequency of typical voiced speech is represented in the modulation spectrum. The AMS representation is restricted to a 15 × 15 pattern to limit the amount of training data which is necessary to train the fully connected perceptron. In a last processing step, the amplitude range is log-compressed. Examples of AMS patterns can be seen in Fig. 2. The AMS pattern on the left side was generated from a voiced speech portion. The periodicity at the fundamental frequency (approx. 110 Hz) is represented in each center frequency band. The AMS pattern on the right side was generated from speech-simulating noise. The typical spectral tilt can be seen, but there is no structure across modulation frequencies. For classifying AMS patterns and estimating the narrow-band SNR of each AMS pattern, a feed-forward neural network is employed. The net consists of 225 input neurons (15 × 15, the AMS resolution of center frequencies and modulation frequencies, respectively), a hidden layer with 160 neurons, and an output layer with 15 neurons. The activity of each output neuron indicates the SNR in one of the 15 center frequency channels. For training, the narrow-band SNRs in 15 channels were measured for each AMS analysis frame of the training material prior to adding speech and noise. The neural network was trained with AMS patterns generated from 72 min of noisy speech from 400 talkers and 41 natural noise types, using the momentum backpropagation algorithm. After training, AMS patterns generated from "unknown" sound material are presented to the network.
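The two-stage analysis above can be sketched as follows (an illustrative simplification: the level normalization, Bark-scale grouping of center frequencies, log-spaced modulation channels, and log compression are omitted, so this produces the raw two-stage spectrogram rather than the final 15 × 15 pattern):

```python
import numpy as np

def ams_pattern(x, fs=16000):
    """Sketch of the two-stage AMS analysis described in the text."""
    # Stage 1: 4 ms Hanning-windowed segments every 0.25 ms, zero-padded to
    # 128 samples -> band-pass (FFT) time signals with 125 Hz resolution.
    seg_len, hop = int(0.004 * fs), int(0.00025 * fs)
    win = np.hanning(seg_len)
    frames = [x[i:i + seg_len] * win for i in range(0, len(x) - seg_len, hop)]
    spec = np.fft.rfft(np.pad(frames, ((0, 0), (0, 128 - seg_len))), axis=1)
    env = np.abs(spec) ** 2            # squared envelope per frequency channel
    # Stage 2: FFT of 128 envelope samples (32 ms) in each channel, padded to
    # 256 samples -> modulation spectrum with 15.6 Hz resolution.
    n_env = 128
    mod = np.abs(np.fft.rfft(env[:n_env] * np.hanning(n_env)[:, None],
                             n=256, axis=0))
    return mod                          # (modulation_freq, center_freq)
```

For a harmonic input, the periodicity at the fundamental frequency shows up as energy at that modulation frequency across the center-frequency channels, which is the cue described in the text.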
The 15 output neuron activities that appear for each pattern serve as SNR estimates for the respective frequency channels. In a detailed study on AMS-based broad-band SNR estimation [11] it was shown that harmonicity, which is well represented in AMS patterns, is the most important cue for the neural network to distinguish between speech and noise. However, harmonicity is not the only cue, as the algorithm allows for reliable discrimination between unvoiced speech and noise.

Figure 2: AMS patterns generated from a voiced speech segment (left), and from speech-simulating noise (right). Each AMS pattern represents a 32 ms portion of the input signal. Bright and dark areas indicate high and low energies, respectively.

The accuracy of SNR estimation, in terms of mean deviation between the actual and the estimated SNR in each frame, for each frequency channel, was determined with "unknown" test data (36 min of noisy speech). The average deviation across all frequency channels was 5.4 dB, with a decrease of accuracy towards higher frequency channels. Sub-band SNR estimates are utilized for noise suppression by attenuating frequency channels according to their local SNR. The gain function which was applied is given by G_k = (SNR_k / (SNR_k + 1))^x, where k denotes the frequency channel, SNR_k the signal-to-noise ratio on a linear scale, and x is an exponent which controls the strength of the attenuation, and which was set to 1.5 for the experiments described below. Noise suppression based on AMS-derived SNR estimation is performed in the FFT domain. The input signal is segmented into overlapping frames with a window length of 32 ms, and a shift of 16 ms is applied, i.e., each window corresponds to one AMS analysis frame. The FFT is computed in every window. The magnitude in each frequency bin is multiplied by the corresponding gain computed from the AMS-based SNR estimation.
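In code, the gain is a one-liner (the function name is ours, and the dB-to-linear conversion is an assumed convenience, since the text specifies the SNR on a linear scale):

```python
import numpy as np

def channel_gain(snr_db, x=1.5):
    """Per-channel attenuation from the estimated local SNR, as given in the
    text: G_k = (SNR_k / (SNR_k + 1))^x with SNR_k on a linear scale."""
    snr = 10.0 ** (np.asarray(snr_db) / 10.0)   # dB -> linear power ratio
    return (snr / (snr + 1.0)) ** x
```

The gain approaches 1 for channels with high estimated SNR (speech dominated) and 0 for channels with low SNR (noise dominated); the exponent x sharpens that transition.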
The gain in frequency bins which are not covered by the center frequencies from the SNR estimation is linearly interpolated from neighboring estimation frequencies. The phase of the input signal is unchanged and applied to the attenuated magnitude spectrum. An inverse FFT is computed, and the enhanced speech is attained by overlapping and adding.

2.2 Auditory-based ASR feature extraction

The front end which is used in the recognition system is based on a quantitative model of the "effective" peripheral auditory processing. The model simulates both spectral and temporal properties of sound processing in the auditory system which were found in psychoacoustical and physiological experiments. The model was originally developed for describing human performance in typical psychoacoustical spectral and temporal masking experiments, e.g., predicting the thresholds in backward, simultaneous, and forward-masking experiments [12, 13]. The main processing stages of the auditory model are gammatone filtering, envelope extraction in each frequency channel, adaptive amplitude compression, and low-pass filtering of the envelope in each band. The adaptive compression stage compresses steady-state portions of the input signal logarithmically. Changes like onsets or offsets, in contrast, are transformed linearly. A detailed description of the auditory-based front end is given in [2].

2.3 Neural network recognizer

For scoring of the input features, a locally recurrent neural network (LRNN) is employed with three layers of neurons (150 input, 289 hidden, and 10 output neurons). Hidden layer neurons have recurrent connections to their 24 nearest neighbours. The input matrix consists of 5 times the auditory model feature vector with 30 elements, glued together in order to allow the network to memorize a time sequence of input matrices. The network was trained using the backpropagation-through-time algorithm with 200 iterations (see [7] for a detailed description of the recognizer).
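Returning to the resynthesis step of Section 2.1, the analysis-modification-synthesis loop can be sketched as follows (illustrative: a Hann analysis window with 50% overlap-add, 32 ms/16 ms frames at 16 kHz mapped to 512/256 samples, and the per-frame gains assumed already interpolated to the FFT bins):

```python
import numpy as np

def suppress(x, gain_per_frame, win_len=512, hop=256):
    """FFT-domain attenuation with overlap-add resynthesis.

    x              : input signal
    gain_per_frame : gain_per_frame[f] is the (scalar or per-bin) gain
                     applied to frame f
    """
    win = np.hanning(win_len)
    y = np.zeros(len(x))
    for f, start in enumerate(range(0, len(x) - win_len + 1, hop)):
        spec = np.fft.rfft(x[start:start + win_len] * win)
        # Multiplying the complex spectrum by a real gain attenuates the
        # magnitude while leaving the phase unchanged, as in the text.
        spec *= gain_per_frame[f]
        y[start:start + win_len] += np.fft.irfft(spec, n=win_len)
    return y
```

With all gains set to 1 the signal is reconstructed (up to the window-sum ripple and frame edges), which is a useful sanity check that the attenuation path itself introduces no distortion.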
3 Recognition experiments

3.1 Setup

The speech material for training of the word models and scoring was taken from the ZIFKOM database of Deutsche Telekom AG. Each German digit was spoken once by 200 different speakers (100 males, 100 females). The recording sessions took place in soundproof booths or quiet offices. The speech material was sampled at 16 kHz. Three different types of noise were added to the speech material at different signal-to-noise ratios before feature extraction: a) white Gaussian noise, b) speech-simulating noise, which is characterized by a long-term speech spectrum and amplitude modulations which reflect an uncorrelated superposition of 6 speakers, and c) background noise recorded in a printing room, which strongly fluctuates in both amplitude and spectral shape. The background noises were added to the utterances at signal-to-noise ratios ranging from 20 to -10 dB. The word models were trained with features from 100 undisturbed and unprocessed utterances of each digit. Features for testing were calculated from another 100 utterances of each digit which were distorted by additive noise before preprocessing. The recognition rates were measured without noise suppression and with noise suppression as described in Section 2.1. For comparison, the recognition rates were measured with noise suppression based on Spectral Subtraction including residual noise reduction [3] before feature extraction. Two methods for noise estimation were applied. In the first method, speech pauses in the noisy signals were detected using Voice Activity Detection (VAD) [14]. The noise measure was updated in speech pauses using a low-pass filter with a time constant of 40 ms. In the second method, the noise spectrum was measured in speech pauses which were detected from the clean utterances using an energy criterion (thus, perfect speech pause information is provided, which is not available in real applications).
3.2 Results

The speaker-independent isolated-digit recognition rates which were obtained in the experiments are plotted in Fig. 3 for three types of background noise as a function of the SNR. In all tested noises, noise suppression with the proposed algorithm increases the recognition rate in comparison with the unprocessed data and with Spectral Subtraction with VAD-based noise measurement. Spectral Subtraction with perfect speech pause detection allows for higher recognition rates than the AMS-based approach in stationary white noise. Here, the noise measure for Spectral Subtraction is very accurate during speech activity and allows for effective noise removal. AMS-based noise suppression estimates the SNR in every analysis frame, and no a priori information on speech-free segments is provided to the algorithm.

Figure 3: Speaker-independent, isolated digit recognition rates for three types of noise (white noise, speech-simulating noise, printing room noise) as a function of the SNR without noise suppression (noalgo), with AMS-based noise suppression, Spectral Subtraction with VAD-based noise measurement, and Spectral Subtraction with perfect speech pause information.

In speech-simulating noise, which fluctuates in level but not in spectral shape, Spectral Subtraction with perfect speech pause detection works slightly better than AMS-based noise suppression.
In printing room noise, which fluctuates in both level and spectrum, the AMS-based approach yields the best results. Here, Spectral Subtraction even degrades the recognition rates at some SNRs, compared to the unprocessed data. The noise measure from VAD-based or perfect speech pause detection cannot be updated while speech is active. Thus, an incorrect spectrum is subtracted, which leads to artifacts and degraded recognition performance. In clean speech, recognition rates of 99.5% for unprocessed speech, 99.1% after Spectral Subtraction, and 98.9% after AMS-based noise suppression were obtained.

4 Discussion

The proposed neurophysiologically-motivated noise suppression scheme was shown to significantly improve digit recognition in noise in comparison with unprocessed data and with Spectral Subtraction using VAD-based noise measures. A perfect speech pause detection (which is not available yet in real systems) allows for a reliable estimation of the noise floor in stationary noise. In non-stationary noise, however, the AMS pattern-based signal classification and noise suppression is advantageous, as it does not depend on speech pause detection and no assumption is necessary about the noise being stationary while speech is active. Spectral Subtraction as described in [3] produces musical tones, i.e. fast fluctuating spectral peaks. The neurophysiologically-based noise suppression scheme outlined in this paper does not produce such fast fluctuating artifacts. In general, a good quality of speech is maintained. The choice of the attenuation exponent x has only little impact on the quality of speech at favourable SNRs. With decreasing SNR, however, there is a tradeoff between the amount of noise suppression and distortion of the speech. A typical distortion of speech at poor signal-to-noise ratios is an unnatural spectral "coloring", rather than fast fluctuating distortions.
In informal tests, most listeners did not have the impression that the algorithm improves speech intelligibility, but clearly preferred the processed signal over the unprocessed one, as the background noise was significantly suppressed without annoying artifacts. Clean speech is almost perfectly preserved after processing. The performance and characteristics of the algorithm of course strongly depend on the training data, as only little knowledge of the differences between speech and noise is "hard-wired".

Acknowledgments

We thank Klaus Kasper and Herbert Reininger from Institut für Angewandte Physik, Universität Frankfurt/M. for supplying us with their LRNN implementation.

References

[1] Hermansky, H. and Morgan, N. (1994). RASTA processing of speech. IEEE Trans. Speech Audio Processing 2(4), pp. 578-589
[2] Tchorz, J. and Kollmeier, B. (1999). A Model of Auditory Perception as Front End for Automatic Speech Recognition. J. Acoust. Soc. Am. 106, pp. 2040-2050
[3] Boll, S. (1979). Suppression of acoustic noise in speech using spectral subtraction. IEEE Trans. Acoust., Speech, Signal Processing 27(2), pp. 113-120
[4] Ephraim, Y. and Malah, M. (1984). Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator. IEEE Trans. Acoust., Speech, Signal Processing 32(6), pp. 1109-1121
[5] Langner, G., Sams, M., Heil, P., and Schulze, H. (1997). Frequency and periodicity are represented in orthogonal maps in the human auditory cortex: evidence from magnetoencephalography. J. Comp. Physiol. A 181, pp. 665-676
[6] Kollmeier, B. and Koch, R. (1994). Speech enhancement based on physiological and psychoacoustical models of modulation perception and binaural interaction. J. Acoust. Soc. Am. 95, pp. 1593-1602
[7] Kasper, K., Reininger, H., Wolf, D., and Wüst, H. (1995). A speech recognizer with low complexity based on RNN. In: Neural Networks for Signal Processing V, Proc. of the IEEE workshop, Cambridge (MA), pp.
272-281
[8] Kasper, K., Reininger, H., and Wolf, D. (1997). Exploiting the potential of auditory preprocessing for robust speech recognition by locally recurrent neural networks. Proc. Int. Conf. Acoustics, Speech and Signal Processing (ICASSP) 2, pp. 1223-1227
[9] Kleinschmidt, M., Tchorz, J., and Kollmeier, B. (2000). Combining speech enhancement and auditory feature extraction for robust speech recognition. Speech Communication, Special issue on robust ASR (accepted)
[10] Ewert, S. and Dau, T. (1999). Frequency selectivity in amplitude-modulation processing. J. Acoust. Soc. Am. (submitted)
[11] Tchorz, J. and Kollmeier, B. (2000). Estimation of the signal-to-noise ratio with amplitude modulation spectrograms. Speech Communication (submitted)
[12] Dau, T., Püschel, D., and Kohlrausch, A. (1996). A quantitative model of the "effective" signal processing in the auditory system: II. Simulations and measurements. J. Acoust. Soc. Am. 99, pp. 3623-3631
[13] Dau, T., Kollmeier, B., and Kohlrausch, A. (1997). Modeling auditory processing of amplitude modulation: I. Modulation detection and masking with narrowband carriers. J. Acoust. Soc. Am. 102, pp. 2892-2905
[14] Recommendation ITU-T G.729 Annex B, 1996
Learning Joint Statistical Models for Audio-Visual Fusion and Segregation
John W. Fisher III* Massachusetts Institute of Technology Cambridge, MA 02139 fisher@ai.mit.edu
William T. Freeman Mitsubishi Electric Research Laboratory Cambridge, MA 02139 freeman@merl.com
Trevor Darrell Massachusetts Institute of Technology Cambridge, MA 02139 trevor@ai.mit.edu
Paul Viola Massachusetts Institute of Technology Cambridge, MA 02139 viola@ai.mit.edu
Abstract
People can understand complex auditory and visual information, often using one to disambiguate the other. Automated analysis, even at a low level, faces severe challenges, including the lack of accurate statistical models for the signals, and their high dimensionality and varied sampling rates. Previous approaches [6] assumed simple parametric models for the joint distribution which, while tractable, cannot capture the complex signal relationships. We learn the joint distribution of the visual and auditory signals using a non-parametric approach. First, we project the data into a maximally informative, low-dimensional subspace, suitable for density estimation. We then model the complicated stochastic relationships between the signals using a nonparametric density estimator. These learned densities allow processing across signal modalities. We demonstrate, on synthetic and real signals, localization in video of the face that is speaking in audio, and, conversely, audio enhancement of a particular speaker selected from the video.
1 Introduction
Multi-media signals pervade our environment. Humans face complex perception tasks in which ambiguous auditory and visual information must be combined in order to support accurate perception. By contrast, automated approaches for processing multi-media data sources lag far behind. Multi-media analysis (sometimes called sensor fusion) is often formulated in a maximum a posteriori (MAP) or maximum likelihood (ML) estimation framework.
Simplifying assumptions about the joint measurement statistics are often made in order to yield tractable analytic forms. For example, Hershey and Movellan [6] have shown that correlations between video data and audio can be used to highlight regions of the image which are the "cause" of the audio signal. While such pragmatic choices may lead to simple statistical measures, they do so at the cost of modeling capacity. Furthermore, these assumptions may not be appropriate for fusing modalities such as video and audio. The joint statistics for these and many other mixed-modal signals are not well understood and are not well modeled by simple densities such as multi-variate exponential distributions. For example, face motions and speech sounds are related in very complex ways. A critical question is whether, in the absence of an adequate parametric model for the joint measurement statistics, one can integrate measurements in a principled way without discounting statistical uncertainty. This suggests that a nonparametric statistical approach may be warranted. In the nonparametric statistical framework, principles such as MAP and ML are equivalent to the information theoretic concepts of mutual information and entropy. Consequently, we suggest an approach for learning maximally informative joint subspaces for multi-media signal analysis. The technique is a natural application of [8, 3, 5, 4], which formulates a learning approach by which the entropy, and by extension the mutual information, of a differentiable map may be optimized. By way of illustration we present results of audio/video analysis using the suggested approach on both simulated and real data. In the experiments we are able to show significant audio signal enhancement and video source localization.
*http://www.ai.mit.edu/people/fisher
2 Information Preserving Transformations
Entropy is a useful statistical measure as it captures uncertainty in a general way.
As the entropy of a density decreases so does the volume of the typical set [2]. Similarly, mutual information quantifies the information (uncertainty reduction) that two random variables convey about each other. The challenge of using such measures for learning is that they are integral functions of densities (densities which must be inferred from samples).
2.1 Maximally Informative Subspaces
In order to make the problem tractable we project high dimensional audio and video measurements to low dimensional subspaces. The parameters of the subspace are not chosen in an ad hoc fashion, but are learned by maximizing the mutual information between the derived features. Specifically, let $v_i \sim V \in \mathbb{R}^{N_v}$ and $a_i \sim A \in \mathbb{R}^{N_a}$ be video and audio measurements, respectively, taken at time $i$. Let $f_v : \mathbb{R}^{N_v} \mapsto \mathbb{R}^{M_v}$ and $f_a : \mathbb{R}^{N_a} \mapsto \mathbb{R}^{M_a}$ be mappings parameterized by the vectors $\alpha_v$ and $\alpha_a$, respectively. In our experiments $f_v$ and $f_a$ are single-layer perceptrons and $M_v = M_a = 1$. The method extends to any differentiable mapping and output dimensionality [3]. During adaptation the parameter vectors $\alpha_v$ and $\alpha_a$ (the perceptron weights) are chosen such that

$$\{\alpha_v, \alpha_a\} = \arg\max_{\alpha_v, \alpha_a} I\big(f_v(v; \alpha_v), f_a(a; \alpha_a)\big) \qquad (1)$$

This process is illustrated notionally in figure 1, in which video frames and sequences of periodogram coefficients are projected to scalar values. A clear advantage of learning a projection is that, rather than requiring pixels of the video frames or spectral coefficients to be inspected individually, the projection summarizes the entire set efficiently into two scalar values (one for video and one for audio). We have little reason to believe that joint audio/video measurements are accurately characterized by simple parametric models (e.g., exponential or uni-modal densities). Moreover, low dimensional projections which do not preserve this complex structure will not capture the true form of the relationship (i.e., random low dimensional projections of structured data are typically gaussian).
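The selection criterion, maximizing the mutual information between the two scalar features, can be illustrated with a simple plug-in histogram estimator. The paper itself uses a nonparametric kernel estimator rather than histograms, and the "latent", "video", and "audio" data below are invented toy values; the point is only that a projection aligned with the shared structure scores much higher MI than an uninformative one.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in histogram estimate of I(X;Y) in nats, for paired scalars."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of X
    py = pxy.sum(axis=0, keepdims=True)      # marginal of Y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy "video" and "audio" features sharing one latent component.
rng = np.random.default_rng(1)
latent = rng.standard_normal(5000)
video = np.stack([latent + 0.1 * rng.standard_normal(5000),   # dim 0: informative
                  rng.standard_normal(5000)], axis=1)          # dim 1: noise
audio = latent + 0.1 * rng.standard_normal(5000)

good = mutual_information(video[:, 0], audio)   # projection onto shared structure
bad = mutual_information(video[:, 1], audio)    # projection onto pure noise
```

In the paper the projections are not hand-picked as here but adapted by gradient ascent on a differentiable MI estimate, per equation (1).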
The low dimensional projections which are learned by maximizing mutual information reduce the complexity of the joint distribution, but still preserve the important and potentially complex relationships between audio and visual signals.
Figure 1: Projection to subspace. Video and audio sequences are projected through the learned mappings into the joint subspace.
This possibility motivates the methodology of [3, 8] in which the density in the joint subspace is modeled nonparametrically. This brings us to the natural question regarding the utility of the learned subspace. There are a variety of ways the subspace and the associated joint density might be used to, for example, manipulate one of the disparate signals based on another. For the particular applications we address in our experiments we shall see that it is the mapping parameters $\{\alpha_v, \alpha_a\}$ which will be most useful. We will illustrate the details as we go through the experiments.
3 Empirical Results
In order to demonstrate the efficacy of the approach we present a series of audio/video analysis experiments of increasing complexity. In these experiments, two sub-space mappings are learned, one from video and another from audio. In all cases, video data is sampled at 30 frames/second. We use both pixel-based representations (raw pixel data) and motion-based representations (i.e., optical flow [1]). Anandan's optical flow algorithm [1] is a coarse-to-fine method, implemented on a Laplacian pyramid, based on minimizing the sum of squared differences between frames. Confidence measures are derived from fitted quadratic surface principal curvatures. A smoothness constraint is also applied to the final velocity estimates. When raw video is used as an input to the subspace mapper, the pixels are collected into a single vector. The raw video images range in resolution from 240 by 180 (i.e., 43,200 dimensions) to 320 by 240 (i.e., 76,800 dimensions).
When optical flow is used as an input to the sub-space mapper, the vector-valued flow for each pixel is collected into a single vector, yielding an input vector with twice as many dimensions as pixels. Audio data is sampled at 11.025 kHz. Raw audio is transformed into periodogram coefficients. Periodograms are computed using Hamming windows of 5.4 ms duration sampled at 30 Hz (commensurate with the video rate). At each point in time there are 513 periodogram coefficients input to the sub-space mapper.
Figure 2: Synthetic image sequence examples (left). Mouth parameters are functionally related to one audio signal. Flow field horizontal component (center) and vertical component (right).
3.1 A Simple Synthetic Example
We begin with a simple synthetic example. The goal of the experiment is to use a video sequence to enhance an associated audio sequence. Figure 2 shows examples from a synthetically generated image sequence of faces (and the associated optical flow field). In the sequence the mouth is described by an ellipse. The parameters of the ellipse are functionally related to a recorded audio signal. Specifically, the area of the ellipse is proportional to the average power of the audio signal (computed over the same periodogram window) while the eccentricity is controlled by the entropy of the normalized periodogram. Consequently, observed changes in the image sequence are functionally related to the recorded audio signal. It is not necessary (right now) that the relationship be realistic, only that it exists. The associated audio signal is mixed with an interfering, or noise, signal. Their spectra, shown in figure 3 (left), are clearly overlapped. If the power spectra of the associated and interfering signals were known, then the optimal filter for recovering the associated audio sequence would be the Wiener filter.
Its spectrum is described by

$$H(f) = \frac{P_a(f)}{P_a(f) + P_n(f)} \qquad (2)$$

where $P_a(f)$ is the power spectrum of the desired signal and $P_n(f)$ is the power spectrum of the interfering signal. In general this information is unknown, but for our experiments it is useful as a benchmark for comparison purposes, as it represents an upper bound on performance. That is, in a second-order sense, all filters (including ours) will underperform the Wiener filter. Furthermore, suppose $y = s_a + n$ where $s_a$ is the signal of interest and $n$ is an independent interference signal. It can be shown that

$$\left(\frac{P_{s_a}}{P_n}\right) = \frac{\rho^2}{1 - \rho^2} \qquad (3)$$

where $\rho$ is the correlation coefficient between $s_a$ and the corrupted version $y$, and $(P_{s_a}/P_n)$ is the signal-to-noise power ratio (SNR). Consequently, given a reference signal and some signal plus interferer, we can use the relationships above to gauge signal enhancement. The question we address is: in the absence of knowing the separate power spectra, which are necessary to implement the Wiener filter, how do we compare using the associated video data? It is not immediately obvious how one might achieve signal enhancement by learning a joint subspace in the manner described. Our intuition is as follows. For this simple case it is only the associated audio signal which bears any relationship to the video sequence. Furthermore, the coefficients of the audio projection, $\alpha_a$, correspond to spectral coefficients. Our reasoning is that large-magnitude coefficients correspond to those spectral components which have more signal component than those with small magnitude. Using this reasoning we can construct a filter whose coefficients are proportional to our projection $\alpha_a$. Specifically, we use the following to design our filter:

$$H_{MI}(f) = \beta\,\frac{|\alpha_a(f)| - \min(|\alpha_a(f)|)}{\max(|\alpha_a(f)|) - \min(|\alpha_a(f)|)} + \frac{1-\beta}{2}, \qquad 0 < \beta < 1 \qquad (4)$$

Figure 3: Spectra of audio signals.
Solid line indicates the desired audio component, while the dashed line indicates the interference.
In equation (4), $\alpha_a(f)$ are the audio projection coefficients associated with spectral coefficient $f$. For our experiments, $\beta = 0.90$; consequently $0.05 \le H_{MI}(f) \le 0.95$. While somewhat ad hoc, the filter is consistent with our reasoning above and, as we shall see, yields good results. Furthermore, because the signal and interferer are known (in our experimental setup), we can compare our results to the unachievable, yet optimal, Wiener filter for this case. In this case the SNR was 0 dB; furthermore, as the two signals have significant spectral overlap, signal recovery is challenging. The optimal Wiener filter achieves a signal processing gain of 2.6 dB, while the filter constructed as described achieves 2.0 dB (when using images directly) and 2.1 dB when using optical flow.
3.2 Video Attribution of Single Audio Source
The previous example demonstrated that the audio projection coefficients could be used to reduce an interfering signal. We move now to a different experiment using real data. Figure 4(a) shows a video frame from the sequence used in the next experiment. In the scene there is a person speaking in the foreground, a person moving in the background, and a monitor which is flickering. There is a single audio source (of the speaker) but several interfering motion fields in the video sequence. Figure 4(b) shows the pixel-wise standard deviations of the video sequence, while figure 4(c) shows the pixel-wise flow field energy. These images show that there are many sources of change in the image. Note that the most intense changes in the image sequence are associated with the monitor and not the speaker. Our goal with this experiment is to show that via the method described we can properly attribute the region of the video image which is associated with the audio sequence. The intuition is similar to the previous experiment.
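The two filters being compared, the Wiener benchmark of equation (2) and the projection-based filter of equation (4), are simple enough to sketch directly. The spectra and coefficient values below are made up for illustration, and the offset term in `mi_filter` follows the normalized form of equation (4) as given above, with its stated bounds for β = 0.9.

```python
import numpy as np

def wiener_gain(Pa, Pn):
    """Eq. (2): H(f) = Pa(f) / (Pa(f) + Pn(f))."""
    return Pa / (Pa + Pn)

def mi_filter(alpha_a, beta=0.9):
    """Eq. (4): min-max normalize |alpha_a(f)|, scale by beta, and add
    (1 - beta)/2 so the gain stays strictly between 0 and 1."""
    a = np.abs(alpha_a)
    x = (a - a.min()) / (a.max() - a.min())
    return beta * x + (1 - beta) / 2

# Illustrative spectra: a bin dominated by signal, an even bin, and a
# bin dominated by interference.
Pa = np.array([4.0, 1.0, 0.25])
Pn = np.array([1.0, 1.0, 1.0])
H = wiener_gain(Pa, Pn)
Hmi = mi_filter(np.array([0.0, 0.5, 1.0]))
```

The Wiener gain needs the (here unavailable) true power spectra; the MI filter needs only the learned audio projection coefficients.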
We expect that large image projection coefficients $\alpha_v$ correspond to those pixels which are related to the audio signal. Figure 4(d) shows the image $\alpha_v$ when images are fed directly into the algorithm, while figure 4(e) shows the same image when flow fields are the input. Clearly both cases have detected regions associated with the speaker, with the substantive difference being that the use of flow fields resulted in a smoother attribution.
3.3 User-assisted Audio Enhancement
We now repeat the initial synthetic experiment of 3.1 using real data. In this case there are two speakers recorded with a single microphone (the speakers were recorded with stereo microphones so as to obtain a reference, but the experiments used a single mixed audio source). Figure 5(a) shows an example frame from the video sequence. We now demonstrate the ability to enhance the audio signal in a user-assisted fashion.
Figure 4: Video attribution: (a) example image, (b) pixel standard deviations, (c) flow vector energy, (d) image of $\alpha_v$ (pixel features), (e) flow field features.
Figure 5: User-assisted audio enhancement: (a) example image, with user-chosen regions, (b) image of $\alpha_v$ for region 1, (c) image of $\alpha_v$ for region 2.
By selecting data from one box or the other in figure 5(a) we can enhance the voice of the speaker on the left or right. As the original data was collected with stereo microphones, we can again compare our result to an approximation to the Wiener filter (neglecting cross-channel leakage). In this case, due to the fact that the speakers are male and female, the signals have better spectral separation. Consequently the Wiener filter achieves a better signal processing gain. For the male speaker the Wiener filter improves the SNR by 10.43 dB, while for the female speaker the improvement is 10.5 dB.
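Equation (3) is what turns a measured correlation with the stereo reference into an SNR, which is how gains like those quoted here can be computed without access to the separate power spectra. The correlation values in this sketch are illustrative only, not the ones measured in the experiments.

```python
import math

def snr_from_rho(rho):
    """Eq. (3): signal-to-noise power ratio implied by the correlation
    coefficient between the clean reference and the observed signal."""
    return rho ** 2 / (1.0 - rho ** 2)

def gain_db(rho_before, rho_after):
    """SNR improvement in dB implied by correlations measured before
    and after filtering."""
    return 10.0 * math.log10(snr_from_rho(rho_after) / snr_from_rho(rho_before))

# Example: correlation with the reference rises from 0.71 (~0 dB SNR,
# since rho = 1/sqrt(2) corresponds to unit SNR) to 0.90 after filtering.
g = gain_db(0.71, 0.90)
```

Note that a correlation of exactly 1/√2 corresponds to an SNR of 1 (0 dB), which matches the 0 dB mixtures used in the synthetic experiment.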
Using our technique we are able to achieve an 8.9 dB SNR gain (pixel based) and a 9.2 dB SNR gain (optical flow based) for the male speaker, while for the female speaker we achieve 5.7 and 5.6 dB, respectively. It is not clear why performance is not as good for the female speaker, but figures 5(b) and (c) are provided by way of partial explanation. Having recovered the audio in the user-assisted fashion described, we used the recovered audio signal for video attribution (pixel-based) of the entire scene. Figures 5(b) and (c) are the images of the resulting $\alpha_v$ when using the male (b) and female (c) recovered voice signals. The attribution of the male speaker in (b) appears to be clearer than that of (c). This may be an indication that the video cues were not as detectable for the female speaker as they were for the male in this experiment. In any event, these results are consistent with the enhancement results described above.
4 Applications
There are several practical applications for the techniques described in this paper. One key area is speech recognition. Recent commercial advances in speech recognition rely on careful placement of the microphone so that background sounds are minimized. Results in more natural environments, where the microphone is some distance from the speaker and there is significant background noise, are disappointing. Our approach may prove useful for teleconferencing, where audio and video of multiple speakers is recorded simultaneously. Other applications include broadcast television in situations where careful microphone placement is not possible, or post-hoc processing to enhance the audio channel might prove valuable. For example, if one speaker's microphone at a news conference malfunctions, the voice of that speaker might be enhanced with the aid of video information.
5 Conclusions
One key contribution of this paper is to extend the notion of multi-media fusion to complex domains in which the statistical relationships between audio and video are complex and non-gaussian. This claim is supported in part by the results of Slaney and Covell, in which canonical correlations failed to detect audio/video synchrony when a spectral representation was used for the audio signal [7]. Previous approaches have attempted to model these relationships using simple models such as measuring the short-term correlation between pixel values and the sound signal [6]. The power of the non-parametric mutual information approach allows our technique to handle complex non-linear relationships between audio and video signals. One demonstration of this modeling flexibility is the insensitivity to the form of the input signals. Experiments were performed using raw pixel intensities as well as optical flow (which is a complex non-linear function of pixel values across time), yielding similar results. Another key contribution is to establish an important application for this approach, video-enhanced audio segmentation. Initial experiments have shown that information from the video signal can be used to reduce the noise in a simultaneously recorded audio signal. Noise is reduced without any a priori information about the form of the audio signal or noise. Surprisingly, in our limited experiments, the noise reduction approaches what is possible using a priori knowledge of the audio signal (using Wiener filtering).
References
[1] P. Anandan. A computational framework and an algorithm for the measurement of visual motion. Int. J. Comp. Vision, 2:283-310, 1989.
[2] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, Inc., New York, 1991.
[3] J. Fisher and J. Principe. Unsupervised learning for nonlinear synthetic discriminant functions. In D. Casasent and T. Chao, editors, Proc.
SPIE, Optical Pattern Recognition VII, volume 2752, pages 2-13, 1996.
[4] J. W. Fisher III, A. T. Ihler, and P. A. Viola. Learning informative statistics: A nonparametric approach. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Proceedings of 1999 Conference on Advances in Neural Information Processing Systems 12, 1999.
[5] J. W. Fisher III and J. C. Principe. A methodology for information theoretic feature extraction. In A. Stuberud, editor, Proceedings of the IEEE International Joint Conference on Neural Networks, 1998.
[6] J. Hershey and J. Movellan. Using audio-visual synchrony to locate sounds. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Proceedings of 1999 Conference on Advances in Neural Information Processing Systems 12, 1999.
[7] M. Slaney and M. Covell. Facesync: A linear operator for measuring synchronization of video facial images and audio tracks. In This volume, 2001.
[8] P. Viola, N. Schraudolph, and T. Sejnowski. Empirical entropy manipulation for real-world problems. In Proceedings of 1996 Conference on Advances in Neural Information Processing Systems 8, pages 851-857, 1996.
Dendritic compartmentalization could underlie competition and attentional biasing of simultaneous visual stimuli
Kevin A. Archie Neuroscience Program University of Southern California Los Angeles, CA 90089-2520
Bartlett W. Mel Department of Biomedical Engineering University of Southern California Los Angeles, CA 90089-1451
Abstract
Neurons in area V4 have relatively large receptive fields (RFs), so multiple visual features are simultaneously "seen" by these cells. Recordings from single V4 neurons suggest that simultaneously presented stimuli compete to set the output firing rate, and that attention acts to isolate individual features by biasing the competition in favor of the attended object. We propose that both stimulus competition and attentional biasing arise from the spatial segregation of afferent synapses onto different regions of the excitable dendritic tree of V4 neurons. The pattern of feedforward, stimulus-driven inputs follows from a Hebbian rule: excitatory afferents with similar RFs tend to group together on the dendritic tree, avoiding randomly located inhibitory inputs with similar RFs. The same principle guides the formation of inputs that mediate attentional modulation. Using both biophysically detailed compartmental models and simplified models of computation in single neurons, we demonstrate that such an architecture could account for the response properties and attentional modulation of V4 neurons. Our results suggest an important role for nonlinear dendritic conductances in extrastriate cortical processing.
1 Introduction
Neurons in higher regions of visual cortex have relatively large receptive fields (RFs): for example, neurons representing the central visual field in macaque area V4 have RFs up to 5° across (Desimone & Schein, 1987). Such large RFs often contain multiple potentially significant features in a single image, leading to the question: How can these neurons extract information about individual objects?
Moran and Desimone (1985) showed that when multiple stimuli are present within the RF of a V4 neuron, attention effectively reduces the RF extent of the cell, so that only the attended feature contributes to its output. Desimone (1992) noted that one way this modulation could be performed is to assign input from each RF subregion to a single dendritic branch of the V4 neuron; modulatory inhibition could then "turn off" branches, so that subregions of the RF could be independently gated. Recent experiments have revealed a more subtle picture regarding both the interactions between simultaneously presented stimuli and the effects of attentional modulation. Recordings from individual V4 neurons have shown that simultaneously presented stimuli compete to set the output firing rate (Luck, Chelazzi, Hillyard, & Desimone, 1997; Reynolds, Chelazzi, & Desimone, 1999). For example, consider a cell for which stimulus S, presented by itself, produces a strong response consisting of s spikes, and stimulus W produces a weak response of w spikes. Presenting the two stimuli S and W together generally produces an output less than s but more than w. Note that the "weak" stimulus W is excitatory for the cell when presented alone, since it increases the response from 0 to w, but effectively inhibitory when presented together with the "strong" stimulus S. Attention serves to bias the competition, so that attending to S would increase the output of the V4 cell (moving it closer to s), while attending to W would decrease the output (moving it closer to w). To describe their results, Reynolds et al. (1999) proposed a mathematical model in which individual stimuli both excite and inhibit the V4 neuron. The sum of excitatory and inhibitory input is acted on by divisive normalization, proportional to the total strength of input, to produce a competitive interaction between simultaneous stimuli.
Attention is then implemented as a multiplicative gain on both excitatory and inhibitory input arising from the attended stimulus. In previous work using biophysically detailed compartmental models of neurons with active dendrites, we observed that increasing the stimulus contrast produced a multiplicative scaling of the tuning curve of a complex cell (Archie & Mel, 2000, Fig. 6g), suggesting an implicit normalization. In the present work, we test the following hypotheses: (1) segregation of input onto different branches of an excitable dendritic tree could produce competitive interactions between simultaneously presented stimuli, and (2) modulatory synapses on active dendrites could be a general mechanism for multiplicative modulation of inputs.
2 Methods
We used both biophysically detailed compartmental models and a simplified model of a single cortical neuron to test whether competition and attentional biasing could arise from interactions between excitatory and inhibitory inputs in a nonlinear dendritic tree. An overview of the input segregation common to both classes of model is shown in Fig. 1.
Biophysically detailed compartmental model. The detailed model included 4 layers of processing: (1) an LGN cell layer with center-surround RFs; (2) a virtual layer of simple-cell-like subunits which were drawn from elongated rows of ON- and OFF-center LGN cells (virtual in that the subunit computations were actually carried out in the dendrites of the overlying complex cells, following Mel, Ruderman, and Archie (1998) and Archie and Mel (2000)); (3) an 8 x 8 grid of complex cells, each of which contained 4 subunits with progressively shifted positions/phases; and (4) a single V4 cell in the top layer, which received input from the complex cell layer. Layers 3 and 4 are shown in Fig. 2. The LGN was modeled as 4 arrays (ON- and OFF-center, left and right eye) of difference-of-Gaussian spatial filters, as described in Archie and Mel (2000).
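A difference-of-Gaussian filter of the kind used for the LGN layer can be sketched as follows. The kernel size and the center/surround widths here are hypothetical, since the paper defers those details to Archie and Mel (2000).

```python
import numpy as np

def dog_kernel(size=15, sigma_c=1.0, sigma_s=3.0, on_center=True):
    """Difference-of-Gaussians spatial filter: a narrow center Gaussian
    minus a broader surround Gaussian, each normalized to unit volume,
    so the kernel integrates to (approximately) zero and gives no
    response to uniform illumination."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    center = np.exp(-r2 / (2 * sigma_c ** 2))
    surround = np.exp(-r2 / (2 * sigma_s ** 2))
    k = center / center.sum() - surround / surround.sum()
    # ON-center: bright spot on dark surround excites; OFF-center is the sign flip.
    return k if on_center else -k

on = dog_kernel()
off = dog_kernel(on_center=False)
```

Four such arrays (ON/OFF, left/right eye) tiled over the image would constitute the model's first layer.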
Responses of the cortical cells were calculated using the NEURON simulation environment (Hines & Carnevale, 1997). Complex cells contained 4 basal branches, each 1 μm in diameter and 150 μm long; one apical branch 5 μm in diameter and 250 μm long; a spherical soma 20 μm in diameter; and an axon 0.5 μm in diameter and 1000 μm long with an initial segment 1 μm in diameter and 20 μm long. Hodgkin-Huxley-style Na+ and K+ conductances were present in the membrane of the entire cell, with 10-fold higher density in the axon (gNa = 0.120 S/cm², gK = 0.100 S/cm²) than in the soma and dendrites (gNa = 0.012 S/cm², gK = 0.010 S/cm²). The V4 cell was modeled with the same parameters as the complex cells, but with 8 basal branches instead of 4.
Figure 1: Segregation of excitatory and inhibitory inputs. Two sources of stimulus-driven input are shown, S1 and S2, each corresponding to an independently attendable subregion of the RF of the V4 cell. Note that each source of stimulus-driven input makes both excitatory projections to a specific branch on the V4 cell, and inhibitory projections (through an interneuron) to other branches. Similarly, the modulatory inputs A1 and A2 each direct attention to a particular branch; for example, A1 adds excitatory modulation to the branch corresponding to the S1 RF subregion and (indirectly) inhibitory modulation to other branches.
Figure 2: Design of the biophysically detailed model. Complex cells were arranged in a grid with overlapping RFs and similar orientation preferences. Each vertical stripe of cells formed an RF subregion to which attention could be directed. Each complex cell within a subregion formed one excitatory and one (indirect, via V4 inhibitory interneurons) inhibitory connection onto the V4 cell. Synapse locations were assigned at random within a given V4 branch.
All excitatory connections for a given subregion were targeted to a single branch, while the corresponding inhibitory synapses were distributed across all of the other branches of the cell. Attentional modulatory and background synapses, both described in the text, are not shown.
Excitatory synapses were modeled as having both voltage-dependent (NMDA) and voltage-independent (AMPA/kainate) components, while inhibitory synapses were fast and voltage-independent (GABA-A). All synapses were modeled using the kinetic scheme of Destexhe, Mainen, and Sejnowski (1994), with peak conductance values scaled inversely by the local input resistance to reduce the dependence of local EPSP size on synapse location. The complex cells received input from the LGN, using the spatial arrangement of excitatory and inhibitory inputs described in Archie and Mel (2000), with inhibitory inputs distributed throughout the input branches (rather than, e.g., being restricted to the proximal part of the branch). We have previously shown that this arrangement of inputs produces phase- and contrast-invariant tuning to stimulus orientation, similar to that seen in cortical complex cells. All 64 complex cells had the same preferred orientation, which we will for convenience call vertical. For each stimulus image, each complex cell was simulated for 1 second and the resulting firing rate was used to set the activity level of one excitatory and one inhibitory synapse onto the V4 cell. The inhibition was assumed to emanate from an implicit inhibitory interneuron in V4. The stimulus-driven inputs to the V4 neuron were modeled as Poisson trains whose mean rate was set to the corresponding complex cell firing rate for excitatory synapses, and 1.3 times the corresponding complex cell firing rate for inhibitory synapses.
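The Poisson input trains can be sketched directly: a homogeneous Poisson process is just a sequence of exponentially distributed inter-event intervals. The 40 Hz rate below is an arbitrary stand-in for a complex cell's firing rate, with the inhibitory train at 1.3 times that rate as in the model.

```python
import random

def poisson_train(rate_hz, duration_s, rng):
    """Homogeneous Poisson spike train: exponential inter-event
    intervals with mean 1/rate, truncated to the simulation window."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate_hz)
        if t >= duration_s:
            return times
        times.append(t)

rng = random.Random(42)
exc = poisson_train(40.0, 1.0, rng)          # excitatory input at 40 Hz
inh = poisson_train(1.3 * 40.0, 1.0, rng)    # paired inhibitory input at 52 Hz
```

One such pair of trains per complex cell, driven by the simulated firing rate, would constitute the stimulus-driven input to the V4 cell.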
The inputs were arranged on the V4 cell so that all of the complex cells with RFs distributed along a vertical stripe of the V4 RF (i.e., aligned with the preferred orientation of the complex cells) formed one subregion and made their excitatory projections to a single branch (Fig. 2). The inhibitory synapse from each complex cell was placed on a different branch than the corresponding excitatory synapse, with the specific location chosen at random. Attention was implemented by placing two modulatory synapses on each branch, one excitatory and one inhibitory. In the absence of attention, all modulatory synapses had a mean event rate of 0.1 Hz. Attention was directed to a particular subregion by increasing the firing rate of the excitatory modulation on the corresponding branch to 100 Hz, and increasing the inhibitory modulation on all other branches to 67 Hz. Each branch of the V4 cell also received a single excitatory synapse with mean firing rate 25 Hz, representing background (non-stimulus-driven) input from the cortical network. These synapses provided most of the input needed for the cell to fire action potentials, while the stimulus-driven inputs modulated the firing rate. The rationale for the spatial arrangement of synapses was that coaligned complex cells with overlapping RFs would have correlated responses over the ensemble of images seen during early postnatal development, and would thus tend to congregate within the same dendritic subunits according to Hebbian developmental principles. Similarly, excitatory synapses would tend to avoid inhibitory synapses driven by the same stimuli, since if the two are near each other on the dendrite, the efficacy of the excitation is systematically reduced by the corresponding inhibition.
Sum of squared filters model.
We have previously proposed that an individual cortical pyramidal neuron may carry out high-order computations that roughly fit the form of an energy model, i.e., a sum of half-squared linear filters, with electrotonically isolated regions of the dendritic tree performing the quadratic subunit computations. Only excitatory inputs were previously considered, leaving open the question of how inhibition might fit in such a model. An obvious implementation of inhibition is to simply subtract the mean firing rates of inhibitory inputs, just as excitatory inputs are added. The sum-of-squares model thus has the form f(x) = Σ_j ((Σ_{i∈B_j} w_i x_i)^+)², where y^+ denotes y if y ≥ 0 and 0 otherwise; B_j is the set of inputs i that project to branch j; and w_i is +1 if input i is excitatory, -1 if inhibitory. We considered both a "paper-and-pencil" model, in which we hand-selected input values for each stimulus with an eye towards ease of interpretation, and also a model in which the tabulated complex cell output rates (from layer 3 of the detailed model) were used as input.

Figure 3: Results from the biophysically detailed model. In the top row, strong and weak visual stimuli are shown at left, and combined stimuli in three attentional conditions are indicated at right. Bar graph shows response of simulated V4 cell under each of these 5 conditions, averaged over 192 runs. Combined stimulus in the absence of attention yields output between the responses to either stimulus alone. Attention to either the strong or the weak stimulus pushes the cell's response toward the individual response for that stimulus.

3 Results

Detailed model. A strong stimulus (a vertical bar) and a weak stimulus (a bar of the same length, turned 60° from vertical) were selected.
Figure 3 shows the stimulus images and simulated V4 cell response for each stimulus alone and for the combined stimulus in various attentional states. In the absence of attention within the receptive field (attention away), the response of the cell to the combined image lay between the responses to the strong image alone or the weak image alone. This intermediate response is consistent with the responses of many V4 cells under similar conditions, and is the result of the competition between excitatory and inhibitory inputs: because of the spatial segregation, inhibitory synapses driven by one stimulus selectively undermine the effectiveness of excitation due to the other. This competition between stimuli was also biased by attentional modulation (Fig. 3). Attending to the strong stimulus elevated the response to the combined image compared to the condition where attention was directed away, thus bringing the response closer to the response to the strong stimulus alone. Similarly, attention to the weak stimulus lowered the response to the combined stimulus. Sum of squared filters. We used a 4-subunit sum-of-squares model for illustrative purposes. A stimulus in this model is a 4-dimensional vector, with each component representing the total input (excitatory positive, inhibitory negative) to a single subunit. Most stimuli tested had equal excitatory and inhibitory influence, so that the sum of the components was zero, and had excitatory influence confined to one subunit (i.e., the features were small compared to the entire V4 RF). One example set of stimulus vectors follows, with an overbar (e.g., s̄) indicating that the stimulus is attended (implemented by adding a modulatory value of +1 to the attended branch, and -1 to all others):

s = [5, -2, -1, -2] → 25 + 0 + 0 + 0 = 25
w = [-1, -1, -1, 3] → 0 + 0 + 0 + 9 = 9
s + w = [4, -3, -2, 1] → 16 + 0 + 0 + 1 = 17
s̄ + w = [5, -4, -3, 0] → 25 + 0 + 0 + 0 = 25
s + w̄ = [3, -4, -3, 2] → 9 + 0 + 0 + 4 = 13

This simple model gave qualitatively correct results.
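The worked example above can be reproduced in a few lines of Python (a sketch of the paper-and-pencil model only; the function names are ours, not from the paper):

```python
def response(x):
    # sum-of-squares model: f(x) = sum_j ((x_j)^+)^2, where each component of x
    # is the total input to one subunit (excitatory positive, inhibitory negative)
    return sum(max(xj, 0.0) ** 2 for xj in x)

def attend(x, branch):
    # attention adds a modulatory +1 to the attended branch and -1 to all others
    return [xj + (1.0 if j == branch else -1.0) for j, xj in enumerate(x)]

s = [5, -2, -1, -2]                 # strong stimulus (drives subunit 0)
w = [-1, -1, -1, 3]                 # weak stimulus (drives subunit 3)
sw = [a + b for a, b in zip(s, w)]  # combined stimulus

print(response(s))              # 25.0
print(response(w))              # 9.0
print(response(sw))             # 17.0
print(response(attend(sw, 0)))  # 25.0 (attention to strong)
print(response(attend(sw, 3)))  # 13.0 (attention to weak)
```

The combined response (17) lies between the individual responses, and attention pushes it toward the attended stimulus's response, matching the arithmetic in the text.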
Some stimulus combinations we considered gave results inconsistent with the biased-competition model, e.g., the above situation with w = [-1, -1, 3, -1]. The most common type of failure was that attending to the strong stimulus in the combined image led to a larger response than that produced by the strong stimulus alone. We also saw this happen for certain parameter sets in the biophysically detailed model, as described below; a similar result is seen in some of the data of Reynolds et al. (1999). Nonetheless, this simple model gives qualitatively correct results for a surprisingly large set of input combinations. When the complex-cell output from the detailed model was used as input to a sum-of-squared-filters model with 8 subunits, results qualitatively similar to the detailed simulation results were obtained. For the stimuli shown in Fig. 3, the following results were seen (all responses in arbitrary units): with no attention, strong: 109, weak: 2.57, combined: 84.3; combined, attention to strong: 106; to weak: 80. This simplified model, like the biophysically detailed model, is rather sensitive to the values used for the modulatory inputs: with slightly different values, for example, attending to the strong stimulus makes the response to the combined image higher than the response to the strong stimulus alone. In continuing studies, we are working to determine whether this parameter sensitivity is a general feature of such models.

4 Discussion

A variety of previous models for attention have considered how the RF of cortical neurons can be dynamically modulated (Olshausen, Anderson, & Van Essen, 1993; Niebur & Koch, 1994; Salinas & Abbott, 1997; Lee, Itti, Koch, & Braun, 1999).
Our model, an extension of the proposal of Desimone (1992), specifies a biophysical mechanism for the multiplicative gain used in previous models (Salinas & Abbott, 1997; Reynolds et al., 1999), and suggests that both the stimulus competition and attentional effects seen in area V4 could be implemented by a straightforward mapping of stimulus-driven and modulatory afferents, both excitatory and inhibitory, onto the dendrites of V4 neurons. The results from the sum-of-squared-filters models demonstrate that even a crude model of computation in single neurons can account for the complicated response properties of V4 neurons, given several quasi-independent nonlinear dendritic subunits and a suitable spatial arrangement of synapses. In continuing work, we are exploring the large space of parameters (e.g., density of various ionic conductances, ratio of inhibition to excitation, strength of modulatory inputs) to determine which aspects of the response properties are fundamental to the model, and which are accidents of the particular parameters chosen. This work should help to identify strong vs. weak experimental predictions regarding the contributions of dendritic subunit computation to the response properties of extrastriate neurons.

Acknowledgements

Supported by NSF.

References

Archie, K. A., & Mel, B. W. (2000). A model for intradendritic computation of binocular disparity. Nature Neurosci., 3(1), 54-63.
Connor, C. E., Preddie, D. C., Gallant, J. L., & Van Essen, D. C. (1997). Spatial attention effects in macaque area V4. J. Neurosci., 17(9), 3201-3214.
Desimone, R., & Schein, S. J. (1987). Visual properties of neurons in area V4 of the macaque: sensitivity to stimulus form. J. Neurophysiol., 57(3), 835-868.
Desimone, R. (1992). Neural circuits for visual attention in the primate brain. In Carpenter, G. A., & Grossberg, S. (Eds.), Neural Networks for Vision and Image Processing, chap. 12, pp. 343-364. MIT Press, Cambridge, MA.
Desimone, R. (1998).
Visual attention mediated by biased competition in extrastriate visual cortex. Phil. Trans. R. Soc. Lond. B, 353, 1245-1255.
Destexhe, A., Mainen, Z., & Sejnowski, T. J. (1994). Synthesis of models for excitable membranes, synaptic transmission and neuromodulation using a common kinetic formalism. J. Comput. Neurosci., 1, 195-230.
Destexhe, A., & Pare, D. (1999). Impact of network activity on the integrative properties of neocortical pyramidal neurons in vivo. J. Neurophysiol., 81, 1531-1547.
Hines, M. L., & Carnevale, N. T. (1997). The NEURON simulation environment. Neural Comput., 9, 1179-1209.
Lee, D. K., Itti, L., Koch, C., & Braun, J. (1999). Attention activates winner-take-all competition among visual filters. Nature Neurosci., 2(4), 375-381.
Luck, S. J., Chelazzi, L., Hillyard, S. A., & Desimone, R. (1997). Neural mechanisms of spatial selective attention in areas V1, V2, and V4 of macaque visual cortex. J. Neurophysiol., 77, 24-42.
McAdams, C. J., & Maunsell, J. H. R. (1999). Effects of attention on orientation-tuning functions of single neurons in macaque cortical area V4. J. Neurosci., 19(1), 431-441.
Mel, B. W. (1999). Why have dendrites? A computational perspective. In Stuart, G., Spruston, N., & Hausser, M. (Eds.), Dendrites, chap. 11, pp. 271-289. Oxford University Press.
Mel, B. W., Ruderman, D. L., & Archie, K. A. (1998). Translation-invariant orientation tuning in visual "complex" cells could derive from intradendritic computations. J. Neurosci., 18(11), 4325-4334.
Moran, J., & Desimone, R. (1985). Selective attention gates visual processing in the extrastriate cortex. Science, 229, 782-784.
Motter, B. C. (1993). Focal attention produces spatially selective processing in visual cortical areas V1, V2, and V4 in the presence of competing stimuli. J. Neurophysiol., 70(3), 909-919.
Niebur, E., & Koch, C. (1994). A model for the neuronal implementation of selective visual attention based on temporal correlation among neurons. J. Comput. Neurosci., 1, 141-158.
Olshausen, B. A., Anderson, C. H., & Van Essen, D. C. (1993). A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. J. Neurosci., 13(11), 4700-4719.
Reynolds, J. H., Chelazzi, L., & Desimone, R. (1999). Competitive mechanisms subserve attention in macaque areas V2 and V4. J. Neurosci., 19(5), 1736-1753.
Salinas, E., & Abbott, L. F. (1997). Invariant visual responses from attentional gain fields. J. Neurophysiol., 77, 3267-3272.
Smart Vision Chip Fabricated Using Three Dimensional Integration Technology H.Kurino, M.Nakagawa, K.W.Lee, T.Nakamura, Y.Yamada, K.T.Park and M.Koyanagi Dept. of Machine Intelligence and Systems Engineering, Tohoku University 01, Aza-Aramaki, Aoba-ku, Sendai 980-8579, Japan kurino@sd.mech.tohoku.ac.jp Abstract The smart vision chip has a large potential for application in general purpose high speed image processing systems. In order to fabricate smart vision chips including photo detectors compactly, we have proposed the application of three dimensional LSI technology for smart vision chips. Three dimensional technology has great potential to realize new neuromorphic systems inspired by not only the biological function but also the biological structure. In this paper, we describe our three dimensional LSI technology for neuromorphic circuits and the design of smart vision chips. 1 Introduction Recently, the demand for very fast image processing systems with real time operation capability has significantly increased. Conventional image processing systems, based on the system level integration of a camera and a digital processor, do not have the potential for application in general purpose consumer electronic products. This is simply due to the cost, size and complexity of these systems. Therefore the smart vision chip will be an inevitable component of future intelligent systems. In smart vision chips, 2D images are simultaneously processed in parallel. Therefore very high speed image processing can be realized. Each pixel includes a photo-detector. In order to receive as much light signal as possible, the photo-detector should occupy a large proportion of the pixel area. However the successive processing circuits must become larger in each pixel to realize high level image processing. It is very difficult to achieve smart vision chips by using conventional two dimensional (2D) LSI technology because such smart vision chips have low fill-factor and low resolution.
This problem can be overcome if three dimensional (3D) integration technology can be employed for the smart vision chip. In this paper, we propose a smart vision chip fabricated by three dimensional integration technology. We also discuss the key technologies for realizing three dimensional integration and preliminary test results of three dimensional image sensor chips. 2 Three Dimensional Integrated Vision Chips Figure 1 shows the cross-sectional structure of the three dimensional integrated vision chip. Several circuit layers with different functions are stacked into one chip in 3D LSI. For example, the first layer consists of a photo detector array acting like photo receptive cells in the retina, the second layer is horizontal/bipolar cell circuits, the third layer is ganglion cell circuits and so on. Each circuit layer is stacked and electrically connected vertically using buried interconnections and micro bumps. By using three dimensional integration technology, a photo detector can be formed with a high fill-factor and high resolution, because several successive processing circuits with large areas are formed on the lower layers underneath the photo detector layer. Every photo detector is directly connected with successive processing circuits (i.e., horizontal and bipolar cell circuits) in parallel via the vertical interconnections. The signals in every pixel are simultaneously transferred in the vertical direction and processed in parallel in each layer. Therefore high performance real time vision chips can be realized. We consider 3D LSI suitable for realizing neuromorphic LSI, because the three dimensional structure is quite similar to the structure of the retina or cortex. Three dimensional technology will realize new neuromorphic systems inspired by not only the biological function but also the biological structure.
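The retina-like layer stack described above can be caricatured in software as a per-pixel pipeline. This is a sketch only: the layer functions below (identity photodetection, a crude horizontal center-surround, a thresholded ganglion stage) are illustrative assumptions, not the chip's actual circuits:

```python
def photoreceptor_layer(image):
    # layer 1: photo detection, modeled here as the identity on light intensities
    return [row[:] for row in image]

def horizontal_bipolar_layer(image):
    # layer 2: crude center-surround (pixel minus mean of its horizontal neighbors)
    out = []
    for row in image:
        new = []
        for j, px in enumerate(row):
            nbrs = [row[k] for k in (j - 1, j + 1) if 0 <= k < len(row)]
            new.append(px - sum(nbrs) / len(nbrs))
        out.append(new)
    return out

def ganglion_layer(image, threshold=0.5):
    # layer 3: half-wave rectification above a firing threshold
    return [[max(px - threshold, 0.0) for px in row] for row in image]

# a tiny "frame": a bright vertical stripe on a dark background
frame = [[0.0, 1.0, 0.0],
         [0.0, 1.0, 0.0]]
out = ganglion_layer(horizontal_bipolar_layer(photoreceptor_layer(frame)))
```

In the chip, each stage would operate on all pixels simultaneously through the vertical interconnections rather than sequentially as here.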
Fig.1 Cross-sectional structure of the three dimensional vision chip (glass wafer; photoreceptors layer; horizontal and bipolar cells layer; ganglion cells layer).

Figure 2 shows the neuromorphic analog circuits implemented into 3D LSI. The circuits are divided into three circuit layers. Photodiodes and photocircuits are designed on the first layer. Horizontal/bipolar cell circuits and ganglion cell circuits are on the second and third layers, respectively. Each circuit layer is fabricated on different Si wafers and stacked into a 3D LSI.

Fig.2 Circuit diagram of three dimensional vision chip.

Fig.3 Layout of the three dimensional vision chip.

Light signals are converted into electrical analog signals by photodiodes and photocircuits on the first layer. The electric signals are transferred from the first layer to the second layer through the vertical interconnections. The operational amplifiers and resistor network on the
A productive, systematic framework for the representation of visual structure Shimon Edelman 232 Uris Hall, Dept. of Psychology Cornell University Ithaca, NY 14853-7601 se37@cornell.edu Nathan Intrator Institute for Brain and Neural Systems Box 1843, Brown University Providence, RI 02912 Nathan_Intrator@brown.edu Abstract We describe a unified framework for the understanding of structure representation in primate vision. A model derived from this framework is shown to be effectively systematic in that it has the ability to interpret and associate together objects that are related through a rearrangement of common "middle-scale" parts, represented as image fragments. The model addresses the same concerns as previous work on compositional representation through the use of what+where receptive fields and attentional gain modulation. It does not require prior exposure to the individual parts, and avoids the need for abstract symbolic binding. 1 The problem of structure representation The focus of theoretical discussion in visual object processing has recently started to shift from problems of recognition and categorization to the representation of object structure. Although view- or appearance-based solutions for these problems proved effective on a variety of object classes [1], the "holistic" nature of this approach - the lack of explicit representation of relational structure - limits its appeal as a general framework for visual representation [2]. The main challenges in the processing of structure are productivity and systematicity, two traits commonly attributed to human cognition. A visual system is productive if it is open-ended, that is, if it can deal effectively with a potentially infinite set of objects.
A visual representation is systematic if a well-defined change in the spatial configuration of the object (e.g., swapping top and bottom parts) causes a principled change in the representation (e.g., the interchange of the representations of top and bottom parts [3, 2]). A solution commonly offered to the twin problems of productivity and systematicity is compositional representation, in which symbols standing for generic parts drawn from a small repertoire are bound together by categorical symbolically coded relations [4]. 2 The Chorus of Fragments In visual representation, the need for symbolic binding may be alleviated by using location in the visual field in lieu of the abstract frame that encodes object structure. Intuitively, the constituents of the object are then bound to each other by virtue of residing in their proper places in the visual field; this can be thought of as a pegboard, whose spatial structure supports the arrangement of parts suspended from its pegs. This scheme exhibits shallow compositionality, which can be enhanced by allowing the "pegboard" mechanism to operate at different spatial scales, yielding effective systematicity across levels of resolution. Coarse coding the constituents (e.g., representing each object fragment in terms of its similarities to some basis shapes) will render the scheme productive. We call this approach to the representation of structure the Chorus of Fragments (CoF; [5]). 2.1 Neurobiological building blocks What+Where cells. The representation of spatially anchored object fragments postulated by the CoF model can be supported by what+where neurons, each tuned both to a certain shape class and to a certain range of locations in the visual field. Such cells have been found in the monkey in areas V4 and posterior IT [6], and in the prefrontal cortex [7]. Attentional gain fields.
To decouple the representation of object structure from its location in the visual field, one needs a version of the what+where mechanism in which the response of the cell depends not merely on the location of the stimulus with respect to fixation (as in classical receptive fields), but also on its location with respect to the focus of attention. Indeed, modulatory effects of object-centered attention on classical RF structure (gain fields) have been found in area V4 [8]. 2.2 Implemented model Our implementation of the CoF model involves what+where cells with attention-modulated gain fields, and is aimed at productive and systematic treatment of composite shapes in object-centered coordinates. It operates directly on gray-level images, pre-processed by a model of the primary visual cortex [9], with complex-cell responses modified to use the MAX operation suggested in [10]. In the model, one what+where unit is assigned to the top and one to the bottom fragment of the visual field, each extracted by an appropriately configured Gaussian gain profile (Figure 2, left). The units are trained (1) to discriminate among five objects, (2) to tolerate translation within the hemifield, and (3) to provide an estimate of the reliability of their output, through an autoassociation mechanism attempting to reconstruct the stimulus image [11, 12]. Within each hemifield, the five outputs of a unit can provide a coarse coding of novel objects belonging to the familiar category, in a manner useful for translation-tolerant recognition [13]. The reliability estimate carries information about category, allowing outputs for objects from other categories to be squelched. Most importantly, due to the spatial localization of the unit's receptive field, the system can distinguish between different configurations of the same shapes, while noting the fragment-wise similarities.
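A minimal sketch of how a Gaussian gain profile might extract the top and bottom fragments of an image for the two what+where units. The row-wise windowing, image format, and sigma value are our assumptions for illustration, not the model's actual parameters:

```python
import math

def gaussian_gain(n_rows, center_row, sigma):
    # gain profile over image rows, peaked at the fragment's location
    return [math.exp(-((r - center_row) ** 2) / (2 * sigma ** 2))
            for r in range(n_rows)]

def extract_fragment(image, center_row, sigma):
    # modulate each pixel by the row gain, selecting one fragment of the field
    gain = gaussian_gain(len(image), center_row, sigma)
    return [[g * px for px in row] for g, row in zip(gain, image)]

# toy 4x2 image of uniform intensity
img = [[1.0, 1.0]] * 4
top = extract_fragment(img, center_row=0, sigma=1.0)     # emphasizes top rows
bottom = extract_fragment(img, center_row=3, sigma=1.0)  # emphasizes bottom rows
```

Each windowed image would then feed the corresponding unit, so that the top unit effectively sees only the upper fragment and vice versa.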
We assume that during learning the system performs multiple fixations of the target object, effectively providing the what+where units with a basis for spanning the space of stimulus translations.

Figure 1: Left: the CoF model conceptualized as a "computation cube" trained to distinguish among three fragments (1, 6, 9), each possibly appearing at two locations (above or below the center of attention). A parallel may be drawn between the computation cube and a cortical hypercolumn; in the inferotemporal cortex, cells selective for specific shapes may be arranged in columns, with the dimension perpendicular to the cortical surface encoding different variants of the same shape [14]. It is not known whether the attention-centered location of the shape, which affects the responses of V4 cells [8], is mapped in an orderly fashion onto some physical dimension(s) of the cortex. Right: the estimation of the marginal probabilities of shapes, which can be used to decide whether to allocate a unit coding for their composition, can be carried out simply by summing the activities of units along the different dimensions of the computation cube.

It is up to the model, however, to figure out that the objects may be composed of recurring fragments, and to self-organize in a manner that would allow it to deal with novel configurations of those fragments. This problem, which arises both at the level of fragments and of their constituent features, can be addressed within the Minimum Description Length (MDL) framework. Specifically, we propose to construct receptive fields (RFs) for composite objects so as to capture the deviation from independence between the probability distributions of the responses of RFs tuned to their fragments. This implies a savings in the description length of the composite object.
Suppose, for example, that r_t is the response of a unit tuned roughly to the top half of the character 6 and r_b the response of a unit tuned to its bottom half. The construction of a more complex RF combining the responses of these two units will be justified when

P(r_t, r_b) >> P(r_t) P(r_b) (1)

or, more practically, when some measure of deviation from independence between P(r_t) and P(r_b) is large (the simplest such measure would be the covariance, namely, the second moment of the joint distribution, but we believe that higher moments may also be required, as suggested by the extensive work on measuring deviation from Gaussian distributions). By this criterion, a composite RF will be constructed that recognizes the two "parts" of the character 6 when they are appropriately located: the probability on the LHS of eq. 1 in that case would be proportional to 1/10, while the probability of the RHS would be proportional to 1/100 (assuming that all characters are equiprobable, and that their fragments never appear in isolation). At the same time, a composite RF tuned, say, to 6 above 3 (see section 3) will not be allocated, because the probability of such a complex feature as measured by either the RHS or the LHS of eq. 1 is proportional to 1/100. We note that this feature analysis can be performed on the marginal probabilities of the corresponding fragments, which are by definition less sensitive to image parameters such as the exact location or scale, and can be based on a family of features (cf. Figure 1). A discussion of this approach and of its relationship to the reconstruction constraint we impose when training the fragment-tuned modules is beyond the scope of this paper. A parallel can be drawn between the MDL framework just outlined and the findings concerning what+where cells and gain fields in the shape processing pathway in the monkey cortex.
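The allocation criterion of eq. 1 can be checked directly against the worked numbers in the text (ten equiprobable characters whose fragments never appear in isolation). The cutoff value here is our illustrative assumption; the paper proposes a measure of deviation from independence rather than a fixed threshold:

```python
def allocate_composite(p_joint, p_a, p_b, threshold=5.0):
    # allocate a composite RF when the joint response probability greatly
    # exceeds the product of the marginals (deviation from independence);
    # the threshold is an assumed cutoff, not a value from the paper
    return p_joint / (p_a * p_b) >= threshold

# the two halves of "6" co-occur whenever 6 is shown: joint ~ 1/10,
# marginals ~ 1/10 each, so the ratio is ~10 -> allocate
print(allocate_composite(1 / 10, 1 / 10, 1 / 10))    # True
# "6 above 3": joint ~ 1/100 ~ product of the marginals -> don't allocate
print(allocate_composite(1 / 100, 1 / 10, 1 / 10))   # False
```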
Under the interpretation we propose, the features at all levels of the hierarchy are coarsely coded, and each feature is associated with a rough location in the visual field, so that composite features necessarily represent more complex spatial structure than their constituents, without separately implemented binding, and without a combinatorial proliferation of features. The computational experiments described below concentrate on these novel characteristics of our model, rather than on the standard MDL machinery.

Figure 2: The CoF model, trained on five composite objects (1 over 6, 2 over 7, etc.). Left: the model consists of two what+where units, responsible for the bottom and the top fragments of the stimulus, respectively. Gain fields (boxes labeled below center and above center) steer each input fragment to the appropriate unit. The learning mechanism (R/C, for Reconstruction/Classification) was implemented as a radial basis function network. The reconstruction error modulates the classification outputs. Right: training the model, viewed as a computation cube. Multiple fixations of the stimulus (of which three are illustrated), along with Gaussian windows selecting stimulus fragments, allow the system to learn what+where responses. A cell would only be allocated to a given fragment if it recurs in the company of a variety of other fragments, as warranted by the ratio between their joint probability and the product of the corresponding marginal probabilities (cf. eq. 1 and Figure 1, right; this criterion has not yet been incorporated into the CoF training scheme).

3 Computational experiments

We conducted three experiments that examined the properties of the structured representations emerging from the CoF model. The first experiment (reported elsewhere [13]), involved animal-like shapes and aimed at demonstrating basic productivity and systematicity.
We found that the CoF model is capable of systematically interpreting composite objects to which it was not previously exposed (for example, a half-goat and half-lion chimera is represented as such, by an ensemble of units trained to discriminate between three altogether different animals). In the second experiment, a version of the CoF model (Figure 2) was charged with learning to reuse fragments of the members of the training set (five bipartite objects composed of shapes of numerals from 1 through 0) in interpreting novel compositions of the same fragments. The gain field mechanism built into the CoF model allowed it to respond largely systematically to the learned fragments even when these were shown in novel locations, both absolute and relative (Figure 3, left). The third experiment addressed a basic prediction of the CoF model, stemming from its reliance on what+where mechanisms: the interaction between effects of shape and location in object representation. Such interaction had been found in a psychophysical study [15], in which the task was 4-alternative forced-choice classification of two-part stimuli consisting of simple geometric shapes (cube, cylinder, sphere, cone). The composite stimuli were defined by two variables, shape and location, each of which could be same, neutral, or different in the prime and the target (yielding 9 conditions altogether). Response times of human subjects revealed effects of shape and location (what+where), but not of shape alone; the pattern of priming across the nine conditions was replicated by the CoF model (correlation between model and human data r = 0.85), using the same stimuli as in the psychophysical experiment. 4 Discussion Because CoF relies on retinotopy rather than on abstract binding, its representation of spatial structure is location-specific; so is the treatment of structure by the human visual system, as indicated by a number of findings.
For example, priming in a subliminal perception task was found to be confined to a quadrant of the visual field [16]. The notion that the representation of an object may be tied to a particular location in the visual field where it is first observed is compatible with the concept of object file, a hypothetical record created by the visual system for every encountered object, which persists as long as the object is observed. Moreover, location (as it figures in the CoF model) should be interpreted relative to the focus of attention, rather than retinotopically [17]. The idea that global relationships (hence, large-scale structure) have precedence over local ones [18], which is central to our approach, has withstood extensive testing in the past two decades. Even with the perceptual salience of the global and local structure equated, subjects are able to process the relations among elements before the elements themselves are identified [19]. More generally, humans are limited in their ability to represent spatial structure, in that the representation of spatial relations requires spatial attention. For example, visual search is difficult when targets differ from distractors only in the spatial relation between their elements, as if "... attention is required to bind features ..." [20].

Figure 3: Left: the response of the CoF model to a novel composite object, 6 (which only appeared in the bottom position in the training set) over 3 (which was only seen in the top position). The interpretations offered by the model were correct in 94 out of the 100 possible test cases (10 digits on top x 10 digits on the bottom) in this experiment. Note: in the test scenario, each unit (above and below) must be fed each of the two input fragments (above and below), hence the 20 bars in the plots of the model's output. Right: the non-monotonic dependence of the mean entropy per output unit (ordinate axis on the right; dashed line) on the spread constant σ of the radial basis functions (abscissa) indicates that entropy alone should not be used as a training criterion in object representation systems.

The CoF model offers a unified framework, rooted in the MDL principle, for the understanding of these behavioral findings and of the functional significance of what+where receptive fields and attentional gain modulation. It extends the previous use of gain fields in the modeling of translation invariance [21] and of object-centered hemi-neglect [22], and highlights a parallel between what+where cells and probabilistic approaches to structure representation in computational vision (e.g., [23]). The representational framework we described is both productive and effectively systematic. Specifically, it has the ability, as a matter of principle, to recognize as such objects that are related through a rearrangement of mesoscopic parts, without being taught those parts individually, and without the need for abstract symbolic binding.

References

[1] S. Edelman. Computational theories of object recognition. Trends in Cognitive Science, 1:296-304, 1997.
[2] J. E. Hummel. Where view-based theories of human object recognition break down: the role of structure in human shape perception. In E. Dietrich and A. Markman, eds., Cognitive Dynamics: conceptual change in humans and machines, ch. 7. Erlbaum, Hillsdale, NJ, 2000.
[3] R. F. Hadley. Cognition, systematicity, and nomic necessity. Mind and Language, 12:137-153, 1997.
[4] E. Bienenstock, S. Geman, and D. Potter. Compositionality, MDL priors, and object recognition. In M. C. Mozer, M. I. Jordan, and T. Petsche, editors, NIPS 9. MIT Press, 1997.
[5] S. Edelman.
Representation and recognition in vision. MIT Press, Cambridge, MA, 1999. [6] E. Kobatake and K. Tanaka. Neuronal selectivities to complex object features in the ventral visual pathway of the macaque cerebral cortex. J. Neurophysiol., 71:856-867, 1994. [7] S. C. Rao, G. Rainer, and E. K. Miller. Integration of what and where in the primate prefrontal cortex. Science, 276:821-824, 1997. [8] C. E. Connor, D. C. Preddie, J. L. Gallant, and D. C. Van Essen. Spatial attention effects in macaque area V4. J. of Neuroscience, 17:3201-3214, 1997. [9] D. J. Heeger, E. P. Simoncelli, and J. A. Movshon. Computational models of cortical visual processing. Proc. Nat. Acad. Sci., 93:623-627, 1996. [10] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2:1019-1025, 1999. [11] D. Pomerleau. Input reconstruction reliability estimation. In C. L. Giles, S. J. Hanson, and J. D. Cowan, editors, NIPS 5, pages 279-286. Morgan Kaufmann, 1993. [12] I. Stainvas, N. Intrator, and A. Moshaiov. Improving recognition via reconstruction. Preprint, 2000. [13] S. Edelman and N. Intrator. (Coarse coding of shape fragments) + (retinotopy) ≈ representation of structure. Spatial Vision, 13:255-264, 2000. [14] I. Fujita, K. Tanaka, M. Ito, and K. Cheng. Columns for visual features of objects in monkey inferotemporal cortex. Nature, 360:343-346, 1992. [15] S. Edelman and F. N. Newell. On the representation of object structure in human vision: evidence from differential priming of shape and location. CSRP 500, University of Sussex, 1998. [16] M. Bar and I. Biederman. Subliminal visual priming. Psychological Science, 9(6):464-469, 1998. [17] A. Treisman. Perceiving and re-perceiving objects. American Psychologist, 47:862-875, 1992. [18] D. Navon. Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9:353-383, 1977. [19] B. C. Love, J. N. Rouder, and E. J. Wisniewski.
A structural account of global and local processing. Cognitive Psychology, 38:291-316, 1999. [20] A. M. Treisman and N. G. Kanwisher. Perceiving visually presented objects: recognition, awareness, and modularity. Current Opinion in Neurobiology, 8:218-226, 1998. [21] E. Salinas and L. F. Abbott. Invariant visual responses from attentional gain fields. J. of Neurophysiology, 77:3267-3272, 1997. [22] S. Deneve and A. Pouget. Neural basis of object-centered representations. In M. I. Jordan, M. J. Kearns, and S. A. Solla, editors, NIPS 11, Cambridge, MA, 1998. MIT Press. [23] M. C. Burl, M. Weber, and P. Perona. A probabilistic approach to object recognition using local photometry and global geometry. In H. Burkhardt and B. Neumann, editors, Proc. 4th Europ. Conf. Comput. Vision, LNCS Vol. 1406-1407, pages 628-641. Springer-Verlag, June 1998.
2000
152
1,812
Convergence of Large Margin Separable Linear Classification

Tong Zhang
Mathematical Sciences Department
IBM T.J. Watson Research Center, Yorktown Heights, NY 10598
tzhang@watson.ibm.com

Abstract

Large margin linear classification methods have been successfully applied to many applications. For a linearly separable problem, it is known that under appropriate assumptions, the expected misclassification error of the computed "optimal hyperplane" approaches zero at a rate proportional to the inverse of the training sample size. This rate is usually characterized by the margin and the maximum norm of the input data. In this paper, we argue that another quantity, namely the robustness of the input data distribution, also plays an important role in characterizing the convergence behavior of the expected misclassification error. Based on this concept of robustness, we show that for a large margin separable linear classification problem, the expected misclassification error may converge to zero exponentially fast in the training sample size.

1 Introduction

We consider the binary classification problem: to determine a label y in {-1, 1} associated with an input vector x. A useful method for solving this problem is to use linear discriminant functions. Specifically, we seek a weight vector w and a threshold θ such that w^T x < θ if the label is y = -1, and w^T x ≥ θ if the label is y = 1. In this paper, we are mainly interested in problems that are linearly separable by a positive margin (although, as we shall see later, our analysis is suitable for non-separable problems); that is, there exists a hyperplane that perfectly separates the in-class data from the out-of-class data. We shall also assume θ = 0 throughout the rest of the paper for simplicity. This restriction usually does not cause problems in practice, since one can always append a constant feature to the input data x, which offsets the effect of θ. For linearly separable problems, given a training set of n labeled data (x^1, y^1), ..., (x^n, y^n), Vapnik proposed a method that optimizes a hard margin bound, which he calls the "optimal hyperplane" method (see [11]). The optimal hyperplane w_n is the solution to the following quadratic programming problem:

    min_w (1/2) w^T w    s.t.  w^T x^i y^i ≥ 1  for i = 1, ..., n.   (1)

For linearly non-separable problems, a generalization of the optimal hyperplane method appeared in [2], where a slack variable ξ_i is introduced for each data point (x^i, y^i), i = 1, ..., n. We compute a hyperplane w_n that solves

    min_{w,ξ} (1/2) w^T w + C Σ_i ξ_i    s.t.  w^T x^i y^i ≥ 1 - ξ_i,  ξ_i ≥ 0  for i = 1, ..., n,   (2)

where C > 0 is a given parameter (also see [11]). In this paper, we are interested in the quality of the computed weight w_n for the purpose of predicting the label y of an unseen data point x. We study this predictive power of w_n in the standard batch learning framework: we assume that the training data (x^i, y^i), i = 1, ..., n, are independently drawn from the same underlying data distribution D, which is unknown. The predictive power of the computed parameter w_n then corresponds to the classification performance of w_n with respect to the true distribution D. We organize the paper as follows. In Section 2, we briefly review a number of existing techniques for analyzing separable linear classification problems. We then derive an exponential convergence rate of the misclassification error in Section 3 for certain large margin linear classification problems. Section 4 compares the newly derived bound with known results from the traditional margin analysis. We explain that the exponential bound relies on a new quantity (the robustness of the distribution) which is not explored in a traditional margin bound. Note that for certain batch learning problems, exponential learning curves have already been observed [10]. It is thus not surprising that an exponential rate of convergence can be achieved by large margin linear classification.
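Eliminating the slack variables turns formulation (2) into minimizing an averaged hinge-type loss plus a quadratic penalty (the rewriting used later in Section 3). The following NumPy sketch minimizes that objective by subgradient descent; the toy data, step-size schedule, and regularization value are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def f_loss(z):
    # f(z) = -z for z <= 0, 0 for z > 0, i.e. max(0, -z)
    return np.maximum(0.0, -z)

def train_w(X, y, lam, steps=2000, lr=0.5):
    """Subgradient descent on (1/n) sum_i f(w^T x^i y^i - 1) + (lam/2) w^T w."""
    n, d = X.shape
    w = np.zeros(d)
    for t in range(steps):
        z = (X @ w) * y - 1.0                 # margins minus one
        beta = np.where(z <= 0, -1.0, 0.0)    # subgradient f'(z), in [-1, 0]
        grad = (X * (beta * y)[:, None]).mean(axis=0) + lam * w
        w -= lr * grad / (1 + t) ** 0.5       # decaying step size
    return w

rng = np.random.default_rng(0)
# linearly separable 2-D toy data: the label is the sign of the first coordinate
X = rng.normal(size=(200, 2))
X[:, 0] += np.sign(X[:, 0])                   # push the classes apart (margin >= 1)
y = np.sign(X[:, 0])

w = train_w(X, y, lam=0.01)
train_err = np.mean((X @ w) * y <= 0)
print("training error:", train_err)
```

On this separable toy set the learned w recovers a separating direction, so the training error goes to zero; the interesting question the paper studies is how fast the *expected* error on fresh samples decays with n.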
2 Some known results on generalization analysis

There are a number of ways to obtain bounds on the generalization error of a linear classifier. A general framework is to use techniques from empirical processes (aka VC analysis). Many such results related to large margin classification are described in chapter 4 of [3]. The main advantage of this framework is its generality. The analysis does not require the estimated parameter to converge to the true parameter, which is ideal for combinatorial problems. However, for problems that are numerical in nature, the potential parameter space can be significantly reduced by using the first order condition of the optimal solution. In this case, the VC analysis may become suboptimal, since it assumes a larger search space than what a typical numerical procedure uses. Generally speaking, for a problem that is linearly separable with a large margin, the expected classification error of the computed hyperplane resulting from this analysis is of the order O(log n / n).¹ Similar generalization bounds can also be obtained for non-separable problems. In chapter 10 of [11], Vapnik described a leave-one-out cross-validation analysis for linearly separable problems. This analysis takes into account the first order KKT condition of the optimal hyperplane w_n. The expected generalization performance from this analysis is O(1/n), which is better than the corresponding bounds from the VC analysis. Unfortunately, this technique is only suitable for deriving an expected generalization bound (for example, it is not useful for obtaining a PAC style probability bound). Another well-known technique for analyzing linearly separable problems is the mistake bound framework in online learning.
It is possible to obtain an algorithm with a small generalization error in the batch learning setting from an algorithm with a small online mistake bound. The reader is referred to [6] and references therein for this type of analysis. The technique may lead to a bound with an expected generalization performance of O(1/n). Besides the above mentioned approaches, generalization ability can also be studied in the statistical mechanical learning framework. It was shown that for linearly separable problems, exponential decrease of the misclassification error is possible under this framework [1, 5, 7, 8]. Unfortunately, it is unclear how to relate the statistical mechanical learning framework to the batch learning framework considered in this paper. Their analysis, employing approximation techniques, does not seem to imply the small sample bounds in which we are interested. The statistical mechanical learning result suggests that it may be possible to obtain a similar exponential decay of the misclassification error in the batch learning setting, which we prove in the next section. Furthermore, we show that the exponential rate depends on a quantity that is different from the traditional margin concept. Our analysis relies on a PAC style probability estimate on the convergence rate of the estimated parameter from (2) to the true parameter. Consequently, it is suitable for non-separable problems. A direct analysis of the convergence rate of the estimated parameter to the true parameter is important for problems that are numerical in nature, such as (2). However, a disadvantage of our analysis is that we are unable to directly deal with the linearly separable formulation (1).

¹Bounds described in [3] would imply an expected classification error of O(log n / n), which can be slightly improved (by a log n factor) if we adopt a slightly better covering number estimate, such as the bounds in [12, 14].
3 Exponential convergence

We can rewrite the SVM formulation (2) by eliminating ξ as:

    w_n(λ) = argmin_w (1/n) Σ_i f(w^T x^i y^i - 1) + (λ/2) w^T w,   (3)

where λ = 1/(nC) and

    f(z) = -z if z ≤ 0;  f(z) = 0 if z > 0.

Denote by D the true underlying data distribution of (x, y), and let w_*(λ) be the optimal solution with respect to the true distribution:

    w_*(λ) = arg inf_w E_D f(w^T xy - 1) + (λ/2) w^T w.   (4)

Let w_* be the solution to

    w_* = arg inf_w (1/2) w^T w    s.t.  E_D f(w^T xy - 1) = 0,   (5)

which is the infinite-sample version of the optimal hyperplane method. Throughout this section, we assume ‖w_*‖_2 < ∞ and E_D ‖x‖_2 < ∞. The latter condition ensures that E_D f(w^T xy - 1) ≤ ‖w‖_2 E_D ‖x‖_2 + 1 exists for all w.

3.1 Continuity of solution under regularization

In this section, we show that ‖w_*(λ) - w_*‖_2 → 0 as λ → 0. This continuity result allows us to approximate (5) by using (4) and (3) with a small positive regularization parameter λ. We only need to show that within any sequence of λ that converges to zero, there exists a subsequence λ_i → 0 such that w_*(λ_i) converges to w_* strongly. We first consider the following inequality, which follows from the definition of w_*(λ):

    E_D f(w_*(λ)^T xy - 1) + (λ/2) w_*(λ)² ≤ (λ/2) w_*².   (6)

Therefore ‖w_*(λ)‖_2 ≤ ‖w_*‖_2. It is well-known that every bounded sequence in a Hilbert space contains a weakly convergent subsequence (cf. Proposition 66.4 in [4]). Therefore within any sequence of λ that converges to zero, there exists a subsequence λ_i → 0 such that w_*(λ_i) converges weakly. We denote the limit by w̄. Since f(w_*(λ_i)^T xy - 1) is dominated by ‖w_*‖_2 ‖x‖_2 + 1, which has a finite integral with respect to D, we obtain from (6) and the Lebesgue dominated convergence theorem:

    0 = lim_i E_D f(w_*(λ_i)^T xy - 1) = E_D lim_i f(w_*(λ_i)^T xy - 1) = E_D f(w̄^T xy - 1).   (7)

Also note that ‖w̄‖_2 ≤ lim_i ‖w_*(λ_i)‖_2 ≤ ‖w_*‖_2; therefore, by the definition of w_*, we must have w̄ = w_*. Since w_* is the weak limit of w_*(λ_i), we obtain ‖w_*‖_2 ≤ lim_i ‖w_*(λ_i)‖_2.
Also since ‖w_*(λ_i)‖_2 ≤ ‖w_*‖_2, we have lim_i ‖w_*(λ_i)‖_2 = ‖w_*‖_2. This equality implies that w_*(λ_i) converges to w_* strongly, since

    lim_i (w_*(λ_i) - w_*)² = lim_i w_*(λ_i)² + w_*² - 2 lim_i w_*(λ_i)^T w_* = 0.

3.2 Accuracy of the estimated hyperplane with non-zero regularization parameter

Our goal is to show that for the estimation method (3) with a nonzero regularization parameter λ > 0, the estimated parameter w_n(λ) converges to the true parameter w_*(λ) in probability as the sample size n → ∞. Furthermore, we give a large deviation bound on the rate of convergence. From (4), we obtain the following first order condition:

    E_D β(λ, x, y) xy + λ w_*(λ) = 0,   (8)

where β(λ, x, y) = f'(w_*(λ)^T xy - 1) and f'(z) ∈ [-1, 0] denotes a member of the subgradient of f at z [9].² In the finite sample case, we can also interpret β(λ, x, y) in (8) as a scaled dual variable α: β = -α/C, where α appears in the dual (or kernel) formulation of an SVM (for example, see chapter 10 of [11]). The convexity of f implies that f(z_1) + (z_2 - z_1) f'(z_1) ≤ f(z_2) for any subgradient f' of f. This implies the following inequality:

    (1/n) Σ_i f(w_*(λ)^T x^i y^i - 1) + (w_n(λ) - w_*(λ))^T (1/n) Σ_i β(λ, x^i, y^i) x^i y^i ≤ (1/n) Σ_i f(w_n(λ)^T x^i y^i - 1),

which is equivalent to:

    (1/n) Σ_i f(w_*(λ)^T x^i y^i - 1) + (λ/2) w_*(λ)² + (w_n(λ) - w_*(λ))^T [(1/n) Σ_i β(λ, x^i, y^i) x^i y^i + λ w_*(λ)] + (λ/2)(w_*(λ) - w_n(λ))² ≤ (1/n) Σ_i f(w_n(λ)^T x^i y^i - 1) + (λ/2) w_n(λ)².

²For readers not familiar with the subgradient concept in convex analysis: our analysis requires little modification if we replace f with a smoother convex function such as f², which avoids the discontinuity in the first order derivative.

Also note that by the definition of w_n(λ), we have:

    (1/n) Σ_i f(w_n(λ)^T x^i y^i - 1) + (λ/2) w_n(λ)² ≤ (1/n) Σ_i f(w_*(λ)^T x^i y^i - 1) + (λ/2) w_*(λ)².

Therefore, by comparing the above two inequalities, we obtain:

    (λ/2)(w_*(λ) - w_n(λ))² ≤ (w_*(λ) - w_n(λ))^T [(1/n) Σ_i β(λ, x^i, y^i) x^i y^i + λ w_*(λ)]
                            ≤ ‖w_*(λ) - w_n(λ)‖_2 ‖(1/n) Σ_i β(λ, x^i, y^i) x^i y^i + λ w_*(λ)‖_2.

Therefore we have

    ‖w_*(λ) - w_n(λ)‖_2 ≤ (2/λ) ‖(1/n) Σ_i β(λ, x^i, y^i) x^i y^i + λ w_*(λ)‖_2
                        = (2/λ) ‖(1/n) Σ_i β(λ, x^i, y^i) x^i y^i - E_D β(λ, x, y) xy‖_2,   (9)

where the last equality uses (8). Note that in (9), we have already bounded the convergence of w_n(λ) to w_*(λ) in terms of the convergence of the empirical expectation of a random vector β(λ, x, y) xy to its mean. In order to obtain a large deviation bound on the convergence rate, we need the following result, which can be found in [13], page 95:

Theorem 3.1 Let ξ_i be zero-mean independent random vectors in a Hilbert space. If there exists M > 0 such that for all natural numbers l ≥ 2, Σ_{i=1}^n E ‖ξ_i‖_2^l ≤ (n/2) b l! M^l, then for all δ > 0:

    P(‖(1/n) Σ_i ξ_i‖_2 ≥ δ) ≤ 2 exp(-(n/2) δ² / (bM² + δM)).

Using the fact that β(λ, x, y) ∈ [-1, 0], it is easy to verify the following corollary using Theorem 3.1 and (9), where we also bound the l-th moment of the right hand side of (9) using the following form of Jensen's inequality: |a + b|^l ≤ 2^{l-1}(|a|^l + |b|^l) for l ≥ 2.

Corollary 3.1 If there exists M > 0 such that for all natural numbers l ≥ 2, E_D ‖x‖_2^l ≤ (b/2) l! M^l, then for all δ > 0:

    P(‖w_*(λ) - w_n(λ)‖_2 ≥ δ) ≤ 2 exp(-(n/8) λ²δ² / (4bM² + λδM)).

Let P_D(·) denote the probability with respect to the distribution D; then the following bound on the expected misclassification error of the computed hyperplane w_n(λ) is a straightforward consequence of Corollary 3.1:

Corollary 3.2 Under the assumptions of Corollary 3.1, for any non-random values λ, γ, K > 0, we have:

    E_X P_D(w_n(λ)^T xy ≤ 0) ≤ P_D(w_*(λ)^T xy ≤ γ) + P_D(‖x‖_2 ≥ K) + 2 exp(-(n/8) λ²γ² / (4bK²M² + λγKM)),

where the expectation E_X is taken over n random samples from D, with w_n(λ) estimated from the n samples.

We now consider linearly separable classification problems where the solution w_* of (5) is finite. Throughout the rest of this section, we impose the additional assumption that the distribution D is finitely supported: ‖x‖_2 ≤ M almost everywhere with respect to the measure D. From Section 3.1, we know that for any sufficiently small positive number λ, ‖w_* - w_*(λ)‖_2 < 1/M, which means that w_*(λ) also separates the in-class data from the out-of-class data with a margin of at least 2(1 - M‖w_* - w_*(λ)‖_2). Therefore, for sufficiently small λ, we can define:

    γ(λ) = sup{δ : P_D(w_*(λ)^T xy ≤ δ) = 0} ≥ 1 - M‖w_* - w_*(λ)‖_2 > 0.

By Corollary 3.2, we obtain the following upper bound on the misclassification error if we compute a linear separator from (3) with a non-zero small regularization parameter λ:

    E_X P_D(w_n(λ)^T xy ≤ 0) ≤ 2 exp(-(n/8) λ²γ(λ)² / (4M⁴ + λγ(λ)M²)).

This indicates that the expected misclassification error of an appropriately computed hyperplane for a linearly separable problem decays exponentially in n. However, the rate of convergence depends on λγ(λ)/M². This quantity is different from the margin concept, which has been widely used in the literature to characterize the generalization behavior of a linear classification problem. The new quantity measures the convergence rate of w_*(λ) to w_* as λ → 0. The faster the convergence, the more "robust" the linear classification problem is, and hence the faster the exponential decay of the misclassification error. As we shall see in the next section, this "robustness" is related to the degree of outliers in the problem.

4 Example

We give an example to illustrate the "robustness" concept that characterizes the exponential decay of the misclassification error. It is known from Vapnik's cross-validation bound in [11] (Theorem 10.7) that by using the large margin idea alone, one can derive an expected misclassification error bound of the order O(1/n), where the constant is margin dependent. We show that this bound is tight by using the following example.

Example 4.1 Consider a two-dimensional problem. Assume that with probability 1 - γ, we observe a data point x with label y such that xy = [1, 0]; and with probability γ, we observe a data point x with label y such that xy = [-1, 1].
This problem is obviously linearly separable with a large margin that is independent of γ. Now, for n random training data, with probability at most γⁿ + (1 - γ)ⁿ, we observe either x^i y^i = [1, 0] for all i = 1, ..., n, or x^i y^i = [-1, 1] for all i = 1, ..., n. In all other cases, the computed optimal hyperplane is w_n = w_*. This means that the expected misclassification error is γ(1 - γ)(γ^{n-1} + (1 - γ)^{n-1}). This error converges to zero exponentially as n → ∞; however, the convergence rate depends on the fraction of outliers in the distribution, characterized by γ. In particular, for any n, if we let γ = 1/n, then we have an expected misclassification error that is at least (1/n)(1 - 1/n)ⁿ ≈ 1/(en). □

The above tightness construction of the linear decay rate of the expected generalization error (using the margin concept alone) requires a scenario in which a small fraction of the data (on the order of the inverse sample size) is very different from the rest. This small portion of data can be considered as outliers, which can be measured by the "robustness" of the distribution. In general, w_*(λ) converges to w_* slowly when there exists such a small portion of data (outliers) that cannot be correctly classified from the observation of the remaining data. It can be seen that the optimal hyperplane in (1) is quite sensitive to even a single outlier. Intuitively, this instability is quite undesirable. However, previous large margin learning bounds seem to have dismissed this concern. This paper indicates that such a concern is still valid. In the worst case, even if the problem is separable by a large margin, outliers can still cause a slowdown of the exponential convergence rate.

5 Conclusion

In this paper, we derived new generalization bounds for large margin linearly separable classification. Even though we have only discussed the consequences of this analysis for separable problems, the technique can be easily applied to non-separable problems (see Corollary 3.2).
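The arithmetic of Example 4.1 can be checked directly. The short script below (ours, not from the paper) evaluates the expected error γ(1 - γ)(γ^{n-1} + (1 - γ)^{n-1}) for a fixed outlier fraction, where it decays exponentially, and for γ = 1/n, where it stays close to the claimed 1/(en) lower bound:

```python
import math

def expected_error(gamma, n):
    """Expected misclassification error of the optimal hyperplane in Example 4.1.
    The hyperplane is wrong only when all n samples are of the same type."""
    return gamma * (1 - gamma) * (gamma ** (n - 1) + (1 - gamma) ** (n - 1))

# Fixed gamma: exponential decay in n.
for n in (10, 20, 40):
    print(n, expected_error(0.1, n))

# gamma = 1/n: the error is roughly 1/(e*n), i.e. only a linear rate.
for n in (10, 100, 1000):
    err = expected_error(1.0 / n, n)
    print(n, err, "vs 1/(e*n) =", 1 / (math.e * n))
```

The second loop makes the paper's point concrete: a vanishing fraction of outliers matched to the sample size pins the expected error to the O(1/n) margin-bound rate, even though the margin itself never changes.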
For large margin separable problems, we showed that exponential decay of the generalization error may be achieved with an appropriately chosen regularization parameter. However, the bound depends on a quantity which characterizes the robustness of the distribution. An important difference between the robustness concept and the margin concept is that outliers may not be observable with large probability from data, while the margin generally will be. This implies that without any prior knowledge, it could be difficult to directly apply our bound using only the observed data.

References

[1] J. K. Anlauf and M. Biehl. The AdaTron: an adaptive perceptron algorithm. Europhys. Lett., 10(7):687-692, 1989. [2] C. Cortes and V. N. Vapnik. Support vector networks. Machine Learning, 20:273-297, 1995. [3] Nello Cristianini and John Shawe-Taylor. An Introduction to Support Vector Machines and other Kernel-based Learning Methods. Cambridge University Press, 2000. [4] Harro G. Heuser. Functional Analysis. John Wiley & Sons Ltd., Chichester, 1982. Translated from the German by John Horvath; a Wiley-Interscience publication. [5] W. Kinzel. Statistical mechanics of the perceptron with maximal stability. In Lecture Notes in Physics, volume 368, pages 175-188. Springer-Verlag, 1990. [6] J. Kivinen and M. K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. Journal of Information and Computation, 132:1-64, 1997. [7] M. Opper. Learning times of neural networks: Exact solution for a perceptron algorithm. Phys. Rev. A, 38(7):3824-3826, 1988. [8] M. Opper. Learning in neural networks: Solvable dynamics. Europhysics Letters, 8(4):389-392, 1989. [9] R. Tyrrell Rockafellar. Convex Analysis. Princeton University Press, Princeton, NJ, 1970. [10] Dale Schuurmans. Characterizing rational versus exponential learning curves. J. Comput. Syst. Sci., 55:140-160, 1997. [11] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, New York, 1998. [12] Robert C.
Williamson, Alexander J. Smola, and Bernhard Schölkopf. Entropy numbers of linear function classes. In COLT'00, pages 309-319, 2000. [13] Vadim Yurinsky. Sums and Gaussian Vectors. Springer-Verlag, Berlin, 1995. [14] Tong Zhang. Analysis of regularized linear functions for classification problems. Technical Report RC-21572, IBM, 1999. Abstract in NIPS'99, pp. 370-376.
2000
16
1,813
Emergence of movement sensitive neurons' properties by learning a sparse code for natural moving images

Rafal Bogacz
Dept. of Computer Science, University of Bristol, Bristol BS8 1UB, U.K.
R.Bogacz@bristol.ac.uk

Malcolm W. Brown
Dept. of Anatomy, University of Bristol, Bristol BS8 1TD, U.K.
M.W.Brown@bristol.ac.uk

Christophe Giraud-Carrier
Dept. of Computer Science, University of Bristol, Bristol BS8 1UB, U.K.
cgc@cs.bris.ac.uk

Abstract

Olshausen & Field demonstrated that a learning algorithm that attempts to generate a sparse code for natural scenes develops a complete family of localised, oriented, bandpass receptive fields, similar to those of 'simple cells' in V1. This paper describes an algorithm which finds a sparse code for sequences of images that preserves information about the input. This algorithm, when trained on natural video sequences, develops bases representing movement in particular directions with particular speeds, similar to the receptive fields of the movement-sensitive cells observed in cortical visual areas. Furthermore, in contrast to previous approaches to learning direction selectivity, the timing of neuronal activity encodes the phase of the movement, so the precise timing of spikes is crucially important to the information encoding.

1 Introduction

It was suggested by Barlow [3] that the goal of early sensory processing is to reduce redundancy in sensory information, and that the activity of sensory neurons encodes independent features. Neural modelling can give some insight into how these neural nets may learn and operate. Atick & Redlich [1] showed that training a neural network on patches of natural images, aiming to remove pair-wise correlation between neuronal responses, results in neurons having centre-surround receptive fields resembling those of retinal ganglion neurons.
Olshausen & Field [11,12] demonstrated that a learning algorithm that attempts to generate a sparse code for natural scenes while preserving information about the visual input develops a complete family of localised, oriented, bandpass receptive fields, similar to those of simple cells in V1. The activities of the neurons implementing this coding signal the presence of edges, which are basic components of natural images. Olshausen & Field chose their algorithm to create a sparse representation because it possesses a higher degree of statistical independence among its outputs [11]. Similar receptive fields were also obtained by training a neural net so as to make the responses of neurons as independent as possible [4]. Other authors [14,16,5] have shown that direction selectivity of the simple cells may also emerge from unsupervised learning. However, there is no agreed account of how the receptive fields of neurons that encode movement are created. This paper describes an algorithm which finds a sparse code for sequences of images that preserves the critical information about the input. This algorithm, trained on natural video images, develops bases representing movements in particular directions at particular speeds, similar to the receptive fields of the movement-sensitive cells observed in early visual areas [9,2]. The activities of the neurons implementing this encoding signal the presence of edges moving with certain speeds in certain directions, with each neuron having its preferred speed and direction. Furthermore, in contrast to all the previous approaches, the timing of neural activity encodes the movement's phase, so the precise timing of spikes is crucially important for information coding. The proposed algorithm is an extension of the one proposed by Olshausen & Field. Hence it is a high level algorithm, which cannot be directly implemented in a biologically plausible neural network.
However, a plausible neural network performing a similar task can be developed. The proposed algorithm is described in Section 2. Sections 3 and 4 present the methods and the results of simulations. Finally, Section 5 discusses how the algorithm differs from previous approaches, and the implications of the presented results.

2 Description of the algorithm

Since the proposed algorithm is an extension of the one described by Olshausen & Field [11,12], this section starts with a brief introduction of the main ideas of their algorithm. They assume that an image x can be represented in terms of a linear superposition of basis functions A_i. For clarity of notation, let us represent both images and bases as vectors created by concatenating rows of pixels as shown in Figure 1, and let each number in the vector describe the brightness of the corresponding pixel. Let the basis functions A_i form the columns of a matrix A. Let the weighting of the above mentioned linear superposition (which changes from one image to the next) be given by a vector s:

    x = As   (1)

The image x may be encoded, for example, using the inverse transformation where it exists. Hence, the image code s is determined by the choice of basis functions A_i. Olshausen & Field [11,12] try to find bases that result in a code s that preserves information about the original image x and that is sparse. Therefore, they minimise the following cost function with respect to A, where λ denotes a constant determining the importance of sparseness [11]:

    E = -[preserved information in s about x] - λ[sparseness of s]   (2)

The algorithm proposed in this paper is similar, but it takes into consideration the temporal order of images. Let us divide time into intervals (to be able to treat it as discrete) and denote the image observed at time t and the code generated for it by x^t and s^t, respectively. The Olshausen & Field algorithm assumes that the image x is a linear superposition (mixture) of s.
By contrast, our algorithm assumes that images are convolved mixtures of s, i.e., each coefficient s^t influences not only x^t but also x^{t-1}, x^{t-2}, ..., x^{t-(T-1)} (that is, s^t affects T consecutive images). Therefore, each basis function may also be represented as a sequence of vectors A_i^0, A_i^1, ..., A_i^{T-1} (corresponding to a sequence of images). These vectors create the columns of the mixing matrices A^0, A^1, ..., A^{T-1}. Each coefficient s_i^t describes how strongly the basis function A_i is present in the last T images. This relationship is illustrated in Figure 2 and is expressed by Equation 3:

    x^t = Σ_{f=0}^{T-1} A^f s^{t+f}   (3)

[Figure 1 appears here; only the caption is recoverable.] Figure 1: Representing images as vectors.

[Figure 2 appears here; only the caption is recoverable.] Figure 2: Encoding of an image sequence. In the example, there are two basis functions, each described by T = 3 vectors. The first basis encodes movement to the right, the second encodes movement down. A sequence x of 6 images is shown on the top and the corresponding code s below. A "spike" over a coefficient s_i^t denotes that s_i^t = 1; the absence of a "spike" denotes s_i^t = 0.

In the proposed algorithm, the basis functions A are also found by optimising the cost function of Equation 2. The detailed method of this minimisation is described below; this paragraph gives an overview. In each optimisation step, a sequence x of P image patches is selected from a random position in the video sequence (P ≥ 2T). Each optimisation step consists of two operations. Firstly, the sequence of coefficient vectors s which minimises the cost function E for the images x is found. Secondly, the basis matrices A are modified in the direction opposite to the gradient of E over A, thus minimising the cost function. These two operations are repeated for different sequences of image patches.
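The convolved-mixture generative model of Equation 3 is easy to simulate. The NumPy sketch below (the patch size, number of bases, and random values are illustrative assumptions, not the paper's actual data) builds an image sequence x from temporal basis functions A^0, ..., A^{T-1} and sparse binary coefficients s, exactly as depicted in Figure 2:

```python
import numpy as np

rng = np.random.default_rng(1)

T = 3         # temporal extent of each basis function
n_pix = 16    # pixels per (vectorized) image patch
n_bases = 2   # number of basis functions
P = 6         # length of the generated image sequence

# A[f] holds one column per basis function, as in Equation 3
A = rng.normal(size=(T, n_pix, n_bases))

# sparse binary coefficients s^t (mostly zero, like the "spikes" of Figure 2)
s = (rng.random(size=(P + T, n_bases)) < 0.2).astype(float)

# x^t = sum_{f=0}^{T-1} A^f s^{t+f}   (Equation 3, with t zero-based here)
x = np.stack([sum(A[f] @ s[t + f] for f in range(T)) for t in range(P)])
print(x.shape)  # (P, n_pix)
```

Each nonzero s_i^t switches on a whole T-frame "movie" of basis vectors ending at frame t, which is why a single coefficient can encode both the presence and the phase of a moving feature.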
In Equation 2, the term "preserved information in s about x" expresses how well x may be reconstructed on the basis of s. In particular, it is defined as the negative of the square of the reconstruction error. The reconstruction error is the difference between the original image sequence x and the sequence of images r reconstructed from s. The sequence r may be reconstructed from s in the following way:

    r^t = Σ_{f=0}^{T-1} A^f s^{t+f}   (4)

The precise definition of the cost function is then:

    E = Σ_{t=T}^{P-T+1} Σ_j (x_j^t - r_j^t)² + λ Σ_{t=T}^{P} Σ_i C(s_i^t / σ)   (5)

In Equation 5, C is a nonlinear function, and σ is a scaling constant. Images at the start and end of the sequence (e.g., x^1, x^P) may share some bases with images not in the sequence (e.g., x^0, x^{-1}, x^{P+1}). To avoid this problem, only the middle images are reconstructed, and only for them is the reconstruction error computed in the cost function. In particular, only images from T to P-T+1 are reconstructed; since the assumed length of the bases is T, those images contain only the bases whose other parts are also contained in the sequence. Since only images from T to P-T+1 are reconstructed, it is clear from Equation 4 that only the coefficients s^T to s^P need to be found. These considerations explain the limits of the outer summations in both terms of Equation 5. For each image sequence, in the first operation, the coefficients s^T, s^{T+1}, ..., s^P minimising E are found using an optimisation method. Minus the gradient of E over s is given by:

    -∂E/∂s_i^t = 2 Σ_f Σ_j (x_j^{t-f} - r_j^{t-f}) A_{ji}^f - (λ/σ) C'(s_i^t / σ)   (6)

In the second operation, the bases A are modified so as to minimise E:

    ΔA_{ji}^f = η Σ_t (x_j^t - r_j^t) s_i^{t+f}   (7)

In Equation 7, η denotes the learning rate. The vector length of each basis function A_i is adapted over time so as to maintain equal variance on each coefficient s, in exactly the same way as described in [12].
3 Methods of simulations

The proposed algorithm was implemented in Matlab, except for finding the s minimising E, which was implemented in C++ using the conjugate gradient method for the sake of speed. In the implementation, the original codes of Olshausen & Field were used and modified (downloaded from http://redwood.ucdavis.edu/bruno/sparsenet.html). Many parameters of the proposed algorithm were taken from [11]. In particular, C(x) = ln(1 + x²), σ is the standard deviation of the pixels' colours in the images, λ is set up such that λ/σ = 0.14, and η = 1. ΔA is averaged over 100 image sequences, and hence the bases A are updated with the average of ΔA every 100 optimisation steps. The length of an image sequence P is set up such that P = 3T. The proposed algorithm was tested on two types of video sequences: 'toy' problems and natural video sequences. Each of the toy sequences consisted of 10 frames of 100x100 pixels. In each sequence there were 20 moving lines. Each line was either horizontal or vertical and 1 pixel thick. Each line was either black or white, which corresponded to positive or negative values of the elements of the x vectors (the grey background corresponded to zero). Each horizontal line moved up or down, and each vertical line left or right, with a speed of one pixel per frame. Then the algorithm was tested on five natural video sequences showing moving people or animals. In each optimisation step, a sequence of image patches was selected from a randomly chosen video. The video sequences were preprocessed. First, to remove the static aspect of the images, the previous frame was subtracted from each frame, i.e., each image encoded the difference between two successive frames of the video. This simple operation reduces redundancy in the data, since corresponding pixels in successive frames tend to have similar colours. An analogous operation may be performed by the retina, since the ganglion cells typically respond to changes in light intensity [10].
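The temporal-difference preprocessing described above is a one-liner in NumPy; this is just an illustrative sketch, not the paper's Matlab code:

```python
import numpy as np

def temporal_difference(frames):
    """Subtract each frame's predecessor, so that each encoded image is the
    difference between two successive video frames (the first frame is dropped)."""
    return np.diff(frames, axis=0)
```

For a sequence of P frames this yields P - 1 difference images.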
Then, to remove the pair-wise correlation between pixels of the same frame, Zero-phase Component Analysis (ZCA) [4] was applied to each of the patches from the selected sequence, i.e., x^t := W x^t, where W = ⟨x x^T⟩^{-1/2}, i.e., W is equal to the inverse square root of the covariance matrix of x. The filters in W have centre-surround receptive fields resembling those of retinal ganglion neurons [4].
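ZCA whitening can be sketched via an eigendecomposition of the covariance matrix; the small regularisation constant `eps` is an assumption added for numerical safety, not something stated in the text:

```python
import numpy as np

def zca_whiten(X, eps=1e-8):
    """ZCA whitening: multiply by the inverse square root of the covariance
    matrix, W = C^{-1/2}, computed from the symmetric eigendecomposition.

    X: shape (n_samples, n_dims), assumed zero-mean.
    Returns (X_white, W).
    """
    C = X.T @ X / X.shape[0]
    d, E = np.linalg.eigh(C)
    W = E @ np.diag(1.0 / np.sqrt(d + eps)) @ E.T
    return X @ W.T, W
```

Unlike PCA whitening, the ZCA transform W is symmetric, which is what gives its filters their localised, centre-surround appearance.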
High-temperature expansions for learning models of nonnegative data

Oliver B. Downs
Dept. of Mathematics
Princeton University
Princeton, NJ 08544
obdowns@princeton.edu

Abstract

Recent work has exploited boundedness of data in the unsupervised learning of new types of generative model. For nonnegative data it was recently shown that the maximum-entropy generative model is a Nonnegative Boltzmann Distribution, not a Gaussian distribution, when the model is constrained to match the first and second order statistics of the data. Learning for practical sized problems is made difficult by the need to compute expectations under the model distribution. The computational cost of Markov chain Monte Carlo methods and the low fidelity of naive mean field techniques have led to increasing interest in advanced mean field theories and variational methods. Here I present a second-order mean-field approximation for the Nonnegative Boltzmann Machine model, obtained using a "high-temperature" expansion. The theory is tested on learning a bimodal 2-dimensional model, a high-dimensional translationally invariant distribution, and a generative model for handwritten digits.

1 Introduction

Unsupervised learning of generative and feature-extracting models for continuous nonnegative data has recently been proposed [1], [2]. In [1], it was pointed out that the maximum entropy distribution (matching 1st- and 2nd-order statistics) for continuous nonnegative data is not Gaussian, and indeed that a Gaussian is not in general a good approximation to that distribution. The true maximum entropy distribution is known as the Nonnegative Boltzmann Distribution (NNBD) (previously the rectified Gaussian distribution [3]), which has the functional form

p(x) = (1/Z) exp[-E(x)]  if x_i ≥ 0 ∀i;  p(x) = 0 if any x_i < 0,    (1)

where the energy function E(x) and normalisation constant Z are:

E(x) = β x^T A x - b^T x,    (2)

Z = ∫ dx exp[-E(x)].    (3)
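The unnormalised NNBD density of Equation 1 is straightforward to evaluate; the function name and interface below are assumptions of this sketch (the normalising constant Z is generally intractable and is omitted):

```python
import numpy as np

def nnbd_unnormalised(x, A, b, beta=1.0):
    """Unnormalised NNBD of Equation 1: exp(-E(x)) with
    E(x) = beta * x^T A x - b^T x on the nonnegative orthant,
    and zero whenever any component of x is negative."""
    x = np.asarray(x, dtype=float)
    if np.any(x < 0):
        return 0.0
    E = beta * x @ A @ x - b @ x
    return np.exp(-E)
```

Because the density is cut off at the orthant boundary, its modes can sit on the boundary itself, which is what allows the multimodality discussed next.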
In contrast to the Gaussian distribution, the NNBD can be multimodal, in which case its modes are confined to the boundaries of the nonnegative orthant. The Nonnegative Boltzmann Machine (NNBM) has been proposed as a method for learning the maximum likelihood parameters for this maximum entropy model from data. Without hidden units, it has the stochastic-EM learning rule:

ΔA_ij ∝ ⟨x_i x_j⟩_f - ⟨x_i x_j⟩_c,    (4)

Δb_i ∝ ⟨x_i⟩_c - ⟨x_i⟩_f,    (5)

where the subscript "c" denotes a "clamped" average over the data, and the subscript "f" denotes a "free" average over the NNBD:

⟨f(x)⟩_c = (1/M) Σ_{μ=1}^{M} f(x^{(μ)}),    (6)

⟨f(x)⟩_f = ∫_{x≥0} dx p(x) f(x).    (7)

This learning rule has hitherto been extremely computationally costly to implement, since naive variational/mean-field approximations for ⟨x x^T⟩_f are found empirically to be poor, leading to the need to use Markov chain Monte Carlo methods. This has made the NNBM impractical for application to high-dimensional data. While the NNBD is generally skewed and hence has moments of order greater than 2, the maximum-likelihood learning rule suggests that the distribution can be described solely in terms of the 1st- and 2nd-order statistics of the data. With that in mind, I have pursued advanced approximate models for the NNBM. In the following section I derive a second-order approximation for ⟨x_i x_j⟩_f analogous to the TAP-Onsager correction for the mean-field Ising Model, using a high temperature expansion [4]. This produces an analytic approximation for the parameters A_ij, b_i in terms of the mean and cross-correlation matrix of the training data.

2 Learning approximate NNBM parameters using high-temperature expansion

Here I use Taylor expansion of a "free energy" directly related to the partition function of the distribution, Z, in the β = 0 limit, to derive a second-order approximation for the NNBM model parameters. In this free energy we embody the constraint that Eq. 5 is satisfied:

G(β, m) = -ln ∫_{x≥0} dx exp[-β x^T A x - Σ_i λ_i(β)(x_i - m_i)],    (8)

where β is an "inverse temperature".
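The clamped averages of Equation 6 and one step of the learning rule of Equations 4-5 can be sketched as follows; in practice the free averages would come from MCMC sampling of the model, and the step size `eta` and function names are assumptions of this illustration:

```python
import numpy as np

def clamped_statistics(X):
    """Clamped averages of Equation 6: empirical first and second moments
    of the data X, shape (M, d)."""
    m = X.mean(axis=0)          # <x_i>_c
    S = X.T @ X / X.shape[0]    # <x_i x_j>_c
    return m, S

def nnbm_update(A, b, free_S, free_m, clamped_S, clamped_m, eta=0.1):
    """One stochastic-EM step (Eqs. 4-5). free_S and free_m stand in for the
    'free' averages over the model, normally estimated by sampling."""
    A_new = A + eta * (free_S - clamped_S)
    b_new = b + eta * (clamped_m - free_m)
    return A_new, b_new
```

At a fixed point the free and clamped moments agree and both updates vanish, which is exactly the moment-matching condition the high-temperature expansion exploits below.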
There is a direct relationship between the "free energy" G and the normalisation Z of the NNBD, Eq. 3:

-ln Z = G(β, m) + constant(b, m).    (9)

Thus,

β ⟨x_i x_j⟩_f = ∂G(β, m)/∂A_ij.    (10)

The Lagrange multipliers λ_i embody the constraint that ⟨x_i⟩_f match the mean field of the patterns, m_i = ⟨x⟩_c. This effectively forces Δb = 0 in Eq. 5, with b_i = -λ_i(β). Since the Lagrange constraint is enforced for all temperatures, we can solve for the specific case β = 0:

m_i = ⟨x_i⟩_f|_{β=0} = [∏_k ∫_{x_k=0}^∞ dx_k x_i exp(-Σ_l λ_l(0)(x_l - m_l))] / [∏_k ∫_{x_k=0}^∞ dx_k exp(-Σ_l λ_l(0)(x_l - m_l))] = 1/λ_i(0).    (11)

Note that this embodies the unboundedness of x_k in the nonnegative orthant, as compared to the equivalent term of Georges & Yedidia for the Ising model, m_i = tanh(λ_i(0)). We consider Taylor expansion of Eq. 8 about the "high temperature" limit, β = 0:

G(β, m) = G(0, m) + β ∂G/∂β|_{β=0} + (β²/2) ∂²G/∂β²|_{β=0} + ...    (12)

Since the integrand becomes factorable in x_i in this limit, the infinite temperature values of G and its derivatives are analytically calculable:

G(β, m)|_{β=0} = -Σ_k ln ∫_{x_k=0}^∞ exp(-Σ_i λ_i(0)(x_i - m_i)) dx_k    (13)

and, using Eq. 11,

G(β, m)|_{β=0} = -Σ_k ln[(1/λ_k(0)) exp(λ_k(0) m_k)] = -N - Σ_k ln m_k.    (14)

The first derivative is then as follows:

∂G/∂β|_{β=0} = ⟨ Σ_{i,j} A_ij x_i x_j + Σ_i (x_i - m_i) ∂λ_i/∂β ⟩|_{β=0}    (15)

= Σ_{i,j} (1 + δ_ij) A_ij m_i m_j,    (16)

where the second term vanishes because ⟨x_i⟩_f = m_i. This term is exactly the result of applying naive mean-field theory to this system, as in [1]. Likewise we obtain the second derivative,

∂²G/∂β²|_{β=0} = -⟨(Σ_{i,j} A_ij x_i x_j)²⟩ + ⟨Σ_{i,j} A_ij x_i x_j⟩² + ⟨(Σ_{i,j} A_ij x_i x_j)(Σ_k ∂λ_k/∂β (x_k - m_k))⟩, all evaluated at β = 0,    (17)

= -Σ_{i,j} Σ_{k,l} Q_ijkl A_ij A_kl m_i m_j m_k m_l,    (18)

where Q_ijkl contains the integer coefficients arising from integration by parts in the first and second terms, and (1 + δ_ij) in the second term of Eq. 17. This expansion is to the same order as the TAP-Onsager correction term for the Ising model, which can be derived by an analogous approach to the equivalent free energy [4].
Substituting these results into Eq. 10, we obtain

β ⟨x_i x_j⟩_f ≈ β (1 + δ_ij) m_i m_j - (β²/2) Σ_{k,l} Q_ijkl A_kl m_i m_j m_k m_l.    (19)

We arrive at an analytic approximation for A_ij as a function of the 1st and 2nd moments of the data by using Eq. 19 in the learning rule, Eq. 4, setting ΔA_ij = 0, and solving the resulting linear equation for A. We can obtain an equivalent expansion for λ_i(β) and hence b_i. To first order in β (equivalent to the order of β in the approximation for A), we have

λ_i(β) ≈ λ_i(0) + β ∂λ_i/∂β|_{β=0} + ...    (20)

Using Eqs. 11 & 15,

∂λ_i/∂β|_{β=0} = -Σ_j (1 + δ_ij) A_ij m_j.    (21)

Hence

b_i = -λ_i(β) ≈ -1/m_i + β Σ_j (1 + δ_ij) A_ij m_j.

The approach presented here makes an explicit approximation of the statistics ⟨x x^T⟩_f required for the NNBM learning rule, which can be substituted in the fixed-point equation Eq. 4, and yields a linear equation in A to be solved. This is in contrast to the linear response theory approach of Kappen & Rodriguez [6] to the Boltzmann Machine, which exploits the relationship

∂² ln Z / ∂b_i ∂b_j = ⟨x_i x_j⟩ - ⟨x_i⟩⟨x_j⟩ = χ_ij    (25)

between the free energy and the covariance matrix χ of the model. In the learning problem, this produces a quadratic equation in A, the solution of which is non-trivial. Computationally efficient solutions of the linear response theory are then obtained by secondary approximation of the 2nd-order term, compromising the fidelity of the model.

3 Learning a 'Competitive' Nonnegative Boltzmann Distribution

A visualisable test problem is that of learning a bimodal NNBD in 2 dimensions. Monte Carlo slice sampling (see [1] & [5]) was used to generate 200 samples from a NNBD, as shown in Fig. 1(a). The high temperature expansion was then used to learn approximate parameters for the NNBM model of this data. A surface plot of the resulting model distribution is shown in Fig. 1(b); it is clearly a valid candidate generative distribution for the data.
This is in strong contrast with a naive mean field (β = 0) model, which by construction would be unable to produce a multiple-peaked approximation, as previously described [1].

[Figure 1: (a) Training data, generated from a 2-dimensional 'competitive' NNBD. (b) Learned model distribution, under the high temperature expansion.]

4 Orientation Tuning in Visual Cortex - a translationally invariant model

The neural network model of Ben-Yishai et al. [7] for orientation tuning in visual cortex has the property that its dynamics exhibit a continuum of stable states which are translationally invariant across the network. The energy function of the network model is a translationally invariant function of the angles of maximal response, θ_i, of the N neurons, and can be mapped directly onto the energy of the NNBM, as described in [1]:

A_ij = γ (δ_ij + 1/N - (ε/N) cos(2π|i-j|/N)),  b_i = γ.    (26)

We can generate training data for the NNBM by sampling from the neural network model with known parameters. It is easily shown that A_ij has 2 equal negative eigenvalues, the remainder being positive and equal in value. The corresponding pair of eigenvectors of A are sinusoids of period equal to the width of the stable activation bumps of the network, with a small relative phase. Here, the NNBM parameters have been solved using the high-temperature expansion for training data generated by Monte Carlo slice-sampling [5] from a 10-neuron model with parameters ε = 4, γ = 100 in Eq. 26. Fig. 2 illustrates modal activity patterns of the learned NNBM model distribution, found using gradient ascent of the log-likelihood function from a random initialisation of the variables:

Δx ∝ [-Ax + b]^+,    (27)

where the superscript + denotes rectification.
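The eigenstructure claimed for Eq. 26 can be checked numerically. The coefficient layout below follows the reconstruction of Eq. 26 given here (the garbled source makes the exact constants an assumption), and the helper name `ring_weights` is invented for this sketch:

```python
import numpy as np

def ring_weights(N, eps, gamma):
    """Couplings of the translationally invariant model (cf. Eq. 26 as
    reconstructed here): A_ij = gamma*(delta_ij + 1/N - (eps/N)*cos(2*pi*(i-j)/N))."""
    i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return gamma * (np.eye(N) + 1.0 / N - (eps / N) * np.cos(2 * np.pi * (i - j) / N))

A = ring_weights(N=10, eps=4.0, gamma=100.0)
eigvals = np.sort(np.linalg.eigvalsh(A))
# For eps > 2 the two lowest spatial-frequency (cosine/sine) modes go
# unstable: the spectrum contains exactly two equal negative eigenvalues,
# matching the property stated in the text.
```

Because A is circulant, its eigenvectors are sinusoids, so the two negative-eigenvalue eigenvectors are the sinusoidal pair described above.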
These modes of the approximate NNBM model are highly similar to the training patterns; also, the eigenvectors and eigenvalues of A exhibit similar properties between their learned and training forms. This gives evidence that the approximation is successful in learning a high-dimensional translationally invariant NNBM model.

[Figure 2: Upper: 2 modal states of the NNBM model density, located by gradient-ascent of the log-likelihood from different random initialisations. Lower: The two negative-eigenvalue eigenvectors of A - a) in the learned model, and b) as used to generate the training data.]

5 Generative Model for Handwritten Digits

In figure 3, I show the results of applying the high-temperature NNBM to learning a generative model for the feature coactivations of the Nonnegative Matrix Factorization [2] decomposition of a database of the handwritten digits 0-9. This problem contains none of the space-filling symmetry of the visual cortex model, and hence requires a more strongly multimodal generative model distribution to generate distinct digits. Here performance is poor, although superior to uniformly-sampled feature activations.

6 Discussion

In this work, an approximate technique has been derived for directly determining the NNBM parameters A, b in terms of the 1st- and 2nd-order statistics of the data, using the method of high-temperature expansion. To second order this produces corrections to the naive mean field approximation of the system analogous to the TAP term for the Ising Model/Boltzmann Machine. The efficacy of this approximation has been demonstrated in the pathological case of learning the 'competitive' NNBD, learning the translationally invariant model in 10 dimensions, and a generative model for handwritten digits.
These results demonstrate an improvement in approximation to models in this class over a naive mean field (β = 0) approach, without reversion to secondary assumptions such as those made in the linear response theory for the Boltzmann Machine. There is strong current interest in the relationship between TAP-like mean field theory, variational approximation, and belief propagation in graphical models with loops. All of these can be interpreted in terms of minimising an effective free energy of the system [8]. The distinction in the work presented here lies in choosing optimal approximate statistics to learn the true model, under the assumption that satisfaction of the fixed-point equations of the true model optimises the free energy. This compares favourably with variational approaches which directly optimise an approximate model distribution.

[Figure 3: Digit images generated with feature activations sampled from a) a uniform distribution, and b) a high-temperature NNBM model for the digits.]

Methods of this type fail when they add spurious fixed points to the learning dynamics. Future work will focus on understanding the origins of such fixed points, and the regimes in which they lead to a poor approximation of the model parameters.

7 Acknowledgements

This work was inspired by the NIPS 1999 Workshop on Advanced Mean Field Methods. The author is especially grateful to David MacKay and Gayle Wittenberg for comments on early versions of this manuscript. I also acknowledge guidance from John Hopfield and David Heckerman, detailed discussion with Bert Kappen, Daniel Lee and David Barber, and encouragement from Kim Midwood.

References

[1] Downs, OB, MacKay, DJC, & Lee, DD (2000). The Nonnegative Boltzmann Machine. Advances in Neural Information Processing Systems 12, 428-434.
[2] Lee, DD, & Seung, HS (1999). Learning the parts of objects by non-negative matrix factorization. Nature 401, 788-791.
[3] Socci, ND, Lee, DD, & Seung, HS (1998).
The rectified Gaussian distribution. Advances in Neural Information Processing Systems 10, 350-356.
[4] Georges, A, & Yedidia, JS (1991). How to expand around mean-field theory using high-temperature expansions. Journal of Physics A 24, 2173-2192.
[5] Neal, RM (1997). Markov chain Monte Carlo methods based on 'slicing' the density function. Technical Report 9722, Dept. of Statistics, University of Toronto.
[6] Kappen, HJ, & Rodriguez, FB (1998). Efficient learning in Boltzmann Machines using linear response theory. Neural Computation 10, 1137-1156.
[7] Ben-Yishai, R, Bar-Or, RL, & Sompolinsky, H (1995). Theory of orientation tuning in visual cortex. Proc. Nat. Acad. Sci. USA 92(9), 3844-3848.
[8] Yedidia, JS, Freeman, WT, & Weiss, Y (2000). Generalized Belief Propagation. Mitsubishi Electric Research Laboratory Technical Report TR-2000-26.
A Support Vector Method for Clustering

Asa Ben-Hur
Faculty of IE and Management
Technion, Haifa 32000, Israel

Hava T. Siegelmann
Lab for Inf. & Decision Systems
MIT, Cambridge, MA 02139, USA

David Horn
School of Physics and Astronomy
Tel Aviv University, Tel Aviv 69978, Israel

Vladimir Vapnik
AT&T Labs Research
100 Schultz Dr., Red Bank, NJ 07701, USA

Abstract

We present a novel method for clustering using the support vector machine approach. Data points are mapped to a high dimensional feature space, where support vectors are used to define a sphere enclosing them. The boundary of the sphere forms in data space a set of closed contours containing the data. Data points enclosed by each contour are defined as a cluster. As the width parameter of the Gaussian kernel is decreased, these contours fit the data more tightly and splitting of contours occurs. The algorithm works by separating clusters according to valleys in the underlying probability distribution, and thus clusters can take on arbitrary geometrical shapes. As in other SV algorithms, outliers can be dealt with by introducing a soft margin constant leading to smoother cluster boundaries. The structure of the data is explored by varying the two parameters. We investigate the dependence of our method on these parameters and apply it to several data sets.

1 Introduction

Clustering is an ill-defined problem for which there exist numerous methods [1, 2]. These can be based on parametric models or can be non-parametric. Parametric algorithms are usually limited in their expressive power, i.e. a certain cluster structure is assumed. In this paper we propose a non-parametric clustering algorithm based on the support vector approach [3], which is usually employed for supervised learning. In the papers [4, 5] an SV algorithm for characterizing the support of a high dimensional distribution was proposed. As a by-product of the algorithm one can compute a set of contours which enclose the data points.
These contours were interpreted by us as cluster boundaries [6]. In [6] the number of clusters was predefined, and the value of the kernel parameter was not determined as part of the algorithm. In this paper we address these issues. The first stage of our Support Vector Clustering (SVC) algorithm consists of computing the sphere with minimal radius which encloses the data points when mapped to a high dimensional feature space. This sphere corresponds to a set of contours which enclose the points in input space. As the width parameter of the Gaussian kernel function that represents the map to feature space is decreased, this contour breaks into an increasing number of disconnected pieces. The points enclosed by each separate piece are interpreted as belonging to the same cluster. Since the contours characterize the support of the data, our algorithm identifies valleys in its probability distribution. When we deal with overlapping clusters we have to employ a soft margin constant, allowing for "outliers". In this parameter range our algorithm is similar to the scale-space clustering method [7]. The latter is based on a Parzen window estimate of the probability density, using a Gaussian kernel and identifying cluster centers with peaks of the estimator.

2 Describing Cluster Boundaries with Support Vectors

In this section we describe an algorithm for representing the support of a probability distribution by a finite data set, using the formalism of support vectors [5, 4]. It forms the basis of our clustering algorithm. Let {x_i} ⊆ X be a data set of N points, with X ⊆ R^d, the input space. Using a nonlinear transformation Φ from X to some high dimensional feature space, we look for the smallest enclosing sphere of radius R, described by the constraints ||Φ(x_i) - a||² ≤ R² ∀i, where ||·|| is the Euclidean norm and a is the center of the sphere. Soft constraints are incorporated by adding slack variables ξ_j:

||Φ(x_j) - a||² ≤ R² + ξ_j,    (1)

with ξ_j ≥ 0.
To solve this problem we introduce the Lagrangian

L = R² - Σ_j (R² + ξ_j - ||Φ(x_j) - a||²) β_j - Σ_j ξ_j μ_j + C Σ_j ξ_j,    (2)

where β_j ≥ 0 and μ_j ≥ 0 are Lagrange multipliers, C is a constant, and C Σ_j ξ_j is a penalty term. Setting to zero the derivative of L with respect to R, a and ξ_j, respectively, leads to

Σ_j β_j = 1,    (3)

a = Σ_j β_j Φ(x_j),    (4)

β_j = C - μ_j.    (5)

The KKT complementarity conditions [8] result in

ξ_j μ_j = 0,    (6)

(R² + ξ_j - ||Φ(x_j) - a||²) β_j = 0.    (7)

A point x_i with ξ_i > 0 is outside the feature-space sphere (cf. equation 1). Equation (6) states that such points x_i have μ_i = 0, so from equation (5) β_i = C. A point with ξ_i = 0 is inside or on the surface of the feature space sphere. If its β_i ≠ 0, then equation (7) implies that the point x_i is on the surface of the feature space sphere. In this paper any point with 0 < β_i < C will be referred to as a support vector or SV; points with β_i = C will be called bounded support vectors or bounded SVs. This is to emphasize the role of the support vectors as delineating the boundary. Note that when C ≥ 1 no bounded SVs exist, because of the constraint Σ_i β_i = 1. Using these relations we may eliminate the variables R, a and μ_j, turning the Lagrangian into the Wolfe dual, which is a function of the variables β_j:

W = Σ_j Φ(x_j)² β_j - Σ_{i,j} β_i β_j Φ(x_i)·Φ(x_j).    (8)

Since the variables μ_j do not appear in the Lagrangian, they may be replaced with the constraints

0 ≤ β_j ≤ C.    (9)

We follow the SV method and represent the dot products Φ(x_i)·Φ(x_j) by an appropriate Mercer kernel K(x_i, x_j). Throughout this paper we use the Gaussian kernel

K(x_i, x_j) = e^{-q ||x_i - x_j||²},    (10)

with width parameter q. As noted in [5], polynomial kernels do not yield tight contour representations of a cluster. The Lagrangian W is now written as

W = Σ_j K(x_j, x_j) β_j - Σ_{i,j} β_i β_j K(x_i, x_j).    (11)

At each point x we define its distance, when mapped to feature space, from the center of the sphere:

R²(x) = ||Φ(x) - a||².    (12)

In view of (4) and the definition of the kernel we have:

R²(x) = K(x, x) - 2 Σ_j β_j K(x_j, x) + Σ_{i,j} β_i β_j K(x_i, x_j).    (13)
The radius of the sphere is

R = {R(x_i) | x_i is a support vector}.    (14)

In practice, one takes the average over all support vectors. The contour that encloses the cluster in data space is the set

{x | R(x) = R}.    (15)

A data point x_i is a bounded SV if R(x_i) > R. Note that since we use a Gaussian kernel, for which K(x, x) = 1, our feature space is a unit sphere; thus its intersection with a sphere of radius R < 1 can also be defined as an intersection by a hyperplane, as in conventional SVM. The shape of the enclosing contours in input space is governed by two parameters, q and C. Figure 1 demonstrates that, as q is increased, the enclosing contours form tighter fits to the data. Figure 2 describes a situation that necessitated introduction of outliers, or bounded SVs, by allowing for C < 1. As C is decreased, not only does the number of bounded SVs increase, but their influence on the shape of the cluster contour decreases (see also [6]). The number of support vectors depends on both q and C. For fixed q, as C is decreased, the number of SVs decreases, since some of them turn into bounded SVs and the resulting shapes of the contours become smoother. We denote by n_sv and n_bsv the number of support vectors and bounded support vectors, respectively, and note the following result:

Proposition 2.1 [4]  n_bsv + n_sv ≥ 1/C,  n_bsv < 1/C.    (16)

This is an immediate consequence of the constraints (3) and (9). In fact, we have found empirically that

n_bsv(q, C) = max(0, 1/C - n_0),    (17)

where n_0 > 0 may be a function of q and N. This was observed for artificial and real data sets. Moreover, we have also observed that

n_sv = a/C + b,    (18)

where a and b are functions of q and N. The linear behavior of n_bsv continues until n_bsv + n_sv = N.

3 Support Vector Clustering (SVC)

In this section we go through a set of examples demonstrating the use of SVC. We begin with a data set in which the separation into clusters can be achieved without outliers, i.e. C = 1.
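Given multipliers β (from any QP solver for the dual), the feature-space distance of Equation 13 and the radius of Equation 14 are simple to evaluate; the function names here are assumptions of this sketch:

```python
import numpy as np

def gaussian_kernel(X, Y, q):
    """Gaussian kernel of Equation 10: K(x, y) = exp(-q * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-q * d2)

def feature_space_distance2(x, X, beta, q):
    """R^2(x) of Equation 13 for a Gaussian kernel, where K(x, x) = 1.
    beta holds the Lagrange multipliers over all N data points."""
    Kx = gaussian_kernel(x[None, :], X, q)[0]
    KXX = gaussian_kernel(X, X, q)
    return 1.0 - 2.0 * beta @ Kx + beta @ KXX @ beta

def sphere_radius(X, beta, C, q, tol=1e-8):
    """R of Equation 14, averaged over the support vectors (0 < beta_i < C)."""
    sv = (beta > tol) & (beta < C - tol)
    r2 = [feature_space_distance2(X[i], X, beta, q) for i in np.flatnonzero(sv)]
    return float(np.mean(np.sqrt(np.maximum(r2, 0.0))))
```

A point x then lies inside the cluster contour of Equation 15 exactly when feature_space_distance2(x, ...) ≤ R².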
As seen in Figure 1, as q is increased the shape of the boundary curves in data space varies. At several q values the enclosing contour splits, forming an increasing number of connected components. We regard each component as representing a single cluster. While in this example the clustering looks hierarchical, this is not strictly true in general.

[Figure 1: Data set contains 183 points. A Gaussian kernel was used with C = 1.0. SVs are surrounded by small circles. (a): q = 1 (b): q = 20 (c): q = 24 (d): q = 48.]

In order to label data points into clusters we need to identify the connected components. We define an adjacency matrix A_ij between pairs of points x_i and x_j:

A_ij = 1 if, for all y on the line segment connecting x_i and x_j, R(y) ≤ R; 0 otherwise.    (19)

Clusters are then defined as the connected components of the graph induced by A. This labeling procedure is justified by the observation that nearest neighbors in data space can be connected by a line segment that is contained in the high dimensional sphere. Checking the line segment is implemented by sampling a number of points on the segment (a value of 10 was used in the numerical experiments). Note that bounded SVs are not classified by this procedure; they can be left unlabeled, or classified, e.g., according to the cluster to which they are closest. We adopt the latter approach. The cluster description algorithm provides an estimate of the support of the underlying probability distribution [4]. Thus we distinguish between clusters according to gaps in the support of the underlying probability distribution. As q is increased the support is characterized by more detailed features, enabling the detection of smaller gaps.
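The labeling step (Equation 19 plus connected components) can be sketched as follows. The predicate `in_sphere` is an assumption standing in for the trained test R(y) ≤ R, and the function name is invented for this illustration:

```python
import numpy as np

def cluster_labels(X, in_sphere, n_interp=10):
    """Label points by the adjacency rule of Eq. 19: i and j are adjacent if
    every one of n_interp sampled points on the segment between them satisfies
    in_sphere; clusters are connected components of the resulting graph."""
    N = len(X)
    adj = np.zeros((N, N), dtype=bool)
    for i in range(N):
        for j in range(i, N):
            ts = np.linspace(0.0, 1.0, n_interp)
            adj[i, j] = adj[j, i] = all(
                in_sphere(X[i] + t * (X[j] - X[i])) for t in ts)
    # connected components via depth-first search over the adjacency graph
    labels = -np.ones(N, dtype=int)
    current = 0
    for seed in range(N):
        if labels[seed] >= 0:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            u = stack.pop()
            for v in np.flatnonzero(adj[u] & (labels < 0)):
                labels[v] = current
                stack.append(v)
        current += 1
    return labels
```

This runs in O(N² d) time, the complexity quoted for the labeling part in Section 4.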
Too high a value of q may lead to overfitting (see Figure 2(a)), which can be handled by allowing for bounded SVs (Figure 2(b)): letting some of the data points be bounded SVs creates smoother contours, and facilitates contour splitting at low values of q.

3.1 Overlapping clusters

In many data sets clusters are strongly overlapping, and clear separating valleys as in Figures 1 and 2 are not present. Our algorithm is useful in such cases as well, but a slightly different interpretation is required. First we note that equation (15) for the enclosing contour can be expressed as {x | Σ_i β_i K(x_i, x) = ρ}, where ρ is determined by the value of this sum on the support vectors. The set of points enclosed by the contour is {x | Σ_i β_i K(x_i, x) > ρ}.

[Figure 2: Clustering with and without outliers. The inner cluster is composed of 50 points generated by a Gaussian distribution. The two concentric rings contain 150/300 points, generated by a uniform angular distribution and radial Gaussian distribution. (a) The rings cannot be distinguished when C = 1. Shown here is q = 3.5, the lowest q value that leads to separation of the inner cluster. (b) Outliers allow easy clustering. The parameters are 1/(NC) = 0.3 and q = 1.0. SVs are surrounded by small ellipses.]

In the extreme case when almost all data points are bounded SVs, the sum in this expression is approximately

p(x) = (1/N) Σ_i K(x_i, x).    (20)

This is recognized as a Parzen window estimate of the density function (up to a normalization factor, if the kernel is not appropriately normalized). The contour will then enclose a small number of points which correspond to the maximum of the Parzen-estimated density. Thus in the high bounded-SVs regime we find a dense core of the probability distribution. In this regime our algorithm is closely related to an algorithm proposed by Roberts [7]. He defines cluster centers as maxima of the Parzen window estimate p(x).
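The Parzen-window limit of Equation 20 is a one-line estimate; this sketch uses the same Gaussian kernel as the rest of the paper:

```python
import numpy as np

def parzen_estimate(x, X, q):
    """Equation 20: in the extreme bounded-SV regime the SVC contour reduces
    to a level set of this (unnormalised) Gaussian Parzen window estimate,
    p(x) = (1/N) sum_i exp(-q * ||x_i - x||^2)."""
    return np.mean(np.exp(-q * ((X - x) ** 2).sum(axis=1)))
```

The SVC contour in this regime is then {x | parzen_estimate(x, X, q) > ρ} for the threshold ρ set by the support vectors.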
He shows that in his approach, which goes by the name of scale-space clustering, as q is increased the number of maxima increases. The Gaussian kernel plays an important role in his analysis: it is the only kernel for which the number of maxima (hence the number of clusters) is a monotonically non-decreasing function of q (see [7] and references therein). The advantage of SVC over Roberts' method is that we find a region, rather than just a peak, and that instead of solving a problem with many local maxima, we identify the core regions by an SV method with a global optimal solution. We have found examples where a local maximum is hard to identify by Roberts' method.

3.2 The iris data

We ran SVC on the iris data set [9], which is a standard benchmark in the pattern recognition literature. It can be obtained from the UCI repository [10]. The data set contains 150 instances, each containing four measurements of an iris flower. There are three types of flowers, represented by 50 instances each. We clustered the data in a two dimensional subspace formed by the first two principal components. One of the clusters is linearly separable from the other two at q = 0.5 with no bounded SVs. The remaining two clusters have significant overlap, and were separated at q = 4.2, 1/(NC) = 0.55, with 4 misclassifications. Clustering results for an increasing number of principal components are reported in Table 1.

Table 1: Performance of SVC on the iris data for a varying number of principal components.

Principal components | q   | 1/(NC) | SVs | bounded SVs | misclassified
1-2                  | 4.2 | 0.55   | 20  | 72          | 4
1-3                  | 7.0 | 0.70   | 23  | 94          | 4
1-4                  | 9.0 | 0.75   | 34  | 96          | 14

Note that as the number of principal components is increased from 3 to 4 there is a degradation in the performance of the algorithm - the number of misclassifications increases from 4 to 14. Also note the increase in the number of support vectors and bounded support vectors required to obtain contour splitting.
As the dimensionality of the data increases, a larger number of support vectors is required to describe the contours. Thus, if the data is sparse, it is better to use SVC on a low dimensional representation, obtained, e.g., by principal component analysis [2]. For comparison we quote results obtained by other non-parametric clustering algorithms: the information theoretic approach of [11] leads to 5 misclassifications and the SPC algorithm of [12] has 15 misclassifications.

4 Varying q and C

SVC was described for fixed values of q and C, and a method for exploring parameter space is required. We can work with SVC in an agglomerative fashion, starting from a large value of q, where each point is in a different cluster, and decreasing q until there is a single cluster. Alternatively we may use the divisive approach, by starting from a small value of q and increasing it. The latter seems more efficient, since meaningful clustering solutions (see below for a definition of this concept) usually have a small number of clusters. The following is a qualitative schedule for varying the parameters. One may start with a small value of q at which only one cluster occurs: q = 1/max_{i,j} ||x_i - x_j||². q is then increased to look for values at which a cluster contour splits. When single point clusters start to break off, or a large number of support vectors is obtained (overfitting, as in Figure 2(a)), 1/C is increased. An important issue in the divisive approach is the decision when to stop dividing the clusters. An algorithm for this is described in [13]. After clustering the data they partition the data into two sets with some sizable overlap, perform clustering on these smaller data sets, and compute the average overlap between the two clustering solutions for a number of partitions. Such validation can be performed here as well.
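The starting point of the divisive schedule above can be computed directly; this is a minimal sketch with an invented function name:

```python
import numpy as np

def initial_q(X):
    """Starting kernel width for the divisive schedule:
    q = 1 / max_{i,j} ||x_i - x_j||^2, at which the whole
    data set forms a single cluster."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return 1.0 / d2.max()
```

Note the O(N²) pairwise-distance cost; for large N the maximum can be bounded by the diameter of the data's bounding box instead.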
However, we believe that in our SV setting it is natural to use the number of support vectors as an indication of a meaningful solution, since their (small) number is an indication of good generalization. Therefore we should stop the algorithm when the fraction of SVs exceeds some threshold. If the clustering solution is stable with respect to changes in the parameters, this is also an indication of meaningful clustering. The quadratic programming problem of equation (2) can be solved by the SMO algorithm [14], which was recently proposed as an efficient tool for solving such problems in SVM training. Some minor modifications are required to adapt it to the problem that we solve here [4]. Benchmarks reported in [14] show that this algorithm converges in most cases in O(N^2) kernel evaluations. The complexity of the labeling part of the algorithm is O(N^2 d), so that the overall complexity is O(N^2 d). We also note that the memory requirements of the SMO algorithm are low: it can be implemented using O(1) memory at the cost of a decrease in efficiency, which makes our algorithm useful even for very large data sets.

5 Summary

The SVC algorithm finds clustering solutions, together with curves representing their boundaries, via a description of the support or high-density regions of the data. As such, it separates clusters according to gaps or low-density regions in the probability distribution of the data, and makes no assumptions on cluster shapes in input space. SVC has several other attractive features: the quadratic programming problem of the cluster-description algorithm is convex and has a globally optimal solution, and, like other SV algorithms, SVC can deal with noise or outliers through a margin parameter, making it robust with respect to noise in the data.

References

[1] A.K. Jain and R.C. Dubes. Algorithms for Clustering Data. Prentice Hall, Englewood Cliffs, NJ, 1988.

[2] K. Fukunaga. Introduction to Statistical Pattern Recognition.
Academic Press, San Diego, CA, 1990.

[3] V. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, 1995.

[4] B. Schölkopf, R.C. Williamson, A.J. Smola, and J. Shawe-Taylor. SV estimation of a distribution's support. In Neural Information Processing Systems, 2000.

[5] D.M.J. Tax and R.P.W. Duin. Support vector domain description. Pattern Recognition Letters, 20:1191-1199, 1999.

[6] A. Ben-Hur, D. Horn, H.T. Siegelmann, and V. Vapnik. A support vector clustering method. In International Conference on Pattern Recognition, 2000.

[7] S.J. Roberts. Non-parametric unsupervised cluster analysis. Pattern Recognition, 30(2):261-272, 1997.

[8] R. Fletcher. Practical Methods of Optimization. Wiley-Interscience, Chichester, 1987.

[9] R.A. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7:179-188, 1936.

[10] C.L. Blake and C.J. Merz. UCI repository of machine learning databases, 1998.

[11] N. Tishby and N. Slonim. Data clustering by Markovian relaxation and the information bottleneck method. In Neural Information Processing Systems, 2000.

[12] M. Blatt, S. Wiseman, and E. Domany. Data clustering using a model granular magnet. Neural Computation, 9:1804-1842, 1997.

[13] S. Dubnov, R. El-Yaniv, Y. Gdalyahu, E. Schneidman, N. Tishby, and G. Yona. A new nonparametric pairwise clustering algorithm. Submitted to Machine Learning.

[14] J. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C.J.C. Burges, and A.J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 185-208. MIT Press, Cambridge, MA, 1999.
2000
Redundancy and Dimensionality Reduction in Sparse-Distributed Representations of Natural Objects in Terms of Their Local Features

Penio S. Penev* Laboratory of Computational Neuroscience The Rockefeller University 1230 York Avenue, New York, NY 10021 penev@rockefeller.edu http://venezia.rockefeller.edu/

Abstract

Low-dimensional representations are key to solving problems in high-level vision, such as face compression and recognition. Factorial coding strategies for reducing the redundancy present in natural images on the basis of their second-order statistics have been successful in accounting for both psychophysical and neurophysiological properties of early vision. Class-specific representations are presumably formed later, at the higher-level stages of cortical processing. Here we show that when retinotopic factorial codes are derived for ensembles of natural objects, such as human faces, not only redundancy, but also dimensionality is reduced. We also show that objects are built from parts in a non-Gaussian fashion which allows these local-feature codes to have dimensionalities that are substantially lower than the respective Nyquist sampling rates.

1 Introduction

Sensory systems must take advantage of the statistical structure of their inputs in order to process them efficiently, both to suppress noise and to generate compact representations of seemingly complex data. Redundancy reduction has been proposed as a design principle for such systems (Barlow, 1961); in the context of Information Theory (Shannon, 1948), it leads to factorial codes (Barlow et al., 1989; Linsker, 1988). When only the second-order statistics are available for a given sensory ensemble, the maximum-entropy initial assumption (Jaynes, 1982) leads to a multi-dimensional Gaussian model of the probability density; then, the Karhunen-Loève Transform (KLT) provides a family of equally efficient factorial codes.
In the context of the ensemble of natural images, with a specific model for the noise, these codes have been able to account quantitatively for the contrast sensitivity of human subjects in all signal-to-noise regimes (Atick and Redlich, 1992). Moreover, when the receptive fields are constrained to have retinotopic organization, their circularly symmetric, center-surround opponent structure is recovered (Atick and Redlich, 1992). Although redundancy can be reduced in the ensemble of natural images, because its spectrum obeys a power law (Ruderman and Bialek, 1994), there is no natural cutoff, and the dimensionality of the "retinal" code is the same as that of the input. This situation is not typical. When KLT representations are derived for ensembles of natural objects, such as human faces (Sirovich and Kirby, 1987), the factorial codes in the resulting families are naturally low-dimensional (Penev, 1998; Penev and Sirovich, 2000). Moreover, when a retinotopic organization is imposed, in a procedure called Local Feature Analysis (LFA), the resulting feed-forward receptive fields are a dense set of detectors for the local features from which the objects are built (Penev and Atick, 1996). LFA has also been used to derive local features for the natural-object ensembles of 3D surfaces of human heads (Penev and Atick, 1996) and 2D images of pedestrians (Poggio and Girosi, 1998). Parts-based representations of object classes, including faces, have recently been derived by Non-negative Matrix Factorization (NMF) (Lee and Seung, 1999), "biologically" motivated by the hypothesis that neural systems are incapable of representing negative values.

*Present address: NEC Research Institute, 4 Independence Way, Princeton, NJ 08550
As has already been pointed out (Mel, 1999), this hypothesis is incompatible with a wealth of reliably documented neural phenomena, such as center-surround receptive-field organization, excitation and inhibition, and ON/OFF visual-pathway processing, among others. Here we demonstrate that when parts-based representations of natural objects are derived by redundancy reduction constrained by retinotopy (Penev and Atick, 1996), the resulting sparse-distributed, local-feature representations not only are factorial, but also are of dimensionalities substantially lower than the respective Nyquist sampling rates.

2 Compact Global Factorial Codes of Natural Objects

A properly registered and normalized object will be represented by the receptor readout values φ(x), where {x} is a grid that contains V receptors. An ensemble of T objects will be denoted by {φ_t(x)}_{t∈T}.1 Briefly (see, e.g., Sirovich and Kirby, 1987, for details), when T > V, its Karhunen-Loève Transform (KLT) representation is given by

    φ_t(x) = Σ_{r=1}^{V} a_t^r σ_r ψ_r(x)    (1)

where {σ_r^2} (arranged in non-increasing order) is the eigenspectrum of the spatial and temporal correlation matrices, and {ψ_r(x)} and {a_t^r} are their respective orthonormal eigenvectors. The KLT representation of an arbitrary, possibly out-of-sample, object φ(x) is given by the joint activation

    a_r = σ_r^{-1} Σ_x ψ_r(x) φ(x)    (2)

of the set of global analysis filters {σ_r^{-1} ψ_r(x)}, which are indexed with r, and whose outputs, {a_r}, are decorrelated.2 In the context of the ensemble of natural images, the "whitening" by the factor σ_r^{-1} has been found to account for the contrast sensitivity of human subjects (Atick and Redlich, 1992). When the output dimensionality is set to N < V, the reconstruction (optimal in the amount of preserved signal power) and the respective error utilize the global synthesis filters {σ_r ψ_r(x)}, and are given by

    φ_N^rec = Σ_{r=1}^{N} a_r σ_r ψ_r    and    φ_N^err = φ - φ_N^rec.
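Equations (1)-(3) amount to an SVD of the ensemble matrix. The following numpy sketch illustrates them on synthetic data, under our own normalization conventions (plain sums over the grid, coefficients scaled to unit variance); it is an illustration, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
T, V = 200, 16                      # T objects measured on a grid of V receptors
Phi = rng.standard_normal((T, V)) @ rng.standard_normal((V, V))  # ensemble {phi_t}

# Eq. (1): phi_t(x) = sum_r a_t^r sigma_r psi_r(x), via SVD of the ensemble matrix
A, s, PsiT = np.linalg.svd(Phi, full_matrices=False)
sigma = s / np.sqrt(T)              # spectrum scaled so that <|a_r|^2> = 1
a = A * np.sqrt(T)                  # whitened KLT coefficients a_t^r, Eq. (2)

# Eq. (3): band-limited reconstruction and residual error for N global modes
N = 8
Phi_rec = (a[:, :N] * sigma[:N]) @ PsiT[:N]
Phi_err = Phi - Phi_rec
```

With this scaling the coefficient columns are decorrelated with unit variance, which is the property that makes the Gaussian model (4) spherically symmetric.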
    (3)

1 For the illustrations in this study, T = 11254 frontal-pose facial images were registered and normalized to a grid with V = 64 x 60 = 3840 pixels, as previously described (Penev and Sirovich, 2000). 2 This is certainly true for in-sample objects, since {a_t^r} are orthonormal (1). For out-of-sample objects, there is always the issue whether the size of the training sample, T, is sufficient to ensure proper generalization. The current ensemble has been found to generalize well in the regime for r that is explored here (Penev and Sirovich, 2000).

Figure 1: Successive reconstructions, errors, and local entropy densities. For the indicated global dimensionalities, N = 20, 60, 100, 150, 220, 350, 500, 700, 1000, the reconstructions φ_N^rec (3) of an out-of-sample example are shown in the top row, and the respective residual errors, φ_N^err, in the middle row (the first two errors are amplified 5x and the rest 20x). The respective entropy densities O_N (5) are shown in the bottom row, low-pass filtered with F_{r,N} = σ_r^2 / (σ_r^2 + σ_N^2) (cf. Fig. 3), and scaled adaptively at each N to fill the available dynamic range.

With the standard multidimensional Gaussian model for the probability density P[φ] (Moghaddam and Pentland, 1997; Penev, 1998), the information content of the reconstruction (3), equal to the optimal code length (Shannon, 1948; Barlow, 1961), is

    -log P[φ_N^rec] ∝ Σ_{r=1}^{N} |a_r|^2.    (4)

Because of the normalization by σ_r in (2), all KLT coefficients have unit variance; the model (4) is spherically symmetric, and all filters contribute equally to the entropy of the code. What criterion, then, could guide dimensionality reduction? Following (Atick and Redlich, 1992), when noise is taken into account, N ≈ 400 has been found as an estimate of the global dimensionality for the ensemble of frontal-pose faces (Penev and Sirovich, 2000). This conclusion is reinforced by the perceptual quality of the successive reconstructions and errors, shown in Fig.
1: the face-specific information crosses over from the error to the reconstruction at N ≈ 400, but not much earlier.

3 Representation of Objects in Terms of Local Features

It was shown in Section 2 that when redundancy reduction on the basis of the second-order statistics is applied to ensembles of natural objects, the resulting factorial code is compact (low-dimensional), in contrast with the "retinal" code, which preserves the dimensionality of the input (Atick and Redlich, 1992). Also, the filters in the beginning of the hierarchy (Fig. 2) correspond to intuitively understandable sources of variability. Nevertheless, this compact code has some problems. The learned receptive fields, shown in Fig. 2, are global, in contrast with the local, retinotopic organization of sensory processing, found throughout most of the visual system. Moreover, although the eigenmodes in the regime r ∈ [100, 400] are clearly necessary to preserve the object-specific information (Fig. 1), their respective global filters (Fig. 2) are ripply, non-intuitive, and resemble the hierarchy of sine/cosine modes of the translationally invariant ensemble of natural images. In order to cope with these problems in the context of object ensembles, analogously to the local factorial retinal code (Atick and Redlich, 1992), Local Feature Analysis (LFA) has been developed (Penev and Atick, 1996; Penev, 1998). LFA uses a set of local analysis filters, K(x, y), whose outputs are topographically indexed with the grid variable x (cf. eq. 2)

    O(x) = (1/V) Σ_y K(x, y) φ(y)    (5)

and are as decorrelated as possible. For a given dimensionality, or width of the band, of the compact code, N, maximal decorrelation can be achieved with K(x, y) = K_N^(1)(x, y) from the following topographic family of kernels

    K_N^(n)(x, y) ≡ Σ_{r=1}^{N} ψ_r(x) σ_r^{-n} ψ_r(y).    (6)

For the ensemble of natural scenes, which is translationally and rotationally invariant, the local filters (6) are center-surround receptive fields (Atick and Redlich, 1992). For object ensembles, the process of construction (categorization) breaks a number of symmetries and shifts the higher-order statistics into second order, where they are conveniently exposed to robust estimation and, subsequently, to redundancy reduction. The resulting local receptive fields, some of which are shown in the top row of Fig. 3, turn out to be feature detectors that are optimally tuned to the structures that appear at their respective centers. Although the local factorial code does not exhibit the problems discussed earlier, it has representational properties that are equivalent to those of the global factorial code.

Figure 2: The basis-vector hierarchy of the global factorial code. Shown are the first 14 eigenvectors, and the ones with indices 21, 41; and 60, 94, 155, 250, 500, 1000, 2000, 3840 (bottom row).

Figure 3: Local feature detectors and residual correlations of their outputs. centers: The typical face (ψ_1, Fig. 2) is marked with the central positions of five of the feature detectors. a-e: For those choices of x_m, the local filters K(x_m, y) (6) are shown in the top row, and the residual correlations of their respective outputs with the outputs of all the rest, P(x_m, y) (9), in the bottom. In principle, the cutoff at r = N, which effectively implements a low-pass filter, should not be as sharp as in (6); it has been shown that the human contrast sensitivity is described well by a smooth cutoff of the type F_r = σ_r^2 / (σ_r^2 + n^2), where n^2 is a measure of the effective noise power (Atick and Redlich, 1992). For this figure, K(x, y) = Σ_{r=1}^{N} ψ_r(x) F_r σ_r^{-1} ψ_r(y), with N = 400 and n = σ_400.
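The kernel family (6) can be illustrated on synthetic eigenmodes. In this sketch (random orthonormal modes and an arbitrary decaying spectrum stand in for the face ensemble, and we use plain sums over the grid rather than the paper's 1/V normalization), n = 1 gives the whitening analysis filters of Eq. (5) and n = 0 gives the residual-correlation projector of Eq. (9):

```python
import numpy as np

rng = np.random.default_rng(1)
V, N = 20, 5
# synthetic stand-ins: an orthonormal basis psi_r(x) and a decaying spectrum sigma_r
Psi, _ = np.linalg.qr(rng.standard_normal((V, V)))
sigma = 1.0 / (1.0 + np.arange(V))

def lfa_kernel(n):
    """Topographic kernel family of Eq. (6):
    K_N^(n)(x, y) = sum_{r<=N} psi_r(x) sigma_r^{-n} psi_r(y)."""
    return (Psi[:, :N] * sigma[:N] ** (-float(n))) @ Psi[:, :N].T

K = lfa_kernel(1)   # whitening analysis filters of Eq. (5): O = K phi
P = lfa_kernel(0)   # residual output correlations of Eq. (9): a rank-N projector
```

The n = 0 member is idempotent (P @ P = P), which is the sense in which the residual correlations are "as close to δ(x, y) as possible" for a rank-N kernel.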
The reconstruction and error are identical, but now utilize the local synthesis filters K_N^(-1) (6)

    φ_N^rec(x) = Σ_{r=1}^{N} a_r σ_r ψ_r(x) = (1/V) Σ_y K_N^(-1)(x, y) O(y)    (7)

and the information (4) is expressed in terms of O(x), which therefore provides the local information density

    -log P[φ_N^rec] ∝ Σ_{r=1}^{N} |a_r|^2 = (1/V) Σ_x |O_N(x)|^2.    (8)

4 Greedy Sparsification of the Smooth Local Information Density

In the case of natural images, N = V, and the outputs of the local filters are completely decorrelated (Atick and Redlich, 1992). For natural objects, the code is low-dimensional (N < V), and residual correlations, some of which are shown in the bottom row of Fig. 3, are unavoidable; they are generally given by the projector to the subband

    P_N(x, y) ≡ (1/T) Σ_t O_N^t(x) O_N^t(y) ≡ K_N^(0)(x, y)    (9)

and are as close to δ(x, y) as possible (Penev and Atick, 1996). The smoothness of the local information density is controlled by the width of the band, as shown in Fig. 1. Since O(x) is band-limited, it can generally be reconstructed exactly from a subsampling over a limited set of grid points M ≡ {x_m}, from the |M| variables {O_m ≡ O(x_m)}_{x_m∈M}, as long as this density is critically sampled (|M| = N). When |M| < N, the maximum-likelihood interpolation in the context of the probability model (8) is given by

    O^rec(x) = Σ_{m=1}^{|M|} O_m α_m(x)    with    α_m(x) = Σ_{n=1}^{|M|} Q^{-1}_{mn} P_n(x)    (10)

where P_m(x) ≡ P(x_m, x), and Q ≡ P|_M is the restriction of P on the set of reference points, with Q_{nm} = P_n(x_m) (Penev, 1998). When O(x) is critically sampled (|M| = N) on a regular grid, V → ∞, and the eigenmodes (1) are sines and cosines, then (10) is the familiar Nyquist interpolation formula. In order to improve numerical stability, irregular subsampling has been proposed (Penev and Atick, 1996), by a data-driven greedy algorithm that successively enlarges the support of the subsampling at the n-th step, M^(n), by optimizing for the residual entropy error, ||O^err(x)||^2 = ||O(x) - O^rec(x)||^2. The LFA code is sparse.
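At critical sampling (|M| = N), the interpolation (10) recovers a band-limited O(x) exactly; a small numpy sketch with synthetic eigenmodes (our own toy setup, not the paper's data or code):

```python
import numpy as np

rng = np.random.default_rng(2)
V, N = 20, 5
Psi, _ = np.linalg.qr(rng.standard_normal((V, V)))   # synthetic eigenmodes
P = Psi[:, :N] @ Psi[:, :N].T        # band projector P_N(x, y), Eq. (9)

O = P @ rng.standard_normal(V)       # any band-limited output map O(x)
M = [0, 4, 9, 14, 19]                # |M| = N sampling points x_m
Q = P[np.ix_(M, M)]                  # Q = P|_M, restriction to the reference points
# Eq. (10): O_rec(x) = sum_m O(x_m) alpha_m(x), alpha_m(x) = sum_n Q^{-1}_{mn} P_n(x)
alpha = np.linalg.solve(Q, P[M, :])
O_rec = O[M] @ alpha                 # exact at critical sampling
```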
In a recurrent neural-network implementation (Penev, 1998), the dense output O(x) of the feed-forward receptive fields, K(x, y), has been interpreted as sub-threshold activation, which is predictively suppressed through lateral inhibition with weights P_m(x), by the set of active units, at {x_m}.3

5 Dimensionality Reduction Beyond the Nyquist Sampling Rate

The efficient allocation of resources by the greedy sparsification is evident in Fig. 4A-B; the most prominent features are picked up first (Fig. 4A), and only a handful of active units are used to describe each individual local feature (Fig. 4B). Moreover, when the dimensionality of the representation is constrained, evidently from Fig. 4C-F, the sparse local code has a much better perceptual quality of the reconstruction than the compact global one.

3 This type of sparseness is not to be confused with "high kurtosis of the output distribution;" in LFA, the non-active units are completely shut down, rather than "only weakly activated."

Figure 4: Efficiency of the sparse allocation of resources. (A): the locations of the first 25 active units, M^(25), of the sparsification with N = 220, n = σ_400 (see Fig. 3), of the example in Fig. 1 and in (C), are overlayed on φ(x) and numbered sequentially. (B): the locations of the active units in M^(64) are overlayed on O(x). For φ(x) in (C) (cf. Fig. 1), reconstructions with a fixed dimensionality, 64, of its deviation from the typical face (ψ_1 in Fig. 2), are shown in the top row of (D, E, F), and the respective errors, in the bottom row. (D): reconstruction from the sparsification {O(x_m)}_{x_m∈M} (10) with M = M^(64) from (B). (E): reconstruction from the first 64 global coefficients (3), N = 64. (F): reconstruction from a subsampling of φ(x) on a regular 8 x 8 grid (64 samples). The errors in (D) and (E) are magnified 5x; in (F), 1x.
Figure 5: The relationship between the dimensionalities of the global and the local factorial codes. The entropy of the KLT reconstruction (8) for the out-of-sample example (cf. Fig. 1) is plotted in (A) with a solid line as a function of the global dimensionality, N. The entropies of the LFA reconstructions (10) are shown with dashed lines, parametrically in the number of active units |M|, for N ∈ {600, 450, 300, 220, 110, 64, 32}, from top to bottom, respectively. The ratios of the residual, ||O^err||^2, and the total, ||O||^2 (8), information are plotted in (B) with dashed lines, parametrically in |M|/N, for the same values of N; a true exponential dependence is plotted with a solid line.

This is an interesting observation. Although the global code is optimal in the amount of captured energy, the greedy sparsification optimizes the amount of captured information, which has been shown to be the biologically relevant measure, at least in the retinal case (Atick and Redlich, 1992). In order to quantify the relationship between the local dimensionality of the representation and the amount of information it captures, rate-distortion curves are shown in Fig. 5. As expected (4), each degree of freedom in the global code contributes approximately equally to the information content. On the other hand, the first few local terms in (10) pull off a sizeable fraction of the total information, with only a modest increase thereafter (Fig. 5A). In all regimes for N, the residual information decreases approximately exponentially with increasing dimensionality ratio |M|/N (Fig. 5B); 90% of the information is contained in a representation with local dimensionality 25%-30% of the respective global one; 99%, with 45%-50%.
This exponential decrease has been shown to be incompatible with the expectation based on the Gaussian (4), or any other spherical, assumption (Penev, 1999). Hence, the LFA representation, by learning the building blocks of natural objects (the local features), reduces not only redundancy, but also dimensionality. Because LFA captures aspects of the sparse, non-Gaussian structure of natural-object ensembles, it preserves practically all of the information, while allocating resources substantially below the Nyquist sampling rate.

6 Discussion

Here we have shown that, for ensembles of natural objects with low-dimensional global factorial representations, sparsification of the local information density allows undersampling, which results in a substantial additional dimensionality reduction. Although more general ensembles, such as those of natural scenes and natural sound, have full-dimensional global representations, the sensory processing of both visual and auditory signals happens in a multi-scale, bandpass fashion. Preliminary results (Penev and Iordanov, 1999) suggest that sparsification within the subbands is possible beyond the respective Nyquist rate; hence, when the sparse dimensionalities of the subbands are added together, the result is an aggregate dimensionality reduction, already at the initial stages of sensory processing.

Acknowledgments

The major part of this research was made possible by the William O. Baker Fellowship, so generously extended to, and gratefully accepted by, the author. He is also indebted to M. J. Feigenbaum for his hospitality and support; to MJF and A. J. Libchaber, for the encouraging and enlightening discussions, scientific and otherwise; to R. M. Shapley, for asking the questions that led to Fig. 5; to J. J. Atick, B. W. Knight, A. G. Dimitrov, L. Sirovich, J. D. Victor, E. Kaplan, L. G. Iordanov, E. P. Simoncelli, G. N. Reeke, J. E. Cohen, B. Klejn, A. Oppenheim, and A. P. Blicher for fruitful discussions.

References

Atick, J. J.
and A. N. Redlich (1992). What does the retina know about natural scenes? Neural Comput. 4(2), 196-210.

Barlow, H. B. (1961). Possible principles underlying the transformation of sensory messages. In W. Rosenblith (Ed.), Sensory Communication, pp. 217-234. Cambridge, MA: M.I.T. Press.

Barlow, H. B., T. P. Kaushal, and G. J. Mitchison (1989). Finding minimum entropy codes. Neural Computation 1(3), 412-423.

Jaynes, E. T. (1982). On the rationale of maximum-entropy methods. Proc. IEEE 70, 939-952.

Lee, D. D. and H. S. Seung (1999). Learning the parts of objects by non-negative matrix factorization. Nature 401(6755), 788-791.

Linsker, R. (1988). Self-organization in a perceptual network. Computer 21, 105-117.

Mel, B. W. (1999). Think positive to find parts. Nature 401(6755), 759-760.

Moghaddam, B. and A. Pentland (1997). Probabilistic visual learning for object representation. IEEE Trans. on Pattern Analysis and Machine Intelligence 19(7), 696-710.

Penev, P. S. (1998). Local Feature Analysis: A Statistical Theory for Information Representation and Transmission. Ph.D. thesis, The Rockefeller University, New York, NY. Available at http://venezia.rockefeller.edu/penev/thesis/.

Penev, P. S. (1999). Dimensionality reduction by sparsification in a local-features representation of human faces. Technical report, The Rockefeller University. ftp://venezia.rockefeller.edu/pubs/PenevPS-NIPS99-reduce.ps.

Penev, P. S. and J. J. Atick (1996). Local Feature Analysis: A general statistical theory for object representation. Network: Comput. Neural Syst. 7(3), 477-500.

Penev, P. S. and L. G. Iordanov (1999). Local Feature Analysis: A flexible statistical framework for dimensionality reduction by sparsification of naturalistic sound. Technical report, The Rockefeller University. ftp://venezia.rockefeller.edu/pubs/PenevPS-ICASSP2000-sparse.ps.

Penev, P. S. and L. Sirovich (2000). The global dimensionality of face space. In Proc. 4th Int'l Conf.
Automatic Face and Gesture Recognition, Grenoble, France, pp. 264-270. IEEE CS.

Poggio, T. and F. Girosi (1998). A sparse representation for function approximation. Neural Comput. 10(6), 1445-1454.

Ruderman, D. L. and W. Bialek (1994). Statistics of natural images: Scaling in the woods. Phys. Rev. Lett. 73(6), 814-817.

Shannon, C. E. (1948). A mathematical theory of communication. Bell System Tech. J. 27, 379-423, 623-656.

Sirovich, L. and M. Kirby (1987). Low-dimensional procedure for the characterization of human faces. J. Opt. Soc. Am. A 4, 519-524.
2000
Incremental and Decremental Support Vector Machine Learning

Gert Cauwenberghs* CLSP, ECE Dept. Johns Hopkins University Baltimore, MD 21218 gert@jhu.edu

Tomaso Poggio CBCL, BCS Dept. Massachusetts Institute of Technology Cambridge, MA 02142 tp@ai.mit.edu

Abstract

An on-line recursive algorithm for training support vector machines, one vector at a time, is presented. Adiabatic increments retain the Kuhn-Tucker conditions on all previously seen training data, in a number of steps each computed analytically. The incremental procedure is reversible, and decremental "unlearning" offers an efficient method to exactly evaluate leave-one-out generalization performance. Interpretation of decremental unlearning in feature space sheds light on the relationship between generalization and geometry of the data.

1 Introduction

Training a support vector machine (SVM) requires solving a quadratic programming (QP) problem in a number of coefficients equal to the number of training examples. For very large datasets, standard numeric techniques for QP become infeasible. Practical techniques decompose the problem into manageable subproblems over part of the data [7, 5] or, in the limit, perform iterative pairwise [8] or component-wise [3] optimization. A disadvantage of these techniques is that they may give an approximate solution, and may require many passes through the dataset to reach a reasonable level of convergence. An on-line alternative, which formulates the (exact) solution for ℓ + 1 training data in terms of that for ℓ data and one new data point, is presented here. The incremental procedure is reversible, and decremental "unlearning" of each training sample produces an exact leave-one-out estimate of generalization performance on the training set.

2 Incremental SVM Learning

Training an SVM "incrementally" on new data by discarding all previous data except their support vectors gives only approximate results [11].
In what follows we consider incremental learning as an exact on-line method to construct the solution recursively, one point at a time. The key is to retain the Kuhn-Tucker (KT) conditions on all previously seen data, while "adiabatically" adding a new data point to the solution.

*On sabbatical leave at CBCL in MIT while this work was performed.

2.1 Kuhn-Tucker conditions

In SVM classification, the optimal separating function reduces to a linear combination of kernels on the training data, f(x) = Σ_j α_j y_j K(x_j, x) + b, with training vectors x_i and corresponding labels y_i = ±1. In the dual formulation of the training problem, the coefficients α_i are obtained by minimizing a convex quadratic objective function under constraints [12]

    min_{0≤α_i≤C}: W = (1/2) Σ_{i,j} α_i Q_{ij} α_j - Σ_i α_i + b Σ_i y_i α_i    (1)

with Lagrange multiplier (and offset) b, and with symmetric positive definite kernel matrix Q_{ij} = y_i y_j K(x_i, x_j). The first-order conditions on W reduce to the Kuhn-Tucker (KT) conditions:

    g_i ≡ ∂W/∂α_i = Σ_j Q_{ij} α_j + y_i b - 1 = y_i f(x_i) - 1    { ≥ 0, α_i = 0;  = 0, 0 < α_i < C;  ≤ 0, α_i = C }    (2)
    ∂W/∂b = Σ_j y_j α_j = 0    (3)

which partition the training data D and corresponding coefficients {α_i, b}, i = 1, ..., ℓ, in three categories, as illustrated in Figure 1 [9]: the set S of margin support vectors strictly on the margin (y_i f(x_i) = 1), the set E of error support vectors exceeding the margin (not necessarily misclassified), and the remaining set R of (ignored) vectors within the margin.

Figure 1: Soft-margin classification SVM training.

2.2 Adiabatic increments

The margin-vector coefficients change value during each incremental step to keep all elements in D in equilibrium, i.e., keep their KT conditions satisfied.
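The partition induced by conditions (2)-(3) is straightforward to compute for a given solution; a minimal numpy sketch (the function name and the two-point toy data are ours):

```python
import numpy as np

def kt_partition(K, y, alpha, b, C, tol=1e-8):
    """Partition training data by the Kuhn-Tucker conditions (2)-(3):
    g_i = y_i f(x_i) - 1 with f(x) = sum_j alpha_j y_j K(x_j, x) + b."""
    g = y * (K @ (alpha * y) + b) - 1.0
    S = np.flatnonzero((alpha > tol) & (alpha < C - tol))  # margin vectors, g = 0
    E = np.flatnonzero(alpha >= C - tol)                   # error vectors,  g <= 0
    R = np.flatnonzero(alpha <= tol)                       # ignored,        g >= 0
    return g, S, E, R

# two points at x = -1, +1 with labels -1, +1: alpha = (1/2, 1/2), b = 0
# satisfies the KT conditions for a linear kernel, so both are margin vectors
x = np.array([-1.0, 1.0])
y = np.array([-1.0, 1.0])
K = np.outer(x, x)                   # linear kernel K(x_i, x_j) = x_i x_j
g, S, E, R = kt_partition(K, y, np.array([0.5, 0.5]), b=0.0, C=1.0)
```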
In particular, the KT conditions are expressed differentially as:

    Δg_i = Q_{ic} Δα_c + Σ_{j∈S} Q_{ij} Δα_j + y_i Δb,    ∀i ∈ D ∪ {c}    (4)
    0 = y_c Δα_c + Σ_{j∈S} y_j Δα_j    (5)

where α_c is the coefficient being incremented, initially zero, of a "candidate" vector outside D. Since g_i ≡ 0 for the margin-vector working set S = {s_1, ..., s_{ℓS}}, the changes in coefficients must satisfy

    Q · [Δb, Δα_{s_1}, ..., Δα_{s_ℓS}]^T = -[y_c, Q_{s_1 c}, ..., Q_{s_ℓS c}]^T Δα_c    (6)

with symmetric but not positive-definite Jacobian Q:

    Q = [ 0, y_{s_1}, ..., y_{s_ℓS} ; y_{s_1}, Q_{s_1 s_1}, ..., Q_{s_1 s_ℓS} ; ... ; y_{s_ℓS}, Q_{s_ℓS s_1}, ..., Q_{s_ℓS s_ℓS} ]    (7)

Thus, in equilibrium,

    Δb = β Δα_c    (8)
    Δα_j = β_j Δα_c,    ∀j ∈ D    (9)

with coefficient sensitivities given by

    [β, β_{s_1}, ..., β_{s_ℓS}]^T = -R [y_c, Q_{s_1 c}, ..., Q_{s_ℓS c}]^T    (10)

where R = Q^{-1}, and β_j ≡ 0 for all j outside S. Substituted in (4), the margins change according to:

    Δg_i = γ_i Δα_c,    ∀i ∈ D ∪ {c}    (11)

with margin sensitivities

    γ_i = Q_{ic} + Σ_{j∈S} Q_{ij} β_j + y_i β,    (12)

and γ_i ≡ 0 for all i in S.

2.3 Bookkeeping: upper limit on increment Δα_c

It has been tacitly assumed above that Δα_c is small enough so that no element of D moves across S, E and/or R in the process. Since the α_j and g_i change with α_c through (9) and (11), some bookkeeping is required to check each of the following conditions, and determine the largest possible increment Δα_c accordingly:

1. g_c ≤ 0, with equality when c joins S;
2. α_c ≤ C, with equality when c joins E;
3. 0 ≤ α_j ≤ C, ∀j ∈ S, with equality 0 when j transfers from S to R, and equality C when j transfers from S to E;
4. g_i ≤ 0, ∀i ∈ E, with equality when i transfers from E to S;
5. g_i ≥ 0, ∀i ∈ R, with equality when i transfers from R to S.

2.4 Recursive magic: R updates

To add candidate c to the working margin-vector set S, R is expanded as:

    R ← [ R, 0 ; 0, 0 ] + (1/γ_c) [β, β_{s_1}, ..., β_{s_ℓS}, 1]^T [β, β_{s_1}, ..., β_{s_ℓS}, 1]    (13)

The same formula applies to add any vector (not necessarily the candidate) c to S, with parameters β, β_j and γ_c calculated as in (10) and (12). The expansion of R, as incremental learning itself, is reversible.
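The sensitivities (8)-(12) follow from a single solve with the Jacobian (7); a numpy sketch on a toy positive-definite Q (our own stand-in, not SVM training code):

```python
import numpy as np

def sensitivities(Q, y, S, c):
    """Coefficient sensitivities beta (Eqs. 8-10) and the candidate's margin
    sensitivity gamma_c (Eq. 12), given the full matrix Q_ij = y_i y_j K(x_i, x_j),
    labels y, margin set S, and candidate index c."""
    ls = len(S)
    Jac = np.zeros((ls + 1, ls + 1))          # Jacobian of Eq. (7)
    Jac[0, 1:] = Jac[1:, 0] = y[S]
    Jac[1:, 1:] = Q[np.ix_(S, S)]
    R = np.linalg.inv(Jac)                    # R = Q^{-1} of Section 2.4
    v = np.concatenate(([y[c]], Q[S, c]))
    beta = -R @ v                             # [beta; beta_s1; ...], Eq. (10)
    gamma_c = Q[c, c] + Q[S, c] @ beta[1:] + y[c] * beta[0]   # Eq. (12)
    return beta, gamma_c

# toy equilibrium check on a random positive-definite stand-in for Q
rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6))
Qf = A @ A.T + 6.0 * np.eye(6)
yf = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])
S, c = np.array([0, 1, 2]), 5
beta, gamma_c = sensitivities(Qf, yf, S, c)
```

The returned beta makes the differential KT conditions (4)-(5) vanish on the margin set for a unit increment of α_c, which is exactly the equilibrium that the linear system (6) enforces.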
To remove a margin vector k from S, R is contracted as:

    R_ij ← R_ij - R_kk^{-1} R_ik R_kj,    ∀i, j ∈ S ∪ {0}; i, j ≠ k    (14)

where index 0 refers to the b-term. The R update rules (13) and (14) are similar to on-line recursive estimation of the covariance of (sparsified) Gaussian processes [2].

Figure 2: Incremental learning. A new vector, initially (for α_c = 0) classified with negative margin g_c < 0, becomes a new margin or error vector.

2.5 Incremental procedure

Let ℓ → ℓ + 1, by adding point c (candidate margin or error vector) to D: D^{ℓ+1} = D^ℓ ∪ {c}. Then the new solution {α_i^{ℓ+1}, b^{ℓ+1}}, i = 1, ..., ℓ + 1, is expressed in terms of the present solution {α_i^ℓ, b^ℓ}, the present Jacobian inverse R, and the candidate x_c, y_c, as:

Algorithm 1 (Incremental Learning, ℓ → ℓ + 1)
1. Initialize α_c to zero;
2. If g_c > 0, terminate (c is not a margin or error vector);
3. If g_c ≤ 0, apply the largest possible increment α_c so that (the first) one of the following conditions occurs:
   (a) g_c = 0: Add c to margin set S, update R accordingly, and terminate;
   (b) α_c = C: Add c to error set E, and terminate;
   (c) Elements of D^ℓ migrate across S, E, and R ("bookkeeping," section 2.3): Update membership of elements and, if S changes, update R accordingly;
   and repeat as necessary.

The incremental procedure is illustrated in Figure 2. Old vectors, from previously seen training data, may change status along the way, but the process of adding the training data c to the solution converges in a finite number of steps.

2.6 Practical considerations

The trajectory of an example incremental training session is shown in Figure 3. The algorithm yields results identical to those at convergence using other QP approaches [7], with comparable speeds on various datasets ranging up to several thousand training points ℓ.1
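The contraction (14) is a rank-one downdate of the inverse. The sketch below (a random symmetric stand-in for Q, since the actual Jacobian (7) is built from the margin set) checks that it indeed yields the inverse of the reduced matrix:

```python
import numpy as np

# random symmetric invertible stand-in for the Jacobian Q and its inverse R
rng = np.random.default_rng(3)
n, k = 6, 2
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)
R = np.linalg.inv(Q)

# Eq. (14): R_ij <- R_ij - R_kk^{-1} R_ik R_kj, for all i, j != k
keep = [i for i in range(n) if i != k]
R_new = R[np.ix_(keep, keep)] - np.outer(R[keep, k], R[k, keep]) / R[k, k]
```

This is the standard Schur-complement identity for the inverse of a submatrix, which is what makes both the expansion (13) and the contraction (14) exact rather than approximate updates.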
A practical on-line variant for larger datasets is obtained by keeping track only of a limited set of "reserve" vectors: R = \{i \in D \mid 0 < g_i < t\}, and discarding all data for which g_i \ge t. For small t, this implies a small overhead in memory over S and E. The larger t, the smaller the probability of missing a future margin or error vector in previous data. The resulting storage requirements are dominated by that for the inverse Jacobian R, which scales as (\ell_S)^2, where \ell_S is the number of margin support vectors, #S.

3 Decremental "Unlearning"

Leave-one-out (LOO) is a standard procedure in predicting the generalization power of a trained classifier, both from a theoretical and empirical perspective [12]. It is naturally implemented by decremental unlearning, adiabatic reversal of incremental learning, on each of the training data from the full trained solution. Similar (but different) bookkeeping of elements migrating across S, E and R applies as in the incremental case.

1 Matlab code and data are available at http://bach.ece.jhu.edu/pub/gert/svm/incremental.

Figure 3: Trajectory of coefficients \alpha_i as a function of iteration step during training, for \ell = 100 non-separable points in two dimensions, with C = 10, and using a Gaussian kernel with \sigma = 1. The data sequence is shown on the left.

Figure 4: Leave-one-out (LOO) decremental unlearning (\alpha_c \rightarrow 0) for estimating generalization performance, directly on the training data. g_c^{\setminus c} < -1 reveals a LOO classification error.

3.1 Leave-one-out procedure

Let \ell \rightarrow \ell - 1, by removing point c (margin or error vector) from D: D^{\setminus c} = D \setminus \{c\}. The solution \{\alpha_i^{\setminus c}, b^{\setminus c}\} is expressed in terms of \{\alpha_i, b\}, R and the removed point x_c, y_c. The solution yields g_c^{\setminus c}, which determines whether leaving c out of the training set generates a classification error (g_c^{\setminus c} < -1).
Starting from the full \ell-point solution:

Algorithm 2 (Decremental Unlearning, \ell \rightarrow \ell - 1, and LOO Classification)
1. If c is not a margin or error vector: Terminate, "correct" (c is already left out, and correctly classified);
2. If c is a margin or error vector with g_c < -1: Terminate, "incorrect" (by default as a training error);
3. If c is a margin or error vector with g_c \ge -1, apply the largest possible decrement of \alpha_c so that (the first) one of the following conditions occurs:
(a) g_c < -1: Terminate, "incorrect";
(b) \alpha_c = 0: Terminate, "correct";
(c) Elements of D^{\ell} migrate across S, E, and R: Update membership of elements and, if S changes, update R accordingly;
and repeat as necessary.

The leave-one-out procedure is illustrated in Figure 4.

Figure 5: Trajectory of the LOO margin g_c as a function of the leave-one-out coefficient \alpha_c. The data and parameters are as in Figure 3.

3.2 Leave-one-out considerations

If an exact LOO estimate is requested, two passes through the data are required. The LOO pass has similar run-time complexity and memory requirements as the incremental learning procedure. This is significantly better than the conventional approach to empirical LOO evaluation, which requires \ell (partial but possibly still extensive) training sessions. There is a clear correspondence between generalization performance and the LOO margin sensitivity \gamma_c. As shown in Figure 4, the value of the LOO margin g_c^{\setminus c} is obtained from the sequence of g_c vs. \alpha_c segments for each of the decrement steps, and is thus determined by their slopes \gamma_c. Incidentally, the LOO approximation using linear response theory in [6] corresponds to the first segment of the LOO procedure, effectively extrapolating the value of g_c^{\setminus c} from the initial value of \gamma_c. This simple LOO approximation gives satisfactory results in most (though not all) cases, as illustrated in the example LOO session of Figure 5.
Recent work in statistical learning theory has sought improved generalization performance by considering non-uniformity of distributions in feature space [13] or non-uniformity in the kernel matrix eigenspectrum [10]. A geometrical interpretation of decremental unlearning, presented next, sheds further light on the dependence of generalization performance, through \gamma_c, on the geometry of the data.

4 Geometric Interpretation in Feature Space

The differential Kuhn-Tucker conditions (4) and (5) translate directly in terms of the sensitivities \gamma_i and \beta_j as

\gamma_i = Q_{ic} + \sum_{j\in S} Q_{ij}\,\beta_j + y_i\,\beta, \quad \forall i \in D \cup \{c\}   (15)

0 = y_c + \sum_{j\in S} y_j\,\beta_j.   (16)

Through the nonlinear map X_i = y_i\,\varphi(x_i) into feature space, the kernel matrix elements reduce to linear inner products:

Q_{ij} = y_i y_j K(x_i, x_j) = X_i \cdot X_j, \quad \forall i, j   (17)

and the KT sensitivity conditions (15) and (16) in feature space become

\gamma_i = X_i \cdot \Big(X_c + \sum_{j\in S} X_j\,\beta_j\Big) + y_i\,\beta, \quad \forall i \in D \cup \{c\}   (18)

0 = y_c + \sum_{j\in S} y_j\,\beta_j.   (19)

Since \gamma_i = 0, \forall i \in S, (18) and (19) are equivalent to minimizing a functional:

\min_{\beta}: \; W_c = \tfrac{1}{2}\Big(X_c + \sum_{j\in S} X_j\,\beta_j\Big)^2,   (20)

subject to the equality constraint (19) with Lagrange parameter \beta. Furthermore, the optimal value of W_c immediately yields the sensitivity \gamma_c, from (18):

\gamma_c = 2 W_c = \Big(X_c + \sum_{j\in S} X_j\,\beta_j\Big)^2 \ge 0.   (21)

In other words, the distance in feature space between sample c and its projection on S along (16) determines, through (21), the extent to which leaving out c affects the classification of c. Note that only margin support vectors are relevant in (21), and not the error vectors which otherwise contribute to the decision boundary.

5 Concluding Remarks

Incremental learning and, in particular, decremental unlearning offer a simple and computationally efficient scheme for on-line SVM training and exact leave-one-out evaluation of the generalization performance on the training data.
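For a linear kernel, the identity \gamma_c = 2W_c in (21) can be verified numerically: the sensitivity computed from kernel matrix entries equals the squared norm of X_c combined with the margin vectors. A toy sketch under assumed variable names, not the paper's code:

```python
import numpy as np

# Feature-space vectors X_i = y_i * phi(x_i); for a linear kernel phi(x) = x.
rng = np.random.default_rng(2)
x = rng.normal(size=(5, 4))
y = np.array([1.0, -1.0, 1.0, -1.0, 1.0])
X = y[:, None] * x
Q = X @ X.T                              # Q_ij = X_i . X_j, eq. (17)
S, c = [0, 1, 2], 4

# Solve the bordered system (6)-(10) for the sensitivities beta.
J = np.zeros((4, 4))
J[0, 1:] = J[1:, 0] = y[S]
J[1:, 1:] = Q[np.ix_(S, S)]
b = -np.linalg.solve(J, np.concatenate(([y[c]], Q[S, c])))
beta, beta_S = b[0], b[1:]

gamma_c = Q[c, c] + Q[c, S] @ beta_S + y[c] * beta   # eq. (12) at i = c
resid = X[c] + X[S].T @ beta_S                       # X_c + sum_j X_j beta_j
assert np.isclose(gamma_c, resid @ resid)            # eq. (21): gamma_c = 2 W_c
```

The assertion holds because the equilibrium conditions \gamma_j = 0 on S, together with the constraint (19), cancel the cross terms in the squared norm.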
The procedures can be directly extended to a broader class of kernel learning machines with convex quadratic cost functional under linear constraints, including SV regression. The algorithm is intrinsically on-line and extends to query-based learning methods [1]. Geometric interpretation of decremental unlearning in feature space elucidates a connection, similar to [13], between generalization performance and the distance of the data from the subspace spanned by the margin vectors.

References

[1] C. Campbell, N. Cristianini and A. Smola, "Query Learning with Large Margin Classifiers," in Proc. 17th Int. Conf. Machine Learning (ICML 2000), Morgan Kaufmann, 2000.
[2] L. Csató and M. Opper, "Sparse Representation for Gaussian Process Models," in Adv. Neural Information Processing Systems (NIPS'2000), vol. 13, 2001.
[3] T.-T. Frieß, N. Cristianini and C. Campbell, "The Kernel Adatron Algorithm: A Fast and Simple Learning Procedure for Support Vector Machines," in 15th Int. Conf. Machine Learning, Morgan Kaufmann, 1998.
[4] T.S. Jaakkola and D. Haussler, "Probabilistic Kernel Methods," Proc. 7th Int. Workshop on Artificial Intelligence and Statistics, 1998.
[5] T. Joachims, "Making Large-Scale Support Vector Machine Learning Practical," in Schölkopf, Burges and Smola, Eds., Advances in Kernel Methods - Support Vector Learning, Cambridge, MA: MIT Press, 1998, pp 169-184.
[6] M. Opper and O. Winther, "Gaussian Processes and SVM: Mean Field Results and Leave-One-Out," in Smola, Bartlett, Schölkopf and Schuurmans, Eds., Advances in Large Margin Classifiers, Cambridge, MA: MIT Press, 2000, pp 43-56.
[7] E. Osuna, R. Freund and F. Girosi, "An Improved Training Algorithm for Support Vector Machines," Proc. 1997 IEEE Workshop on Neural Networks for Signal Processing, pp 276-285, 1997.
[8] J.C.
Platt, "Fast Training of Support Vector Machines Using Sequential Minimal Optimization," in Schölkopf, Burges and Smola, Eds., Advances in Kernel Methods - Support Vector Learning, Cambridge, MA: MIT Press, 1998, pp 185-208.
[9] M. Pontil and A. Verri, "Properties of Support Vector Machines," Neural Computation, vol. 10, pp 955-974, 1997.
[10] B. Schölkopf, J. Shawe-Taylor, A.J. Smola and R.C. Williamson, "Generalization Bounds via Eigenvalues of the Gram Matrix," NeuroCOLT Technical Report 99-035, 1999.
[11] N.A. Syed, H. Liu and K.K. Sung, "Incremental Learning with Support Vector Machines," in Proc. Int. Joint Conf. on Artificial Intelligence (IJCAI-99), 1999.
[12] V. Vapnik, The Nature of Statistical Learning Theory, New York: Springer-Verlag, 1995.
[13] V. Vapnik and O. Chapelle, "Bounds on Error Expectation for SVM," in Smola, Bartlett, Schölkopf and Schuurmans, Eds., Advances in Large Margin Classifiers, Cambridge, MA: MIT Press, 2000.
2000
Active inference in concept learning Jonathan D. Nelson Department of Cognitive Science University of California, San Diego La Jolla, CA 92093-0515 jnelson@cogsci.ucsd.edu Abstract Javier R. Movellan Department of Cognitive Science University of California, San Diego La Jolla, CA 92093-0515 movellan@inc.ucsd.edu People are active experimenters, not just passive observers, constantly seeking new information relevant to their goals. A reasonable approach to active information gathering is to ask questions and conduct experiments that maximize the expected information gain, given current beliefs (Fedorov 1972, MacKay 1992, Oaksford & Chater 1994). In this paper we present results on an exploratory experiment designed to study people's active information gathering behavior on a concept learning task (Tenenbaum 2000). The results of the experiment are analyzed in terms of the expected information gain of the questions asked by subjects. In scientific inquiry and in everyday life, people seek out information relevant to perceptual and cognitive tasks. Scientists perform experiments to uncover causal relationships; people saccade to informative areas of visual scenes, turn their head towards surprising sounds, and ask questions to understand the meaning of concepts. Consider a person learning a foreign language, who notices that a particular word, "tikos," is used for baby moose, baby penguins, and baby cheetahs. Based on those examples, he or she may attempt to discover what tikos really means. Logically, there are an infinite number of possibilities. For instance, tikos could mean baby animals, or simply animals, or even baby animals and antique telephones. Yet a few examples are enough for human learners to form strong intuitions about what meanings are most likely. Suppose you can point to a baby duck, an adult duck, or an antique telephone, to inquire whether that object is "tikos." Your goal is to figure out what "tikos" means. Which question would you ask? Why?
When the goal is to learn as much as possible about a set of concepts, a reasonable strategy is to choose those questions which maximize the expected information gain, given current beliefs (Fedorov 1972, MacKay 1992, Oaksford & Chater 1994). In this paper we present preliminary results on an experiment designed to quantify the information value of the questions asked by subjects on a concept learning task.

1.1 Tenenbaum's number concept task

Tenenbaum (2000) developed a Bayesian model of number concept learning. The model describes the intuitive beliefs shared by humans about simple number concepts, and how those beliefs change as new information is obtained, in terms of subjective probabilities. Suppose a subject has been told that the number 16 is consistent with some unknown number concept. With its current parameters, the model predicts that the subjective probability that the number 8 will also be consistent with that concept is about 0.35. Tenenbaum (2000) included both mathematical and interval concepts in his number concept space. Interval concepts were sets of numbers between n and m, where 1 <= n <= 100 and n <= m <= 100, such as numbers between 5 and 8, and numbers between 10 and 35. There were 33 mathematical concepts: odd numbers, even numbers, square numbers, cube numbers, prime numbers, multiples of n (3 <= n <= 12), powers of n (2 <= n <= 10), and numbers ending in n (1 <= n <= 9). Tenenbaum conducted a number concept learning experiment with 8 subjects and found a correlation of 0.99 between the average probability judgments made by subjects and the model predictions. To evaluate how well Tenenbaum's model described our population of subjects, we replicated his study, with 81 subjects. We obtained a correlation of .87 between model predictions and average subject responses.
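The mathematical part of this hypothesis space is easy to enumerate. The sketch below is our own helper, not Tenenbaum's code; interval concepts would be added analogously as sets {n, ..., m}, and whether "powers of n" starts at n^1 (as assumed here) is our choice.

```python
def math_concepts():
    """The 33 mathematical concepts described above, as sets over 1..100."""
    nums = range(1, 101)
    cs = {
        "odd numbers": {n for n in nums if n % 2 == 1},
        "even numbers": {n for n in nums if n % 2 == 0},
        "square numbers": {n * n for n in range(1, 11)},
        "cube numbers": {n ** 3 for n in range(1, 5)},
        "prime numbers": {n for n in range(2, 101)
                          if all(n % d for d in range(2, n))},
    }
    for k in range(3, 13):                     # multiples of 3..12
        cs[f"multiples of {k}"] = set(range(k, 101, k))
    for k in range(2, 11):                     # powers of 2..10
        cs[f"powers of {k}"] = {k ** e for e in range(1, 8) if k ** e <= 100}
    for k in range(1, 10):                     # numbers ending in 1..9
        cs[f"numbers ending in {k}"] = {n for n in nums if n % 10 == k}
    return cs
```

With 16 as the lone example, the consistent hypotheses in this space include even numbers, square numbers, multiples of 4, powers of 2, and powers of 4, matching the ambiguity the model is designed to capture.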
Based on these results we decided to extend Tenenbaum's experiment, and allow subjects to actively ask questions about number concepts, instead of just observing examples given to them. We used Tenenbaum's model to obtain estimates of the subjective probabilities of the different concepts given the examples at hand. We hypothesized that the questions asked by subjects would have high information value, when information value was calculated according to the probability estimates produced by Tenenbaum's model.

1.2 Infomax sampling

Consider the following problem. A subject is given examples of numbers that are consistent with a particular concept, but is not told the concept itself. Then the subject is allowed to pick a number, to test whether it follows the same concept as the examples given. For example, the subject may be given the numbers 2, 6 and 4 as examples of the underlying concept and she may then choose to ask whether the number 8 is also consistent with the concept. Her goal is to guess the correct concept. We formalize the problem using standard probabilistic notation: random variables are represented with capital letters and specific values taken by those variables are represented with small letters. The random variable C represents the correct concept on a given trial. Notation of the form "C = c" is shorthand for the event that the random variable C takes the specific value c. We represent the examples given to the subjects by the random vector X. The subject's beliefs about which concepts are probable prior to the presentation of any examples are represented by the probability function p(C = c). The subject's beliefs after the examples are presented are represented by p(C = c | X = x). For example, if c is the concept even numbers and x the numbers "2, 6, 4", then p(C = c | X = x) represents subjects' posterior probability that the correct concept is even numbers, given that 2, 6, and 4 are positive examples of that concept.
The binary random variable Y_n represents whether the number n is a member of the correct concept. For example, Y_8 = 1 represents the event that 8 is an element of the correct concept, and Y_8 = 0 the event that 8 is not. In our experiment subjects are allowed to ask a question of the form "is the number n an element of the concept?". This is equivalent to finding the value taken by the random variable Y_n, in our formalism. We evaluate how good a question is in terms of the information about the correct concept expected for that question, given the example vector X = x. The expected information gain for the question "Is the number n an element of the concept?" is given by the following formula:

I(C; Y_n | X = x) = H(C | X = x) - H(C | Y_n, X = x),

where

H(C | X = x) = -\sum_{c} P(C = c | X = x) \log_2 P(C = c | X = x)

is the uncertainty (entropy) about the concept C given the example numbers in x, and

H(C | Y_n, X = x) = -\sum_{c \in C} P(C = c | X = x) \sum_{v=0}^{1} P(Y_n = v | C = c, X = x) \log_2 P(C = c | Y_n = v, X = x)

is the uncertainty about C given the active question Y_n and the example vector x. We consider only binary questions, of the form "is n consistent with the concept?", so the maximum information value of any question in our experiment is one bit. Note how information gain is relative to a probability model P of the subjects' internal beliefs. Here we approximate these subjective probabilities using Tenenbaum's (2000) number concept model. An information-maximizing strategy (infomax) prescribes asking the question with the highest expected information gain, i.e., the question that minimizes the expected entropy over all concepts. Another strategy of interest is confirmatory sampling, which consists of asking questions whose answers are most likely to confirm current beliefs. In other domains it has been proposed that subjects have a bias to use confirmatory strategies regardless of their information value (Klayman & Ha 1987, Popper 1959, Wason 1960).
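The expected information gain above is simple to compute when, as here, membership Y_n is deterministic given the concept: the posterior just splits by the answer. A hedged sketch with our own function name, representing concepts as frozensets of numbers:

```python
import math

def info_gain(posterior, n):
    """I(C; Y_n | X=x): expected reduction in entropy about the concept
    from asking "is n an element of the concept?".

    posterior: dict mapping frozenset concepts to p(C=c | X=x).
    """
    def entropy(dist):
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    h_prior = entropy(posterior)
    p_yes = sum(p for c, p in posterior.items() if n in c)
    h_post = 0.0
    for answer, p_ans in ((True, p_yes), (False, 1.0 - p_yes)):
        if p_ans <= 0:
            continue
        # Renormalized posterior over concepts consistent with this answer.
        cond = {c: p / p_ans for c, p in posterior.items()
                if (n in c) == answer}
        h_post += p_ans * entropy(cond)
    return h_prior - h_post
```

With two equally likely concepts, a number that separates them is worth exactly one bit, and a number contained in both is worth nothing, matching the one-bit ceiling noted above.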
Thus, it is interesting to see whether people use a confirmatory strategy on our concept learning task and to evaluate how informative such a strategy would be.

2 Human sampling in the number concept game

Twenty-nine undergraduate students, recruited from Cognitive Science Department classes at the University of California, San Diego, participated in the experiment.1 Subjects gave informed consent, and received either partial course credit for required study participation, or extra course credit, for their participation. The experiment began with the following instructions:

Often it is possible to have a good idea about the state of the world, without completely knowing it. People often learn from examples, and this study explores how people do so. In this experiment, you will be given examples of a hidden number rule. These examples will be randomly chosen from the numbers between 1 and 100 that follow the rule. The true rule will remain hidden, however. Then you will be able to test an additional number, to see if it follows that same hidden rule. Finally, you will be asked to give your best estimation of what the true hidden rule is, and the chances that you are right. For instance, if the true hidden rule were "multiples of 11", you might see the examples 22 and 66. If you thought the rule were "multiples of 11", but also possibly "even numbers", you could test a number of your choice, between 1-100, to see if it also follows the rule.

1 Full stimuli are posted at http://hci.ucsd.edu/~jnelson/pages/study.html

On each trial subjects first saw a set of examples from the correct concept. For instance, if the concept were even numbers, subjects might see the numbers "2, 6, 4" as examples. Subjects were then given the opportunity to test a number of their choice. Subjects were given feedback on whether the number they tested was an element of the correct concept.
We wrote a computer program that uses the probability estimates provided by Tenenbaum's (2000) model to compute the information value of any possible question in the number concept task. We used this program to evaluate the information value of the questions asked by subjects, the questions asked by an infomax strategy, the questions asked by a confirmatory strategy, and the questions asked by a random sampling strategy. The infomax strategy was determined as described above. The random strategy consisted of randomly testing a number between 1 and 100 with equal probability. The confirmatory strategy consisted of testing the number (excluding the examples) that had the highest posterior probability, as given by Tenenbaum's model, of being consistent with the correct concept.

3 Results

Results for nine representative trials are discussed. The trials are grouped into three types, according to the posterior beliefs of Tenenbaum's model after the example numbers have been seen. The average information value of subjects' questions, and of each simulated sampling strategy, are given in Table 1. The specific questions subjects asked are considered in Sections 3.1-3.3.

Trial type      Single example,        Multiple example,         Interval
                high uncertainty       low uncertainty
Examples        16    60    37      16,8,  60,80,  81,25,   16,23,  60,51,  81,98,
                                    2,64   10,30   4,36     19,20   57,55   96,93
Subjects        .70   .72   .73     .00    .06     .00      .47     .37     .31
Infomax         .97  1.00  1.00     .01    .32     .00     1.00     .99    1.00
Confirmation    .97  1.00  1.00     .00    .00     .00      .00     .00     .00
Random          .35   .54   .52     .00    .04     .00      .17     .20     .14

Table 1. Information value, as assessed using the subjective probabilities in Tenenbaum's number concept model, of several sampling strategies. Information value is measured in bits.

3.1 Single example, high uncertainty trials

On these trials Tenenbaum's model is relatively uncertain about the correct concepts and gives some probability to many concepts.
Interestingly, the confirmatory strategy is identical to the infomax strategy on each of these trials, suggesting that a confirmatory sampling strategy may be optimal under conditions of high uncertainty. Consider the trial with the example number 16. On this trial, the concepts powers of 4, powers of 2, and square numbers each have moderate posterior probability (.28, .14, and .09, respectively). These trials provided the best qualitative agreement between infomax predictions and subjects' sampling behavior. Unfortunately the results are inconclusive, since on these trials both the infomax and confirmatory strategies make the same predictions. On the trial with the example number 16, subjects' modal response (8 of 29 subjects) was to test the number 4. This was actually the most informative question, according to Tenenbaum's model. Several subjects (8 of 29) tested other square numbers, such as 49, 36, or 25, which also have high information value, relative to Tenenbaum's number concept model (Figure 1). Subjects' questions also had a high information value on the trial with the example number 37, and the trial with the example number 60.

Figure 1. Information value of sampling each number, in bits, given that the number 16 is consistent with the correct concept.

3.2 Multiple example, low uncertainty trials

On these trials Tenenbaum's model gives a single concept very high posterior probability. When there is little or no information value in any question, infomax makes no particular predictions regarding which questions are best. Most subjects tested numbers that were consistent with the most likely concept, but not specifically given as examples. This behavior matches the confirmatory strategy.
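The confirmatory choice just described (test the unseen number most likely to belong to the concept) can be sketched in a few lines. The representation of concepts as frozensets with posterior weights is our own assumption, not the paper's code:

```python
def confirmatory_question(posterior, examples):
    """Confirmatory strategy: test the non-example number with the highest
    posterior probability of belonging to the correct concept."""
    def p_member(n):
        # Marginal probability that n is in the concept, under the posterior.
        return sum(p for c, p in posterior.items() if n in c)
    return max((n for n in range(1, 101) if n not in examples), key=p_member)
```

For the examples 60, 80, 10, and 30, with posterior mass .94 on multiples of 10 and .06 on multiples of 5, this strategy picks an unseen multiple of 10, which (as discussed below) carries no information when the concept is already nearly certain.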
On the trial with the examples 81, 25, 4, and 36, the model gave probability 1.00 to the concept square numbers. On this trial, the most commonly tested numbers were 49 (11 of 29 subjects) and 9 (4 of 29 subjects). No sample is expected to be informative on this trial, because overall uncertainty is so low. On the trial with the example numbers 60, 80, 10, and 30, the model gave probability .94 to the concept multiples of 10, and probability .06 to the concept multiples of 5. On this trial, infomax tested odd multiples of 5, such as 15, each of which had expected information gain of 0.3 bits. The confirmatory strategy tested non-example multiples of 10, such as 50, and had an information value of 0 bits. Most subjects (17 of 29) followed the confirmatory strategy; some subjects (5 of 29) followed the infomax strategy.

3.3 Interval trials

It is desirable to consider situations in which: (1) the questions asked by the infomax strategy are different than the questions asked by the confirmatory strategy, and (2) the choice of questions matters, because some questions have high information value. Trials in which the correct concept is an interval of numbers provide such situations. Consider the trial with the example numbers 16, 23, 19, and 20. On this trial, and the other "interval" trials, the concept learning model is certain that the correct concept is of the form numbers between m and n, because the observed examples rule out all the other concepts. However, the model is not certain of the precise endpoints, such as whether the concept is numbers between 16 and 23, or numbers between 16 and 24, etc. Infomax tests numbers near to, but outside of, the range spanned by the examples, such as 14 or 26, in this example (see Figure 2, left). What do subjects do? Two patterns of behavior, each observed on all three interval trials, can be distinguished. The first is to test numbers outside of, but near to, the range of observed examples.
On the trial with example numbers between 16 and 23, a total of 15 of 29 subjects tested numbers between 10-15, or 24-30. This behavior is qualitatively similar to infomax. The second pattern of behavior, which is shown by about one third of the subjects, consists of testing (non-example) numbers within the range spanned by the observed examples. If one is certain that the concept at hand is an interval, then asking about numbers within the range spanned by the observed examples provides no information (Figure 2, left). Yet some subjects consistently ask about these numbers. Based on this surprising result, we went back to the results of Experiment 1, and reanalyzed the accuracy of Tenenbaum's model on trials in which the model gave high probability to interval concepts. We found that on such trials the model significantly deviated from the subjects' beliefs. In particular, subjects gave probability lower than one that non-example numbers within the range spanned by observed examples were consistent with the true concept. The model, however, gives all numbers within the range spanned by the examples probability 1. See Figure 2, right, and note the difference between subjective probabilities (points) and the model's estimate of these probabilities (solid line). We hypothesize that the apparent uninformativeness of the questions asked by subjects on these trials is due to imperfections in the current version of Tenenbaum's model, and we are working to improve the model's descriptive accuracy to test this hypothesis.

Figure 2. Information value, relative to Tenenbaum's model, of sampling each number, given the example numbers 16, 23, 19, and 20 (left). In this case the model is certain that the correct concept is some interval of numbers; thus, it is not informative to ask questions about numbers within the range spanned by the examples. At right, the probability that each number is consistent with the correct concept.
Our subjects' mean probability rating is denoted with points (n = 81, from our first experiment). Tenenbaum's model's approximation of these probabilities is denoted with the solid line.

4 Discussion

This paper presents exploratory work in progress that attempts to analyze active inference from the point of view of the rational approach to cognition (Anderson, 1990; Oaksford and Chater, 1994). First we performed a large scale replication of Tenenbaum's number concept experiment (Tenenbaum, 2000), in which subjects estimated the probability that each of several test numbers was consistent with the same concept as some example numbers. We found a correlation of 0.87 between our subjects' average probability estimates and the probabilities predicted by Tenenbaum's model. We then extended Tenenbaum's experiment by allowing subjects to ask questions about the concepts at hand. Our goal was to evaluate the information value of the questions asked by subjects. We found that in some situations, a simple confirmatory strategy maximizes information gain. We also found that the current version of Tenenbaum's number concept model has significant imperfections, which limit its ability to estimate the informativeness of subjects' questions. We expect that modifications to Tenenbaum's model will enable infomax to predict sampling behavior in the number concept domain. We are performing simulations to explore this point. We are also working to generalize the infomax analysis of active inference to more complex and natural problems.

Acknowledgments

We thank Josh Tenenbaum, Gedeon Deak, Jeff Elman, Iris Ginzburg, Craig McKenzie, and Terry Sejnowski for their ideas; and Kent Wu and Dan Bauer for their help in this research. The first author was partially supported by a Pew graduate fellowship during this research.

References

Anderson, J. R. (1990). The adaptive character of thought. New Jersey: Erlbaum.

Fedorov, V. V. (1972). Theory of optimal experiments.
New York: Academic Press.

Klayman, J.; Ha, Y. (1987). Confirmation, disconfirmation, and information in hypothesis testing. Psychological Review, 94, 211-228.

MacKay, D. J. C. (1992). Information-based objective functions for active data selection. Neural Computation, 4, 590-604.

Oaksford, M.; Chater, N. (1994). A rational analysis of the selection task as optimal data selection. Psychological Review, 101, 608-631.

Popper, K. R. (1959). The logic of scientific discovery. London: Hutchinson.

Tenenbaum, J. B. (2000). Rules and similarity in concept learning. In Advances in Neural Information Processing Systems, 12, Solla, S. A., Leen, T. K., Mueller, K.R. (eds.), 59-65.

Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12, 129-140.
2000
The Interplay of Symbolic and Subsymbolic Processes in Anagram Problem Solving David B. Grimes and Michael C. Mozer Department of Computer Science and Institute of Cognitive Science University of Colorado, Boulder, CO 80309-0430 USA {grimes,mozer}@cs.colorado.edu Abstract Although connectionist models have provided insights into the nature of perception and motor control, connectionist accounts of higher cognition seldom go beyond an implementation of traditional symbol-processing theories. We describe a connectionist constraint satisfaction model of how people solve anagram problems. The model exploits statistics of English orthography, but also addresses the interplay of subsymbolic and symbolic computation by a mechanism that extracts approximate symbolic representations (partial orderings of letters) from subsymbolic structures and injects the extracted representation back into the model to assist in the solution of the anagram. We show the computational benefit of this extraction-injection process and discuss its relationship to conscious mental processes and working memory. We also account for experimental data concerning the difficulty of anagram solution based on the orthographic structure of the anagram string and the target word. Historically, the mind has been viewed from two opposing computational perspectives. The symbolic perspective views the mind as a symbolic information processing engine. According to this perspective, cognition operates on representations that encode logical relationships among discrete symbolic elements, such as stacks and structured trees, and cognition involves basic operations such as means-ends analysis and best-first search. In contrast, the subsymbolic perspective views the mind as performing statistical inference, and involves basic operations such as constraint-satisfaction search. The data structures on which these operations take place are numerical vectors.
In some domains of cognition, significant progress has been made through analysis from one computational perspective or the other. The thesis of our work is that many of these domains might be understood more completely by focusing on the interplay of subsymbolic and symbolic information processing. Consider the higher-cognitive domain of problem solving. At an abstract level of description, problem solving tasks can readily be formalized in terms of symbolic representations and operations. However, the neurobiological hardware that underlies human cognition appears to be subsymbolic: representations are noisy and graded, and the brain operates and adapts in a continuous fashion that is difficult to characterize in discrete symbolic terms. At some level, between the computational level of the task description and the implementation level of human neurobiology, the symbolic and subsymbolic accounts must come into contact with one another. We focus on this point of contact by proposing mechanisms by which symbolic representations can modulate subsymbolic processing, and mechanisms by which subsymbolic representations are made symbolic. We conjecture that these mechanisms can not only provide an account for the interplay of symbolic and subsymbolic processes in cognition, but that they form a sensible computational strategy that outperforms purely subsymbolic computation, and hence, symbolic reasoning makes sense from an evolutionary perspective. In this paper, we apply our approach to a high-level cognitive task, anagram problem solving. An anagram is a nonsense string of letters whose letters can be rearranged to form a word. For example, the solution to the anagram puzzle RYTEHO is THEORY. Anagram solving is an interesting task because it taps higher cognitive abilities and issues of awareness, it has a tractable state space, and interesting psychological data is available to model.
1 A Subsymbolic Computational Model

We start by presenting a purely subsymbolic model of anagram processing. By subsymbolic, we mean that the model utilizes only English orthographic statistics and does not have access to an English lexicon. We will argue that this model proves insufficient to explain human performance on anagram problem solving. However, it is a key component of the hybrid symbolic-subsymbolic model we propose, and is thus described in detail.

1.1 Problem Representation

A computational model of anagram processing must represent letter orderings. For example, the model must be capable of representing a solution such as <THEORY>, or any permutation of the letters such as <RYTEHO>. (The symbols "<" and ">" will be used to delimit the beginning and end of a string, respectively.) We adopted a representation of letter strings in which a string is encoded by the set of letter pairs (hereafter, bigrams) contained in the string; for example, the bigrams in <THEORY> are: <T, TH, HE, EO, OR, RY, and Y>. The delimiters < and > are treated as ordinary symbols of the alphabet. We capture letter pairings in a symbolic letter-ordering matrix, or symbolic ordering for short. Figure 1(a) shows the matrix, in which the rows indicate the first letter of the bigram, and the columns indicate the second. A cell of the matrix contains a value of 1 if the corresponding bigram is present in the string. (This matrix formalism and all procedures in the paper can be extended to handle strings with repeated letters, which we do not have space to discuss.) The matrix columns and rows can be thought of as consisting of all letters from A to Z, along with the delimiters < and >. However, in the Figure we have omitted rows and columns corresponding to letters not present in the anagram. Similarly, we have omitted the < from the column space and the > from the row space, as they could not by definition be part of any bigram.
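The bigram-matrix representation described above can be sketched as follows. This is a minimal illustration under our own naming conventions (the 28-symbol alphabet ordering and the `symbolic_ordering` helper are not from the paper):

```python
import numpy as np

# 26 letters plus the '<' and '>' delimiters (N = 28 symbols).
ALPHABET = "<>" + "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
IDX = {ch: i for i, ch in enumerate(ALPHABET)}

def symbolic_ordering(word):
    """Binary matrix whose (i, j) cell is 1 iff bigram ij occurs in <word>."""
    s = "<" + word + ">"
    M = np.zeros((len(ALPHABET), len(ALPHABET)), dtype=int)
    for a, b in zip(s, s[1:]):
        M[IDX[a], IDX[b]] = 1
    return M

M = symbolic_ordering("THEORY")
# <THEORY> contains exactly seven bigrams: <T, TH, HE, EO, OR, RY, Y>
assert M.sum() == 7 and M[IDX["T"], IDX["H"]] == 1
```

Restricting such a matrix to the rows and columns of the letters actually present recovers the reduced matrix shown in Figure 1(a).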
The seven bigrams indicated by the seven ones in the Figure uniquely specify the string THEORY. As we've described the matrix, cells contain the truth value of the proposition that a particular bigram appears in the string being represented. However, the cell values have an interesting alternative interpretation: as the probability that a particular bigram is present. Figure 1(b) illustrates a matrix of this sort, which we call a subsymbolic letter-ordering matrix, or subsymbolic ordering for short. In the Figure, the bigram TH occurs with probability 0.8. Although the symbolic orderings are obviously a subset of the subsymbolic orderings, the two representations play critically disparate roles in our model, and thus are treated as separate entities. To formally characterize symbolic and subsymbolic ordering matrices, we define a mask vector, μ, having N = 28 elements, corresponding to the 26 letters of the alphabet plus the two delimiters. Element i of the mask, μ_i, is set to one if the corresponding letter appears in the anagram string and zero if it does not. In both the symbolic and subsymbolic orderings, the matrices are constrained such that elements in row i and column i must sum to μ_i.

Figure 1: (a) A symbolic letter-ordering matrix for the string THEORY. (b) A subsymbolic letter-ordering matrix whose cells indicate the probabilities that particular bigrams are present in a letter string. (c) A symbolic partial letter-ordering matrix, formed from the symbolic ordering matrix by setting to zero a subset of the elements, which are highlighted in grey. The resulting matrix represents the partial ordering { <TH, RY }.

If one extracts all rows and columns for which μ_i = 1 from a symbolic ordering, as we have done in Figure 1(a), a permutation matrix is obtained. If one extracts all rows and columns for which μ_i = 1 from a subsymbolic ordering, as we have done in Figure 1(b), the resulting matrix is known as doubly stochastic, because each row and column vector can be interpreted as a probability distribution.

1.2 Constraint Satisfaction Network

A simple computational model can be conceptualized by considering each cell in the subsymbolic ordering matrix to correspond to a standard connectionist unit, and each cell value to be the activity level of the unit. In this conceptualization, the goal of the connectionist network is to obtain a pattern of activity corresponding to the solution word, given the anagram. We wish for the model to rely solely on orthographic statistics of English, avoiding lexical knowledge at this stage. Our premise is that an interactive model (a model that allows top-down lexical knowledge to come in contact with bottom-up information about the anagram) would be too powerful; i.e., the model would be superhuman in its ability to identify lexical entries containing a target set of letters. Instead, we conjecture that a suitable model of human performance should be primarily bottom-up, attempting to order letters without the benefit of the lexicon. Of course, the task cannot be performed without a lexicon, but we defer discussion of the role of the lexicon until we first present the core connectionist component of the model. The connectionist model is driven by three constraints: (1) solutions should contain bigrams with high frequency in English, (2) solutions should contain trigrams with high frequency in English, and (3) solutions should contain bigrams that are consistent with the bigrams in the original anagram.
The first two constraints attempt to obtain English-like strings. The third constraint is motivated by the observation that anagram solution time depends on the arrangement of letters in the original anagram (e.g., Mayzner & Tresselt, 1959). The three constraints are embodied by a constraint-satisfaction network with the following harmony function:

H = \sum_{ij} \beta_{ij} p_{ij} + \omega \sum_{ijk} \tau_{ijk} p_{ij} p_{jk} + \sigma \sum_{ij} p_{ij} s_{ij}    (1)

where p_{ij} denotes the value of the cell corresponding to bigram ij, \beta_{ij} is monotonically related to the frequency of bigram ij in English, \tau_{ijk} is monotonically related to the frequency of trigram ijk in English, s_{ij} is 1 if the original anagram contained bigram ij and 0 otherwise, and \omega and \sigma are model parameters that specify the relative weighting of the trigram and unchanged-ordering constraints, respectively.

Figure 2: The iterative extraction-injection model, comprising the anagram constraint-satisfaction network, the subsymbolic extraction process, the extracted symbolic matrix, and lexical verification.

The harmony function specifies a measure of goodness of a given matrix in terms of the degree to which the three sets of constraints are satisfied. Running the connectionist network corresponds to searching for a local optimum in the harmony function. The local optimum can be found by gradient ascent, i.e., defining a unit-update rule that moves uphill in harmony. Such a rule can be obtained via the derivative of the harmony function: \Delta p_{ij} = \epsilon \, \partial H / \partial p_{ij}. Although the update rule ensures that harmony will increase over time, the network state may violate the conditions of the doubly stochastic matrix by allowing the p_{ij} to take on values outside of [0, 1], or by failing to satisfy the row and column constraints.
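One relaxation update of the harmony function of Equation 1, followed by the row/column renormalization discussed in the text, can be sketched as below. This is our own sketch, not the authors' code: parameter names are ours, and for simplicity the renormalization targets row and column sums of 1 on a fully active submatrix rather than the mask values.

```python
import numpy as np

def harmony(p, beta, tau, s, w, sigma):
    """H = sum beta_ij p_ij + w sum tau_ijk p_ij p_jk + sigma sum p_ij s_ij."""
    return (np.sum(beta * p)
            + w * np.einsum('ijk,ij,jk->', tau, p, p)
            + sigma * np.sum(p * s))

def harmony_gradient(p, beta, tau, s, w, sigma):
    """dH/dp_ij, used for the uphill update: delta p_ij = eps * dH/dp_ij."""
    return (beta
            + w * (np.einsum('ijk,jk->ij', tau, p)      # trigram term via p_ij slot
                   + np.einsum('kij,ki->ij', tau, p))   # trigram term via p_jk slot
            + sigma * s)

def sinkhorn(p, n_iter=200):
    """Alternating row/column normalization (Sinkhorn, 1964)."""
    p = np.clip(p, 1e-12, None)
    for _ in range(n_iter):
        p = p / p.sum(axis=1, keepdims=True)
        p = p / p.sum(axis=0, keepdims=True)
    return p

def relax_step(p, beta, tau, s, w=0.5, sigma=0.5, eps=0.01):
    """One gradient-ascent step, projected back toward doubly stochastic."""
    return sinkhorn(p + eps * harmony_gradient(p, beta, tau, s, w, sigma))
```

In the full model the normalization would instead target the mask values μ_i, and the Sinkhorn passes would run at a much finer time grain than the harmony updates, as the paper notes.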
The procedure applied to enforce the row and column constraints involves renormalizing the activities after each harmony update to bring the activity pattern arbitrarily close to a doubly stochastic matrix. The procedure, suggested by Sinkhorn (1964), involves alternating row and column normalizations (in our case, to the values of the mask vector). Sinkhorn proved that this procedure will asymptotically converge on a doubly stochastic matrix. Note that the Sinkhorn normalization procedure must operate at a much finer time grain than the harmony updates, in order to ensure that the updates do not cause the state to wander from the space of doubly stochastic matrices.

2 The Iterative Extraction-Injection Model

The constraint-satisfaction network we just described is inadequate as a model of human anagram problem solving for two principal reasons. First, the network output generally does not correspond to a symbolic ordering, and hence has no immediate interpretation as a letter string. Second, the network has no access to a lexicon, so it cannot possibly determine if a candidate solution is a word. These two concerns are handled by introducing additional processing components to the model. The components (called extraction, verification, and injection) bring subsymbolic representations of the constraint-satisfaction network into contact with the symbolic realm. The extraction component converts a subsymbolic ordering (the output of the constraint-satisfaction network) into a symbolic ordering. This symbolic ordering serves as a candidate solution to the anagram. The verification component queries the lexicon to retrieve words that match or are very close to the candidate solution.
If no lexical item is retrieved that can serve as a solution, the injection component feeds the candidate solution back into the constraint-satisfaction network in the form of a bias on subsequent processing, in exactly the same way that the original anagram did on the first iteration of constraint satisfaction. Figure 2 shows a high-level sketch of the complete model.

The intuition behind this architecture is as follows. The symbolic ordering extracted on one iteration will serve to constrain the model's interpretation of the anagram on the next iteration. Consequently, the feedback forces the model down one path in a solution tree. When viewed from a high level, the model steps through a sequence of symbolic states. The transitions among symbolic states, however, are driven by the subsymbolic constraint-satisfaction network. To reflect the importance of the interplay between symbolic and subsymbolic processing, we call the architecture the iterative extraction-injection model.

Before describing the extraction, verification, and injection components in detail, we emphasize one point about the role of the lexicon. The model makes a strong claim about the sort of knowledge used to guide the solution of anagrams. Lexical knowledge is used only for verification, not for generation of candidate solutions. The limited use of the lexicon restricts the computational capabilities of the model, but in a way that we conjecture corresponds to human limitations.

2.1 Symbolic Extraction

The extraction component transforms the subsymbolic ordering matrix to an approximately equivalent symbolic ordering matrix. In essence, the extraction component treats the network activities as probabilities that pairs of letters will be joined, and samples a symbolic matrix from this probability distribution, subject to the restriction that each letter can precede or follow at most one other letter.
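The paper samples partial orderings by MCMC; a much simpler greedy stand-in (our own simplification, not the authors' procedure) conveys the idea of committing only to confident bigrams while respecting the one-predecessor/one-successor restriction:

```python
import numpy as np

def extract_partial(p, threshold=0.5):
    """Greedily commit high-probability bigrams; block each used letter's row
    (successor slot) and column (predecessor slot) so every letter precedes and
    follows at most one other.  Cells at or below `threshold` are left
    unspecified, yielding a partial ordering rather than a full one."""
    p = p.copy()
    chosen = set()
    while True:
        i, j = np.unravel_index(np.argmax(p), p.shape)
        if p[i, j] <= threshold:
            break
        chosen.add((i, j))
        p[i, :] = 0.0
        p[:, j] = 0.0
    return chosen
```

Rows or columns that are close to uniform never rise above the threshold, so their letter pairings remain unspecified, which matches the motivation for partial orderings given in the text.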
If subsymbolic matrix element p_ij has a value close to 1, then it is clear that bigram ij should be included in the symbolic ordering. However, if a row or column of a subsymbolic ordering matrix is close to uniform, the selection of a bigram in that row or column will be somewhat arbitrary. Consequently, we endow the model with the ability to select only some bigrams and leave other letter pairings unspecified. Thus, we allow the extraction component to consider symbolic partial orderings, i.e., a subset of the letter pairings in a complete ordering. For example, { <TH, RY } is a partial ordering that specifies that the T and H belong together in sequence at the beginning of the word, and the R should precede the Y, but does not specify the relation of these letter clusters to one another or to other letters of the anagram. Formally, a symbolic partial ordering matrix is a binary matrix in which the rows and columns sum to values less than or equal to the corresponding mask value. A symbolic partial ordering can be formed by setting to zero some elements of a symbolic ordering (Figure 1(c)). In the context of this task, a subsymbolic ordering is best viewed as a set of parameters specifying a distribution over a space P of all possible symbolic partial ordering matrices. Rather than explicitly generating and assigning probabilities to each element in P, our approach samples from the distribution specified by the subsymbolic ordering using Markov Chain Monte Carlo (Neal, 1993). Our MCMC method obtains samples consistent with the bigram probabilities p_ij and the row and column constraints μ_i.

2.2 Lexical Verification

Lexical verification involves consulting the lexicon to identify and validate candidate solutions. The extracted symbolic partial ordering is fed into the lexical verification component to identify a set of words, each of which is consistent with the partial ordering.
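A minimal sketch of lexical verification against a toy word list follows; the recall cap here plays the role of the parameter η discussed in the text, and the helper names, toy lexicon, and default cap value are our own assumptions:

```python
import random

def bigrams(word):
    """The set of bigrams in <word>, delimiters included."""
    s = "<" + word + ">"
    return set(zip(s, s[1:]))

def consistent_words(partial, lexicon, eta=20, rng=random.Random(0)):
    """Words containing every bigram of the partial ordering; if more than
    eta match, a random sample of size eta is recalled."""
    matches = [w for w in lexicon if partial <= bigrams(w)]
    return matches if len(matches) <= eta else rng.sample(matches, eta)

def verify(partial, anagram, lexicon, eta=20):
    """Return a recalled word that uses exactly the anagram's letters, else None."""
    for w in consistent_words(partial, lexicon, eta):
        if sorted(w) == sorted(anagram):
            return w
    return None

lexicon = ["THEORY", "OTHER", "THERE"]           # toy lexicon
partial = {("<", "T"), ("T", "H"), ("R", "Y")}   # the partial ordering { <TH, RY }
assert verify(partial, "RYTEHO", lexicon) == "THEORY"
```

On failure, `verify` returns None, and the partial ordering would be injected back into the network for another iteration.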
By consistent, we mean the word contains all of the bigrams in the partial ordering. This set of words is then checked to see if any word contains the same letters as the anagram. If so, the lexical verifier returns that the problem is solved; otherwise, the lexical verifier indicates failure. Because the list of consistent words can be extremely large, and recalling and processing a large number of candidates seems implausible, we limit the size of the consistent set by introducing a recall parameter η that controls the maximum size of the consistent set. If the actual number of consistent words is larger, a random sample of size η is retrieved.

Figure 3: (a) Probability of finding the solution for different word lengths (3-7) as a function of the number of iterations. (b) Convergence of the extraction-injection model and variants of the feedback mechanism (no feedback, random feedback, and continuous feedback).

2.3 Injection

When the lexical verification component fails, the symbolic partial ordering is injected into the constraint-satisfaction network, replacing the letter ordering of the original anagram, and a new processing iteration begins. Were it not for the new bias injected into it, the constraint-satisfaction network would produce the same output as on the previous iteration, and the model would likely become stuck without finding a solution. In our experiments, we show that injecting the symbolic partial ordering allows the model to arrive at a solution more rapidly than other sorts of feedback.

3 Results and Discussion

Through simulation of our architecture we modeled several basic findings concerning human anagram problem solving.
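Putting the components of Section 2 together, the control structure of the iterative extraction-injection model can be sketched as a loop. The `relax`, `extract`, and `verify` callables stand in for the constraint-satisfaction network, the extraction step, and lexical verification described above; their signatures are hypothetical:

```python
def bigram_set(word):
    s = "<" + word + ">"
    return set(zip(s, s[1:]))

def solve_anagram(anagram, relax, extract, verify, max_iters=100):
    """One pass per iteration: relax -> extract -> verify; on failure the
    extracted partial ordering is injected as the next iteration's bias."""
    bias = bigram_set(anagram)            # initial bias: the anagram's own bigrams
    for t in range(1, max_iters + 1):
        p = relax(bias)                   # subsymbolic ordering from the network
        partial = extract(p)              # candidate symbolic partial ordering
        word = verify(partial, anagram)   # lexical verification
        if word is not None:
            return word, t                # solution time = number of iterations
        bias = partial                    # injection
    return None, max_iters
```

The returned iteration count corresponds to the model solution time used in the simulations that follow.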
In our simulations, we define the model solution time to be the number of extraction-injection iterations before the solution word is identified. Figure 3(a) shows the probability of the model finding a solution as a function of the number of iterations the model is allowed to run and the number of letters in the word. The data set consists of 40 examples for each of five different word lengths. The most striking result is that the probability of finding a solution increases monotonically over time. It is also interesting to note that the model's asymptotic accuracy is 100%, indicating that the model is computationally sufficient to perform the task. Of more significance is the fact that the model exhibits the word length effect reported in Sargent (1940): that is, longer words take more time to solve. Our model can explain other experimental results on anagram problem solving. Mayzner and Tresselt (1958) found that subjects were faster to find solutions composed of high-frequency bigrams than solutions composed of low-frequency bigrams. For example, SHIN contains higher frequency bigrams than HYMN. The iterative extraction-injection model reproduced this effect in the solution times for two classes of five five-letter words. Each word was presented 30 times to obtain a distribution of solution times. A mean of 5.3 iterations was required for solutions composed of high-frequency bigrams, compared to a mean of 21.2 iterations for solutions composed of low-frequency bigrams. The difference is statistically reliable (F(1, 8) = 30.3, p < .001). It is not surprising that the model produces this result, as the constraint-satisfaction network attempts to generate high-frequency pairings of letters. Mayzner and Tresselt (1959) found that subjects also are faster to solve an anagram if the anagram is composed of low-frequency bigrams. For example, RCDA might be recognized as CARD more readily than would DACR. Our model reproduces this result as well.
We tested the model with 25 four-letter target words whose letters could be rearranged to form anagrams with either low or high bigram frequency; each target word was presented 30 times. The mean solution time for low bigram-frequency anagrams was 21.4 iterations, versus 27.6 for high bigram-frequency anagrams. This difference is statistically reliable (F(1, 24) = 41.4, p < .001). The difference is explained by the model's initial bias to search for solutions containing bigrams in the anagram, plus the fact that the model has a harder time pulling apart bigrams with high frequency. Simulation results to date have focused on the computational properties of the model, with the goal of showing that the iterative extraction-injection process leads to efficient solution times. The experiments involve testing the performance of models with some aspect of the iterative extraction-injection model modified. Three such variants were tested: (1) the feedback connection was removed, (2) random symbolic partial orderings were fed back, and (3) subsymbolic partial orderings were fed back. The experiment consisted of 125 words taken from the Kucera and Francis (1967) corpus, which was also used for bigram and trigram frequencies. The median of 25 solution times for each word/model was used to compute the mean solution times for the original, no-feedback, random-feedback, and continuous-feedback models: 13.43, 41.88, 74.91, and 43.17 iterations, respectively. The key result is that the iterative extraction-injection model was reliably 3-5 times faster than the variants; the respective F(1, 124) scores were 87.8, 154.3, and 99.1 (all p < .001). Figure 3(b) shows the probability that each of these four models found the solution at a given time. Although our investigation of this architecture is just beginning, we have shown that the model can explain some fundamental behavioral data, and that surprising computational power arises from the interplay of symbolic and subsymbolic information processing.
Acknowledgments

This work benefited from the initial explorations and ideas of Tor Mohling. This research was supported by Grant 97-18 from the McDonnell-Pew Program in Cognitive Neuroscience, and by NSF award IBN-9873492.

References

Kucera, H. & Francis, W. N. (1967). Computational analysis of present-day American English. Providence, RI: Brown University Press.

Mayzner, M. S. & Tresselt, M. E. (1958). Anagram solution times: A function of letter and word frequency. Journal of Experimental Psychology, 56, 376-379.

Mayzner, M. S. & Tresselt, M. E. (1959). Anagram solution times: A function of transitional probabilities. Journal of Experimental Psychology, 63, 510-513.

Neal, R. M. (1993). Probabilistic inference using Markov Chain Monte Carlo methods. Technical Report CRG-TR-93-1, Dept. of Computer Science, University of Toronto.

Sargent, S. Stansfeld (1940). Thinking processes at various levels of difficulty. Archives of Psychology, 249. New York.

Sinkhorn, Richard (1964). A relationship between arbitrary positive matrices and doubly stochastic matrices. Annals of Mathematical Statistics, 35(2), 876-879.
2000
Using the Nyström Method to Speed Up Kernel Machines

Christopher K. I. Williams and Matthias Seeger
Institute for Adaptive and Neural Computation, Division of Informatics, University of Edinburgh

Abstract

A major problem for kernel-based predictors (such as Support Vector Machines and Gaussian processes) is that the amount of computation required to find the solution scales as O(n^3), where n is the number of training examples. We show that an approximation to the eigendecomposition of the Gram matrix can be computed by the Nyström method (which is used for the numerical solution of eigenproblems). This is achieved by carrying out an eigendecomposition on a smaller system of size m < n, and then expanding the results back up to n dimensions. The computational complexity of a predictor using this approximation is O(m^2 n). We report experiments on the USPS and abalone data sets and show that we can set m << n without any significant decrease in the accuracy of the solution.

[The body of the paper is unrecoverable from this scan. Legible fragments indicate that it develops the reduced-rank approximation \tilde{G} = G_{n,m} G_{m,m}^{-1} G_{m,n}, obtained by randomly choosing m rows/columns of the Gram matrix G, relates it to the work of Frieze et al. and of Smola and Schölkopf (2000), applies it to Gaussian process regression, and reports the experiments on the USPS and abalone data sets.]
0 A  A?e!$F   I  :Q>fA?[>M>WA>G . 0=P>MSCA>g!W #h?BA>f:CA!WF 7 ;BW . 0 N O>$ ` ;KA  R X . 0 !WFi!GF       # 465 "!$! . 0 "! . 037*7 A  S I !W#U?J>MA  !WF<*SC . 0 >W#%# . 0 1 a>M C7      `   `   `   `   >  R . 0&ai. 029j: ;   `  ; A R X[. 0 ! F 2A ? ! F k#,. 0 1l >  7    #i!GF  465 "!$! . 0="! . 0 7%7 A  S I !W#2?B>MA  !$F  SQ . 0>M#Z# . 0 1 O>W Q7  Zm %g!WFQ!f!WF8    SQ>g?BA>   C7  . 0 *;=X[  [:CO!g!$a>*?BA>d!MF<  8"!W>W\ n7 ;L M . 0 N O> fo  4"pqpGrMsqt b b    ;K; N &  uwv >MgI ;=!$>Mf . 0 R 8. 0 Nl7   !d ![!GF  ,xOy ;KO& ;   A!Gf;KgA!MFC![!$F f&> . 0  C7 D# I P!$Ak# . 0 1 >W  ! M  S8;=jA?z . 0_  . 0 E  ;=;=>j?{A >|!WF<   "!W>W\ }7 ;L M . 0 N O>j!WFQ  ?{A >|!WF<~?{I ;K; (*) 7 ;L M . 0 N O>          (*) T   V            €. 0 R T   V                      `                T   `   V         ,         `                 9q: ;     ASC > . 0gA+A ? ! F D >W> A >  >  !W[A ? ! F  ( ) T     V$` €. 0 R T   V$` [ T   `    Va`   T  `   V   #   T  `  V 7 ;L M . 0 N O>GE?BA>~!GF   # . 0 1 a>M  ![!Og‚   ƒ„†…~‡ˆ‰ ŠK‹Z‡'Œl%Ž  9 F<f!$F . 0 >$#YO SC> . 0    ! 7 A   . 0 # >G!MF<>MgI<;!$PA:,!O . 0  #h?{A>d!WF8!G  # . 0 1l > O,!!$  ‚ fA ? #,. 0= 7 > . 0. 0 C!M. 0  R A<+# . 0 R . 0 ! ?J> A ‘ ;=; A ? ! F %A! F  >  “’ ”9q: ;   X• R. 0 &!WF<  I  :CO>fA?>W>MA >$?{A >*?JAI<>f# . 0 1 >W  !fS<>W# . 0 7 !WA>$ T  V (D) T   V$` !GF U?JI<;=; (*) 7 ;– M . 0 N > — TB: V €. 0 R T  V ` !$F  (*) S >W# . 07 !MA >[IQ . 0 <R   a8 7 ![ . 0 R   #  7 A  SQA' . 0 ! . 0 A  A?q!GF  ?{I ;K;˜  !W> . 0 ”  #“>WO!G . 0 . 0 R+™ b b    . 0R   &;=I8%T 7 V  šT  `  V !WF<   "!G>M\  S8>W# . 07 !WA>IC . 0  R ™ b b   ` b b    T›# V  T   `   V   #œT{ V  T   `   V  9 F  > gI ; !$. 0 HA'"! 7  @ > %&  > “g. 0 . 0 ;  >  [A!W. 0 7 %! F !U! F  > gI ; !$ ? A > ž. 0 R T   V <# [ T   `   V  > [! F ŸM^ EA > :QO!g!W > ! F  ( ) T     V ? A >  AI'!/A ? ! F    7 g ` ! F  >MgI ;=!$jA?  zT  `   V >WE!WF8W  [ qA>€:QG!M!W>q!$F<  (D) T   V . 0 AI,!€A?  7 g   #H!WF<^>WgI<;!O€A?  lT  `  V >W[!$F *W  P€A>~:CO!g!W>~!MFC  (*) T   V . 
0  AI !eA?  7 g  m dFQ& *;K@A . 0  S<;=    !W#   ,I<; 5 X~ . 0 R F ! 7 A H. 0 !g!W^A?qI<SU!WA   zT ™ ` V S<>M# . 07 5 !MA >$€?{A >[  ;=; ™   # ` :<I ![FC&  A!e?{AI  # . 0  S<>WA &     !$€A&>~f . 0 <R ;=dS<>M# . 0=7 !GA>•A? !MF<DW  Ÿ . 0 _   ¡†¢L¡ £ ‡,¤<ˆ‡'¥ ¥‰ ŠK¦QŒ“ƒ„ … ‡ˆ‰ ŠK‹Z‡ ŒC¥ m F<&[;–gAd!$"!G#D!$F    "!M>W\ § $!GF A8#fA  !$F ¨©<¨ ª,«¬<­^>W R >WW . 0 A  S<>MA :8;=  ` !G‚   ?{>WA  !MF<[® •’ >WOSQA' . 0 !WA> ¯ ° °±~²³G´,´µ µ,µ~³O¶'·,¸j³6¹l·,¶q³ ­,º ¹C´ »O¼ ª ­ ½¾¬ ´ ¿ ÀÁ ­ ± « ¸,¶° « Â,à ³"¯,° ¼ ª `  zI<g #d: *Ä'HA ; [C#*Ä 7 F \ ;=‚ AS ?šT    V  9 F žS > A: ;  F  . 0 S<I !q& > . 0=: ;  T  7  ; #d!MA _ O>MA      #I  . 0 !&> . 0 C7  V$`   P!$>$ . 0 <. 0 <R O<  S<;=[  #   !$"!O<  S<;= jm          ! "#$ ! %&'&$ ()*%+%,* !-+.!#,/0+1-+"3254 687 /119 )+  :'*    ;<3! 7 +/$ ! -3=>%-+?3<   3@2A-+CBED DGF@H+HH3IKJH+H   L J+H3I ?33%"#2M- #/N3O/$P  QR  S T 31U!2A-+BVD DWF L J 4 X Y[Z \^]`_baC]*]*Z \dcfe g !#h+!i ?#-Oh9 @  TjU35$0 k ""7   O  l-+;/ i 11-m h+ U& l)3 (n/ + &"#*o53"- 2p<3/- :"*'O   -+# OR  @ 2A-+  9qr/P+   nQ%   -+3C/   nb/  ) +/$o /3</U+4s>  C&"#@*o^";/- :C ?-*%2A -+ G     )+%#@K @   7   )+3" -+?1 >-2 3  t  *93  !#  -+#   .O @;?/ 9  U  T 1u "3"7   @<  -+#%-2v! l) '3o 2  /<    R*@    L 4xw% +3h   < )+y2    '  -3  l9  @  (  1  -  @/<@   Uz/-+ !"33 -+ 5<-+  <3-+1; C< { @   7 I ?*|-Q1U}zBE~i€"-&$ (-+}- 2   4€-+ "-?3‚' Tƒ   <3-+# #3f- 2b 7  !"3Pfv@h+-@#@u)-*-"2„-+ T /$%/!?# -+?3$  @T  3)h 1@u- 2BV-2…-Q1Uy 2.†33#3 b4ƒwKv‡  #/< @+@ I vKv-+PT 7 "#@/<  @ QS<@  -!Bˆ@i/-‚y?S T*9h&  ‚14vs>3 "Q" 7   T@   -|  1  + U <-!?# "   /P  Uz) -*-3;2„-+9.T‰.1  +! 
Acknowledgements

We thank Amos Storkey for helpful discussions and the anonymous NIPS referees who helped improve this paper. MS gratefully acknowledges support through a research studentship from Microsoft Research Ltd.

References

Baker, C. T. H. (1977). The numerical treatment of integral equations. Oxford: Clarendon Press.
Fine, S., & Scheinberg, K. (2000). Efficient SVM training using low-rank kernel representations. Research Report RC 21911, IBM T. J. Watson Research Center.
Frieze, A., Kannan, R., & Vempala, S. (1998). Fast Monte-Carlo algorithms for finding low-rank approximations. In Proceedings of the 39th Annual Symposium on the Foundations of Computer Science (pp. 370-378).
Neal, R. M. (1998). Regression and classification using Gaussian process priors (with discussion). In J. M. Bernardo et al. (Eds.), Bayesian Statistics 6 (pp. 475-501). Oxford University Press.
Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. (1992). Numerical Recipes in C. Cambridge University Press. Second edition.
Schölkopf, B., Mika, S., Burges, C. J. C., et al. (1999). Input space vs feature space in kernel-based methods. IEEE Transactions on Neural Networks, 10(5), 1000-1017.
Schölkopf, B., Smola, A., & Müller, K.-R. (1998). Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10, 1299-1319.
Smola, A. J., & Schölkopf, B. (2000). Sparse greedy matrix approximation for machine learning. In Proceedings of the Seventeenth International Conference on Machine Learning. Morgan Kaufmann.
Vapnik, V. N. (1995). The nature of statistical learning theory. New York: Springer Verlag.
Wahba, G. (1990). Spline models for observational data. Philadelphia, PA: Society for Industrial and Applied Mathematics. CBMS-NSF Regional Conference series in applied mathematics.
Williams, C. K. I., & Barber, D. (1998). Bayesian classification with Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12), 1342-1351.
2000
23
1,821
Sex with Support Vector Machines Baback Moghaddam Mitsubishi Electric Research Laboratory Cambridge MA 02139, USA baback@merl.com Ming-Hsuan Yang University of Illinois at Urbana-Champaign Urbana, IL 61801 USA mhyang@vision.ai.uiuc.edu Abstract Nonlinear Support Vector Machines (SVMs) are investigated for visual sex classification with low resolution "thumbnail" faces (21-by-12 pixels) processed from 1,755 images from the FERET face database. The performance of SVMs is shown to be superior to traditional pattern classifiers (Linear, Quadratic, Fisher Linear Discriminant, Nearest-Neighbor) as well as more modern techniques such as Radial Basis Function (RBF) classifiers and large ensemble-RBF networks. Furthermore, the SVM performance (3.4% error) is currently the best result reported in the open literature.

1 Introduction

In recent years, SVMs have been successfully applied to various tasks in computational face-processing. These include face detection [14], face pose discrimination [12] and face recognition [16]. Although facial sex classification has attracted much attention in the psychological literature [1, 4, 8, 15], relatively few computational learning methods have been proposed. We will briefly review and summarize the prior art in facial sex classification.1 Golomb et al. [10] trained a fully connected two-layer neural network, SEXNET, to identify sex from 30-by-30 face images. Their experiments on a set of 90 photos (45 males and 45 females) gave an average error rate of 8.1% compared to an average error rate of 11.6% from a study of five human subjects. Brunelli and Poggio [2]

1Sex classification is also referred to as gender classification (for political correctness). However, given the two distinct biological classes, the scientifically correct term is sex classification. Gender often denotes a fuzzy continuum of feminine ↔ masculine [1].
Figure 1: Sex classifier

developed HyperBF networks for sex classification in which two competing RBF networks, one for male and the other for female, were trained using 16 geometric features as inputs (e.g., pupil to eyebrow separation, eyebrow thickness, and nose width). The results on a data set of 168 images (21 males and 21 females) show an average error rate of 21%. Using similar techniques as Golomb et al. [10] and Cottrell and Metcalfe [6], Tamura et al. [18] used multi-layer neural networks to classify sex from face images at multiple resolutions (from 32-by-32 to 8-by-8 pixels). Their experiments on 30 test images show that their network was able to determine sex from 8-by-8 images with an average error rate of 7%. Instead of using a vector of gray levels to represent faces, Wiskott et al. [20] used labeled graphs of two-dimensional views to describe faces. The nodes were represented by wavelet-based local "jets" and the edges were labeled with distance vectors similar to the geometric features in [3]. They used a small set of controlled model graphs of males and females to encode "general face knowledge," in order to generate graphs of new faces by elastic graph matching. For each new face, a composite reconstruction was generated using the nodes in the model graphs. The sex of the majority of nodes used in the composite graph was used for classification. The error rate of their experiments on a gallery of 112 face images was 9.8%. Recently, Gutta et al. [11] proposed a hybrid classifier based on neural networks (RBFs) and inductive decision trees with Quinlan's C4.5 algorithm. Experiments with 3000 FERET faces of size 64-by-72 pixels yielded an error rate of 4%.

2 Sex Classifiers

A generic sex classifier is shown in Figure 1. An input facial image x generates a scalar output f(x) whose polarity - sign of f(x) - determines class membership.
The magnitude |f(x)| can usually be interpreted as a measure of belief or certainty in the decision made. Nearly all binary classifiers can be viewed in these terms; for density-based classifiers (Linear, Quadratic and Fisher) the output function f(x) is a log likelihood ratio, whereas for kernel-based classifiers (Nearest-Neighbor, RBFs and SVMs) the output is a "potential field" related to the distance from the separating boundary.

2.1 Support Vector Machines

A Support Vector Machine is a learning algorithm for pattern classification and regression [19, 5]. The basic training principle behind SVMs is finding the optimal linear hyperplane such that the expected classification error for unseen test samples is minimized, i.e., good generalization performance. According to the structural risk minimization inductive principle [19], a function that classifies the training data accurately and which belongs to a set of functions with the lowest VC dimension [5] will generalize best regardless of the dimensionality of the input space. Based on this principle, a linear SVM uses a systematic approach to finding a class of functions with the lowest VC dimension. For linearly non-separable data, SVMs can (nonlinearly) map the input to a high dimensional feature space where a linear hyperplane can be found. Although there is no guarantee that a linear separable solution will always exist in the high dimensional space, in practice it is quite feasible to construct a working solution. Given a labeled set of M training samples (x_i, y_i), where x_i ∈ R^N and y_i is the associated label (y_i ∈ {-1, 1}), a SVM classifier finds the optimal hyperplane that correctly separates (classifies) the data points while maximizing the distance of either class from the hyperplane (the margin). Vapnik [19] shows that maximizing the margin distance is equivalent to minimizing the VC dimension in constructing an optimal hyperplane.
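The linear case can be made concrete with a short sketch (not from the paper): a decision value f(x) = w·x + b whose sign gives the class, and the geometric margin min_i y_i f(x_i)/||w|| that the SVM maximizes over the training set. The hyperplane (w, b) and the samples below are illustrative assumptions, not learned values.

```python
# A minimal sketch (not the paper's implementation) of the quantities above:
# a linear decision value f(x) = w.x + b whose sign gives the class, and the
# geometric margin y*f(x)/||w|| that the SVM maximizes over the training set.
# The hyperplane (w, b) and the samples are illustrative assumptions.
import math

def f(w, b, x):
    """Decision value; its sign is the predicted class (+1 or -1)."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def geometric_margin(w, b, samples):
    """Smallest signed distance of any labeled sample (x, y) to the hyperplane."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    return min(y * f(w, b, x) / norm for x, y in samples)

w, b = [2.0, 0.0], -1.0                         # hypothetical hyperplane
samples = [([1.5, 0.3], +1), ([0.2, 0.9], -1)]  # hypothetical labeled data
print(f(w, b, samples[0][0]))                   # 2.0, so class +1
print(geometric_margin(w, b, samples))          # 0.3
```

Maximizing this margin over (w, b) subject to correct classification is the quadratic program the next paragraph refers to.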
Computing the best hyperplane is posed as a constrained optimization problem and solved using quadratic programming techniques. The discriminant hyperplane is defined by the level set of

f(x) = Σ_{i=1..M} y_i α_i k(x, x_i) + b

where k(·, ·) is a kernel function and the sign of f(x) determines the membership of x. Constructing an optimal hyperplane is equivalent to finding all the nonzero α_i. Any vector x_i that corresponds to a nonzero α_i is a support vector (SV) of the optimal hyperplane. A desirable feature of SVMs is that the number of training points which are retained as support vectors is usually quite small, thus providing a compact classifier. For a linear SVM, the kernel function is just a simple dot product in the input space while the kernel function in a nonlinear SVM effectively projects the samples to a feature space of higher (possibly infinite) dimension via a nonlinear mapping function Φ : R^N → F^M, M ≫ N, and then constructs a hyperplane in F. The motivation behind this mapping is that it makes possible a larger class of discriminative functions with which to find a linear hyperplane in the high dimensional feature space. Using Mercer's theorem [7], the expensive calculations required in projecting samples into the high dimensional feature space can be replaced by a much simpler kernel function satisfying the condition

k(x, x_i) = Φ(x) · Φ(x_i)

where Φ is the implicit nonlinear projection.

Figure 2: Automatic face alignment system [13].

Figure 3: Some processed FERET faces, at high resolution.

Several kernel functions, such as polynomials and radial basis functions, have been shown to satisfy Mercer's theorem
and have been used successfully in nonlinear SVMs. In fact, by using different kernel functions, SVMs can implement a variety of learning machines, some of which coincide with classical architectures. Nevertheless, automatic selection of the "right" kernel function and its associated parameters remains problematic and in practice one must resort to trial and error for model selection. A radial basis function (RBF) network is also a kernel-based technique for improved generalization, but it is based instead on regularization theory [17]. The number of radial bases and their centers in a conventional RBF network is predetermined, often by k-means clustering. In contrast, a SVM with the same RBF kernel will automatically determine the number and location of the centers, as well as the weights and thresholds that minimize an upper bound on the expected risk. Recently, Evgeniou et al. [9] have shown that both SVMs and RBFs can be formulated under a unified framework in the context of Vapnik's theory of statistical learning [19]. As such, SVMs provide a more systematic approach to classification than classical RBF and various other neural networks.

3 Experiments

In our study, 256-by-384 pixel FERET "mug-shots" were pre-processed using an automatic face-processing system which compensates for translation, scale, as well as slight rotations. Shown in Figure 2, this system is described in detail in [13] and uses maximum-likelihood estimation for face detection, affine warping for geometric shape alignment and contrast normalization for ambient lighting variations. The resulting output "face-prints" in Figure 2 were standardized to 80-by-40 (full) resolution. These "face-prints" were further sub-sampled to 21-by-12 pixel "thumbnails" for our low resolution experiments. Figure 3 shows a few examples of processed face-prints (note that these faces contain little or no hair information).
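As a concrete illustration of the kernel decision function f(x) = Σ_i y_i α_i k(x, x_i) + b with a Gaussian RBF kernel and a cubic polynomial kernel, here is a small sketch; the support vectors, multipliers α_i, bias, and kernel parameters are hypothetical, not values from the paper's trained classifiers.

```python
# A sketch of the nonlinear SVM decision function described above,
# f(x) = sum_i y_i * alpha_i * k(x, x_i) + b, with a Gaussian RBF kernel and
# a cubic polynomial kernel. The support vectors, multipliers, bias, and
# kernel parameters are hypothetical, not values from trained classifiers.
import math

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian RBF: k(x, z) = exp(-gamma * ||x - z||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def cubic_kernel(x, z, c=1.0):
    """Polynomial of degree 3: k(x, z) = (x.z + c)^3."""
    return (sum(a * b for a, b in zip(x, z)) + c) ** 3

def svm_decision(x, svs, kernel, b=0.0):
    """svs is a list of (x_i, y_i, alpha_i); the sign of the result is the class."""
    return sum(y * a * kernel(x, xi) for xi, y, a in svs) + b

svs = [([0.0, 0.0], +1, 0.7), ([1.0, 1.0], -1, 0.7)]   # hypothetical SVs
print(svm_decision([0.1, 0.0], svs, rbf_kernel))        # positive: class +1
print(svm_decision([0.9, 1.0], svs, cubic_kernel))      # negative: class -1
```

Only the kernel function changes between the two machines; the surrounding decision rule is identical, which is the point of the "kernel trick" discussed above.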
A total of 1755 thumbnails (1044 males and 711 females) were used in our experiments. For each classifier, the average error rate was estimated with 5-fold cross validation (CV), i.e., a 5-way dataset split, with 4/5th used for training and 1/5th used for testing, with 4 subsequent non-overlapping rotations. The average size of the training set was 1496 (793 males and 713 females) and the average size of the test set was 259 (133 males and 126 females).

Table 1: Experimental results with thumbnails.

  Classifier                         Error Rate
                                     Overall   Male     Female
  SVM with Gaussian RBF kernel       3.38%     2.05%    4.79%
  SVM with cubic polynomial kernel   4.88%     4.21%    5.59%
  Large ensemble-RBF                 5.54%     4.59%    6.55%
  Classical RBF                      7.79%     6.89%    8.75%
  Quadratic classifier               10.63%    9.44%    11.88%
  Fisher linear discriminant         13.03%    12.31%   13.78%
  Nearest neighbor                   27.16%    26.53%   28.04%
  Linear classifier                  58.95%    58.47%   59.45%

Figure 4: Error rates of various classifiers

The SVM classifier was first tested with various kernels in order to explore the space of models and performance. A Gaussian RBF kernel was found to perform the best (in terms of error rate), followed by a cubic polynomial kernel as second best. In the large ensemble-RBF experiment, the number of radial bases was incremented until the error fell below a set threshold. The average number of radial bases in the large ensemble-RBF was found to be 1289 which corresponds to 86% of the training set. The number of radial bases for classical RBF networks was heuristically set to 20 prior to actual training and testing. Quadratic, Linear and Fisher classifiers were implemented using Gaussian distributions and in each case a likelihood ratio test was used for classification.
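The 5-fold cross-validation protocol described above (five disjoint folds, each used once for testing while the other four train the classifier, with the five test error rates averaged) can be sketched as follows; the majority-class `train`/`predict` stand-ins are assumptions for illustration, not any of the paper's classifiers.

```python
# A sketch of 5-fold cross-validation as described above: k disjoint folds,
# each held out once for testing while the rest train the classifier, and
# the k test error rates are averaged. The majority-class "classifier" used
# below is an illustrative stand-in, not one of the classifiers in Table 1.
def five_fold_error(samples, train, predict, k=5):
    folds = [samples[i::k] for i in range(k)]            # k disjoint splits
    rates = []
    for i in range(k):
        held_out = folds[i]
        train_set = [s for j, f in enumerate(folds) if j != i for s in f]
        model = train(train_set)
        errors = sum(1 for x, y in held_out if predict(model, x) != y)
        rates.append(errors / len(held_out))
    return sum(rates) / k                                # average error rate

# Toy data: 12 samples of class +1 and 4 of class -1.
majority_train = lambda data: max(set(y for _, y in data),
                                  key=[y for _, y in data].count)
majority_predict = lambda model, x: model
data = [((i,), +1) for i in range(12)] + [((i,), -1) for i in range(4)]
print(five_fold_error(data, majority_train, majority_predict))
```

Averaging over rotated held-out folds, rather than a single split, is what makes the error estimates in Table 1 stable despite the modest dataset size.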
The average error rates of all the classifiers tested with 21-by-12 pixel thumbnails are reported in Table 1 and summarized in Figure 4. The SVMs out-performed all other classifiers, although the performance of large ensemble-RBF networks was close to SVMs. However, nearly 90% of the training set was retained as radial bases by the large ensemble-RBF. In contrast, the number of support vectors found by both SVMs was only about 20% of the training set. We also applied SVMs to classification based on high resolution images. The Gaussian and cubic kernel performed equally well at both low and high resolutions with only a slight 1% error rate difference. We note that as indicated in Table 1, all the classifiers had higher error rates in classifying females, most likely due to the general lack of prominent and distinct facial features in female faces, as compared to males.

4 Discussion

We have presented a comprehensive evaluation of various classification methods for determination of sex from facial images. The non-triviality of this task (made even harder by our "hairless" low resolution faces) is demonstrated by the fact that a linear classifier had an error rate of 60% (i.e., worse than a random coin flip). Furthermore, an acceptable error rate (< 5%) for the large ensemble-RBF network required storage of 86% of the training set (SVMs required about 20%). Storage of the entire dataset in the form of the nearest-neighbor classifier yielded too high an error rate (30%). Clearly, SVMs succeeded in the difficult task of finding a near-optimal class partition in face space with the added economy of a small number of support faces. Given the relative success of previous studies with low resolution faces it is reassuring that 21-by-12 faces can, in fact, be used for reliable sex classification. However, most of the previous studies used datasets of relatively few faces, consequently with little statistical significance in the reported results.
The most directly comparable study to ours is that of Gutta et al. [11], which also used FERET faces. With a dataset of 3000 faces at a resolution of 64-by-72, their hybrid RBF/Decision-Tree classifier achieved a 4% error rate. In our study, with 1800 faces at a resolution of 21-by-12, a Gaussian kernel SVM was able to achieve a 3.4% error rate. Both studies use extensive cross validation to estimate the error rates. Given our results with SVMs, it is clear that better performance at even lower resolutions is possible with this learning technique.

References

[1] V. Bruce, A. M. Burton, N. Dench, E. Hanna, P. Healey, O. Mason, A. Coombes, R. Fright, and A. Linney. Sex discrimination: How do we tell the difference between male and female faces? Perception, 22:131-152, 1993.
[2] R. Brunelli and T. Poggio. HyperBF networks for gender classification. In Proceedings of the DARPA Image Understanding Workshop, pages 311-314, 1992.
[3] R. Brunelli and T. Poggio. Face recognition: Features vs. templates. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(10), October 1993.
[4] A. M. Burton, V. Bruce, and N. Dench. What's the difference between men and women? Evidence from facial measurement. Perception, 22:153-176, 1993.
[5] C. Cortes and V. Vapnik. Support vector networks. Machine Learning, 20, 1995.
[6] Garrison W. Cottrell. Empath: Face, emotion, and gender recognition using holons. In Advances in Neural Information Processing Systems, pages 564-571, 1991.
[7] R. Courant and D. Hilbert. Methods of Mathematical Physics, volume 1. Interscience, New York, 1953.
[8] B. Edelman, D. Valentin, and H. Abdi. Sex classification of face areas: how well can a linear neural network predict human performance. Journal of Biological Systems, 6(3):241-264, 1998.
[9] Theodoros Evgeniou, Massimiliano Pontil, and Tomaso Poggio. A unified framework for regularization networks and support vector machines. Technical Report AI Memo No. 1654, MIT, 1999.
[10] B. A. Golomb, D. T. Lawrence, and T. J. Sejnowski. SEXNET: A neural network identifies sex from human faces. In Advances in Neural Information Processing Systems, pages 572-577, 1991.
[11] S. Gutta, H. Wechsler, and P. J. Phillips. Gender and ethnic classification. In Proceedings of the IEEE International Automatic Face and Gesture Recognition, pages 194-199, 1998.
[12] J. Huang, X. Shao, and H. Wechsler. Face pose discrimination using support vector machines. In Proc. of 14th Int'l Conf. on Pattern Recognition (ICPR '98), pages 154-156, August 1998.
[13] B. Moghaddam and A. Pentland. Probabilistic visual learning for object representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-19(7):696-710, July 1997.
[14] E. Osuna, R. Freund, and F. Girosi. Training support vector machines: an application to face detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 130-136, 1997.
[15] A. J. O'Toole, T. Vetter, N. F. Troje, and H. H. Bulthoff. Sex classification is better with three-dimensional structure than with image intensity information. Perception, 26:75-84, 1997.
[16] P. J. Phillips. Support vector machines applied to face recognition. In M. S. Kearns, S. Solla, and D. Cohen, editors, Advances in Neural Information Processing Systems 11, volume 11, pages 803-809. MIT Press, 1998.
[17] T. Poggio and F. Girosi. Networks for approximation and learning. Proceedings of the IEEE, 78(9):1481-1497, 1990.
[18] S. Tamura, H. Kawai, and H. Mitsumoto. Male/female identification from 8 x 6 very low resolution face images by neural network. Pattern Recognition, 29(2):331-335, 1996.
[19] V. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
[20] Laurenz Wiskott, Jean-Marc Fellous, Norbert Krüger, and Christoph von der Malsburg. Face recognition and gender determination. In Proceedings of the International Workshop on Automatic Face and Gesture Recognition, pages 92-97, 1995.
2000
24
1,822
Recognizing Hand-written Digits Using Hierarchical Products of Experts Guy Mayraz & Geoffrey E. Hinton Gatsby Computational Neuroscience Unit University College London 17 Queen Square, London WC1N 3AR, U.K. Abstract The product of experts learning procedure [1] can discover a set of stochastic binary features that constitute a non-linear generative model of handwritten images of digits. The quality of generative models learned in this way can be assessed by learning a separate model for each class of digit and then comparing the unnormalized probabilities of test images under the 10 different class-specific models. To improve discriminative performance, it is helpful to learn a hierarchy of separate models for each digit class. Each model in the hierarchy has one layer of hidden units and the nth level model is trained on data that consists of the activities of the hidden units in the already trained (n-1)th level model. After training, each level produces a separate, unnormalized log probability score. With a three-level hierarchy for each of the 10 digit classes, a test image produces 30 scores which can be used as inputs to a supervised, logistic classification network that is trained on separate data. On the MNIST database, our system is comparable with current state-of-the-art discriminative methods, demonstrating that the product of experts learning procedure can produce effective generative models of high-dimensional data.

1 Learning products of stochastic binary experts

Hinton [1] describes a learning algorithm for probabilistic generative models that are composed of a number of experts. Each expert specifies a probability distribution over the visible variables and the experts are combined by multiplying these distributions together and renormalizing.
p(d | θ_1, ..., θ_n) = Π_m p_m(d | θ_m) / Σ_c Π_m p_m(c | θ_m)    (1)

where d is a data vector in a discrete space, θ_m is all the parameters of individual model m, p_m(d | θ_m) is the probability of d under model m, and c is an index over all possible vectors in the data space. A Restricted Boltzmann machine [2, 3] is a special case of a product of experts in which each expert is a single, binary stochastic hidden unit that has symmetrical connections to a set of visible units, and connections between the hidden units are forbidden. Inference in an RBM is much easier than in a general Boltzmann machine and it is also much easier than in a causal belief net because there is no explaining away. There is therefore no need to perform any iteration to determine the activities of the hidden units. The hidden states, s_j, are conditionally independent given the visible states, s_i, and the distribution of s_j is given by the standard logistic function:

p(s_j = 1) = 1 / (1 + exp(-Σ_i w_ij s_i))    (2)

Conversely, the hidden states of an RBM are marginally dependent so it is easy for an RBM to learn population codes in which units may be highly correlated. It is hard to do this in causal belief nets with one hidden layer because the generative model of a causal belief net assumes marginal independence. An RBM can be trained using the standard Boltzmann machine learning algorithm which follows a noisy but unbiased estimate of the gradient of the log likelihood of the data. One way to implement this algorithm is to start the network with a data vector on the visible units and then to alternate between updating all of the hidden units in parallel and updating all of the visible units in parallel. Each update picks a binary state for a unit from its posterior distribution given the current states of all the units in the other set.
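One parallel stochastic update of the hidden units, using the logistic probability of Eq. 2, can be sketched as follows; the 2x3 weight matrix is an illustrative assumption, not trained values.

```python
# A sketch of one parallel stochastic update of the hidden units in the
# alternating Gibbs sampling described above: each hidden unit j turns on
# with the logistic probability of Eq. 2, p(s_j = 1) = 1/(1 + exp(-sum_i
# w_ij s_i)), independently given the visible states. The weight matrix is
# an illustrative assumption, not trained values.
import math
import random

def logistic(a):
    return 1.0 / (1.0 + math.exp(-a))

def sample_hidden(visible, weights, rng):
    """Sample all hidden units in parallel given fixed visible states."""
    hidden = []
    for j in range(len(weights[0])):
        p_on = logistic(sum(visible[i] * weights[i][j]
                            for i in range(len(visible))))
        hidden.append(1 if rng.random() < p_on else 0)
    return hidden

weights = [[2.0, -2.0, 0.0],    # w[i][j]: visible unit i -> hidden unit j
           [1.0, 1.0, 0.0]]
print(sample_hidden([1, 0], weights, random.Random(0)))
```

A symmetric routine sampling the visibles given the hiddens gives one full step of the alternating Gibbs chain.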
If this alternating Gibbs sampling is run to equilibrium, there is a very simple way to update the weights so as to minimize the Kullback-Leibler divergence, Q^0 || Q^∞, between the data distribution, Q^0, and the equilibrium distribution of fantasies over the visible units, Q^∞, produced by the RBM [4]:

Δw_ij ∝ <s_i s_j>_{Q^0} - <s_i s_j>_{Q^∞}    (3)

where <s_i s_j>_{Q^0} is the expected value of s_i s_j when data is clamped on the visible units and the hidden states are sampled from their conditional distribution given the data, and <s_i s_j>_{Q^∞} is the expected value of s_i s_j after prolonged Gibbs sampling. This learning rule does not work well because it can take a long time to approach thermal equilibrium and the sampling noise in the estimate of <s_i s_j>_{Q^∞} can swamp the gradient. [1] shows that it is far more effective to minimize the difference between Q^0 || Q^∞ and Q^1 || Q^∞ where Q^1 is the distribution of the one-step reconstructions of the data that are produced by first picking binary hidden states from their conditional distribution given the data and then picking binary visible states from their conditional distribution given the hidden states. The exact gradient of this "contrastive divergence" is complicated because the distribution Q^1 depends on the weights, but [1] shows that this dependence can safely be ignored to yield a simple and effective learning rule for following the approximate gradient of the contrastive divergence:

Δw_ij ∝ <s_i s_j>_{Q^0} - <s_i s_j>_{Q^1}    (4)

For images of digits, it is possible to apply Eq. 4 directly if we use stochastic binary pixel intensities, but it is more effective to normalize the intensities to lie in the range [0,1] and then to use these real values as the inputs to the hidden units. During reconstruction, the stochastic binary pixel intensities required by Eq. 4 are also replaced by real-valued probabilities. Finally, the learning rule can be made less noisy by replacing the stochastic binary activities of the hidden units by their expected values.
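The approximate contrastive-divergence update of Eq. 4 can be sketched for a single training case as follows; the unit values below are illustrative assumptions, and mean-field probabilities stand in for averages over many samples.

```python
# A sketch of the one-step contrastive-divergence rule of Eq. 4 for a single
# training case: the weight change is the difference between the pairwise
# correlation <s_i s_j> measured with the data clamped on the visible units
# and the same correlation after a one-step reconstruction. The unit values
# below are illustrative; mean-field probabilities stand in for averages.
def cd1_weight_update(v0, h0, v1, h1, lr=0.1):
    """Delta w_ij = lr * (v0_i*h0_j - v1_i*h1_j), one (visible x hidden) matrix."""
    return [[lr * (v0[i] * h0[j] - v1[i] * h1[j]) for j in range(len(h0))]
            for i in range(len(v0))]

# Data-driven states (v0, h0) vs. their one-step reconstruction (v1, h1):
dw = cd1_weight_update([1.0, 0.0], [0.9, 0.2], [0.8, 0.1], [0.7, 0.3])
print(dw)
```

When the reconstruction matches the data, the two correlation terms cancel and the weights stop changing, which is the fixed point the rule is driving towards.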
So the learning rule we actually use is:

Δw_ij ∝ <p_i p_j>_{Q^0} − <p_i p_j>_{Q^1}   (5)

Stochastically chosen binary states of the hidden units are still used for computing the probabilities of the reconstructed pixels. This prevents each real-valued hidden probability from conveying more than 1 bit of information to the reconstruction.

2 The MNIST database

MNIST, a standard database for testing digit recognition algorithms, is available at http://www.research.att.com/~yann/ocr/mnist/index.html. MNIST has 60,000 training images and 10,000 test images. Images are highly variable in style but are size-normalized and translated so that the center of gravity of their intensity lies at the center of a fixed-size image of 28 by 28 pixels. A number of well-known learning algorithms have been run on the MNIST database [5], so it is easy to assess the relative performance of a novel algorithm. Some of the experiments in [5] included deskewing images or augmenting the training set with distorted versions of the original images. We did not use deskewing or distortions in our main experiments, so we only compare our results with other methods that did not use them.

METHOD | % ERRORS
Linear classifier (1-layer NN) | 12.0
K-nearest-neighbors, Euclidean | 5.0
1000 RBF + linear classifier | 3.6
Best Back-Prop: 3-layer NN, 500+150 hidden units | 2.95
Reduced Set SVM, degree 5 polynomial | 1.0
LeNet-1 [with 16x16 input] | 1.7
LeNet-5 | 0.95
Product of Experts (separate 3-layer net for each model) | 1.7

Table 1: Performance of various learning methods on the MNIST test set.

The results in Table 1 should be treated with caution. Some attempts to replicate the degree 5 polynomial SVM have produced slightly higher error rates of 1.4% [6], and standard backpropagation can be carefully tuned to achieve under 2% (John Platt, personal communication).
Table 1 shows that it is possible to achieve a result that is comparable with the best discriminative techniques by using multiple PoE models of each digit class to extract scores that represent unnormalized log probabilities. These scores are then used as the inputs to a simple logistic classifier. The fact that a system based on generative models can come close to the very best discriminative systems suggests that the generative models are doing a good job of capturing the distributions.

3 Training the individual PoE models

The MNIST database contains an average of 6,000 training examples per digit, but these examples are unevenly distributed among the digit classes. In order to simplify the research we produced a balanced database by using only 5,400 examples of each digit. The first 4,400 examples were the unsupervised training set used for training the individual PoE models. The remaining examples of each of the 10 digits constituted the supervised training set used for training the logistic classification net that converts the scores of all the PoE models into a classification. The original intensity range in the MNIST images was 0 to 255. This was normalized to the range 0 to 1 so that we could treat intensities as probabilities. The normalized pixel intensities were used as the initial activities of the 784 visible units corresponding to the 28 by 28 pixels. The visible units were fully connected to a single layer of hidden units. The weights between the input and hidden layer were initialized to small, zero-mean, Gaussian-distributed, random values. The 4,400 training examples were divided into 44 mini-batches. One epoch of learning consisted of a pass through all 44 mini-batches in fixed order, with the weights being updated after each mini-batch. We used a momentum method with a small amount of weight decay, so the change in a weight after the t-th mini-batch was:

Δw^t_ij = μ Δw^{t−1}_ij + 0.1 ( <p_i p_j>_{Q^0_t} − <p_i p_j>_{Q^1_t} − 0.0001 w^t_ij )   (6)

where Q^0_t and Q^1_t are averages over the data or the one-step reconstructions for mini-batch t, and the momentum, μ, was 0 for the first 50 weight changes and 0.9 thereafter. The hidden and visible biases, b_i and b_j, were initialized to zero. Their values were similarly altered (by treating them like connections to a unit that was always on) but with no weight decay. Rather than picking one particular number of hidden units, we trained networks with various different numbers of units and then used discriminative performance on the validation set to decide on the most effective number of hidden units. The largest network was the best, even though each digit model contains 392,500 parameters trained on only 4,400 images. The receptive fields learned by the hidden units are quite local.

Figure 1: The areas of the blobs show the mean goodness of validation set digits using only the first-level models with 500 hidden units (white is positive). A different constant is added to all the goodness scores of each model so that rows sum to zero. Successful discrimination depends on models being better on their own class than other models are. The converse is not true: models can be better at reconstructing other, easier classes of digits than their own class.

Figure 2: Cross-reconstruction of 7s and 9s with models containing 25 hidden units (top) and 100 hidden units (bottom). The central horizontal line in each block contains originals, and the lines above and below are reconstructions by the 7s and 9s models respectively. Both models produce stereotyped digits in the small net and much better reconstructions in the large one for both digit classes. The 9s model sometimes tries to close the loop in 7s, and the 7s model tries to open the loop in 9s.
Since the hidden units are fully connected and have random initial weights the learning procedure must infer the spatial proximity of pixels from the statistics of their joint activities. Figure 1 shows the mean goodness scores of all 10 models on all 10 digit classes. Figure 2 shows reconstructions produced by the bottom-level models on previously unseen data from the digit class they were trained on and also on data from a different digit class. With 500 hidden units, the 7s model is almost perfect at reconstructing 9s. This is because a model gets better at reconstructing more or less any image as its set of available features becomes more varied and more local. Despite this, the larger networks give better discriminative information. 3.1 Multi-layer models Networks that use a single layer of hidden units and do not allow connections within a layer have some major advantages over more general networks. With an image clamped on the visible units, the hidden units are conditionally independent. So it is possible to compute an unbiased sample of the binary states of the hidden units without any iteration. This property makes PoE's easy to train and it is lost in more general architectures. If, for example, we introduce a second hidden layer that is symmetrically connected to the first hidden layer, it is no longer straightforward to compute the posterior expected activity of a unit in the first hidden layer when given an image that is assumed to have been generated by the multilayer model at thermal equilibrium. The posterior distribution can be computed by alternating Gibbs sampling between the two hidden layers, but this is slow and noisy. Fortunately, if our ultimate goal is discrimination, there is a computationally convenient alternative to using a multilayer Boltzmann machine. Having trained a one-hidden-layer PoE on a set of images, it is easy to compute the expected activities of the hidden units on each image in the training set. 
These hidden activity vectors will themselves have interesting statistical structure because a PoE is not attempting to find independent causes and has no implicit penalty for using hidden units that are marginally highly correlated. So we can learn a completely separate PoE model in which the activity vectors of the hidden units are treated as the observed data and a new layer of hidden units learns to model the structure of this "data". It is not entirely clear how this second-level PoE model helps as a way of modelling the original image distribution, but it is clear that if a first-level PoE is trained on images of 2's, we would expect the vectors of hidden activities to be very different when it is presented with a 3, even if the features it has learned are quite good at reconstructing the 3. So a second-level model should be able to assign high scores to the vectors of hidden activities that are typical of the 2 model when it is given images of 2's and low scores to the hidden activities of the 2 model when it is given images that contain combinations of features that are not normally present at the same time in a 2. We used a three-level hierarchy of PoE's for each digit class. The levels were trained sequentially and to simplify the research we always used the same number of hidden units at each level. We trained models of five different sizes with 25, 100, 200, 400, and 500 hidden units per level.

4 The logistic classification network

An attractive aspect of PoE's is that it is easy to compute the numerator in Eq. 1, so it is easy to compute a goodness score which is equal to the log probability of a data vector up to an additive constant. Figure 3 shows the goodness of the 7s and 9s models (the most difficult pair of digits to discriminate) when presented with test images of both 7s and 9s. It can be seen that a line can be drawn that separates the two digit sets almost perfectly.
It is also encouraging that all of the errors are close to the decision boundary, so there are no confident misclassifications.

Figure 3: Validation set cross-goodness results of (a) the first-level model and (b) the third-level model of 7s and 9s (horizontal axis: score under the 7s model, first layer). All models have 500 hidden units. The third-level models clearly give higher goodness scores for second-level hidden activities in their own hierarchy than for the hidden activities in the other hierarchy.

The classification network had 10 output units, each of which computed a logit, x, that was a linear function of the goodness scores, g, of the various PoE models, m, on an image, c. The probability assigned to class j was then computed by taking a "softmax" of the logits:

p^c_j = exp(x^c_j) / Σ_k exp(x^c_k),   where x^c_j = b_j + Σ_m g^c_m w_mj   (7)

There were 10 digit classes, each with a three-level hierarchy of PoE models, so the classification network had 30 inputs and therefore 300 weights and 10 output biases. Both weights and biases were initialized to zero. The weights were learned by a momentum version of gradient ascent in the log probability assigned to the correct class. Since there were only 310 weights to train, little effort was devoted to making the learning efficient.

Δw_mj(t) = μ Δw_mj(t−1) + 0.0002 Σ_c g^c_m (t^c_j − p^c_j)   (8)

where t^c_j is 1 if class j is the correct answer for training case c and 0 otherwise. The momentum, μ, was 0.9. The biases were treated as if they were weights from an input that always had a value of 1 and were learned in exactly the same way. In each training epoch the weight changes were averaged over the whole supervised training set.¹ We used separate data for training the classification network because we expect the goodness score produced by a PoE of a given class to be worse and more variable on exemplars of that class that were not used to train the PoE, and it is these poor and noisy scores that are relevant for the real, unseen test data.
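The classification network of Eq. 7 is small enough to sketch directly. The names below are illustrative, and the max-subtraction is a standard numerical-stability trick not mentioned in the text:

```python
import numpy as np

def softmax_classify(g, W, b):
    """Class probabilities from PoE goodness scores (Eq. 7 sketch).

    g: goodness scores of the PoE models on one image; W: matrix of
    weights (n_scores, n_classes); b: output biases (n_classes,).
    """
    x = g @ W + b            # one logit per class
    e = np.exp(x - x.max())  # subtracting the max avoids overflow
    return e / e.sum()
```

With both weights and biases initialized to zero, as in the text, the network starts out assigning equal probability to every class.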
The training algorithm was run using goodness scores from PoE networks with different numbers of hidden units. The results in Table 2 show a consistent improvement in classification error as the number of units in the hidden layers of each PoE increases. There is no evidence of over-fitting, even though large PoE's are very good at reconstructing images of other digit classes or the hidden activity vectors of lower-level models in other hierarchies. It is possible to reduce the error rate by a further 0.1% by averaging together the goodness scores of corresponding levels of model hierarchies with 100 or more units per layer, but this model averaging is not nearly as effective as using extra levels.

Network size | Learning epochs | % Errors
25 | 25 | 3.8
100 | 100 | 2.3
200 | 200 | 2.2
400 | 200 | 2.0
500 | 500 | 1.7

Table 2: MNIST test set error rate as a function of the number of hidden units per level. There is no evidence of overfitting even when over 250,000 parameters are trained on only 4,400 examples.

¹ We held back part of the supervised training set to use as a validation set in determining the optimal number of epochs to train the classification net, but once this was decided we retrained on all the supervised training data for that number of epochs.

5 Model-based normalization

The results of our current system are still not nearly as good as human performance. In particular, it appears the network has only a very limited understanding of image invariances. This is not surprising since it is trained on prenormalized data. Dealing with image invariances better will be essential for approaching human performance. The fact that we are using generative models suggests an interesting way of refining the image normalization. If the normalization of an image is slightly wrong, we would expect it to have lower probability under the correct class-specific model.
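As a toy illustration of this idea (our own construction, for a single 1-D row of pixels), the derivative of a score C with respect to horizontal translation can be assembled from neighbouring-pixel differences and the top-down input to each pixel:

```python
import numpy as np

def translation_gradient(s, top_down):
    """Sketch of dC/dx for a horizontal translation of a pixel row.

    s: 1-D array of pixel intensities; top_down: dC/ds_i, the
    top-down input to each pixel during reconstruction.  ds_i/dx is
    approximated by the central difference of the left and right
    neighbours; boundary pixels are left at zero.
    """
    ds_dx = np.zeros_like(s)
    ds_dx[1:-1] = (s[2:] - s[:-2]) / 2.0  # interior pixels only
    return float(np.sum(ds_dx * top_down))
```

A uniform image has zero translation gradient under this scheme, as expected: shifting it changes nothing.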
So we should be able to use the gradient of the goodness score to iteratively adjust the normalization so that the data fits the model better. Using x translation as an example,

∂C/∂x = Σ_i (∂s_i/∂x)(∂C/∂s_i),   where ∂C/∂s_i = b_i + Σ_j s_j w_ji

and s_i is the intensity of pixel i. ∂s_i/∂x is easily computed from the intensities of the left and right neighbors of pixel i, and ∂C/∂s_i is just the top-down input to a pixel during reconstruction. Preliminary simulations by Yee Whye Teh on poorly normalized data show that this type of model-based renormalization improves the score of the correct model much more than the scores of the incorrect ones and thus eliminates most of the classification errors.

Acknowledgments

We thank Yann Le Cun, Mike Revow and members of the Gatsby Unit for helpful discussions. This research was funded by the Gatsby Charitable Foundation.

References

[1] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Technical Report GCNU TR 2000-004, Gatsby Computational Neuroscience Unit, University College London, 2000.
[2] P. Smolensky. Information processing in dynamical systems: Foundations of harmony theory. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. MIT Press, 1986.
[3] Yoav Freund and David Haussler. Unsupervised learning of distributions of binary vectors using 2-layer networks. In John E. Moody, Steve J. Hanson, and Richard P. Lippmann, editors, Advances in Neural Information Processing Systems, volume 4, pages 912-919. Morgan Kaufmann Publishers, Inc., 1992.
[4] G. E. Hinton and T. J. Sejnowski. Learning and relearning in Boltzmann machines. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. MIT Press, 1986.
[5] Y. LeCun, L. D. Jackel, L. Bottou, A. Brunot, C. Cortes, J. S. Denker, H. Drucker, I. Guyon, U. A.
Muller, E. Sackinger, P. Simard, and V. Vapnik. Comparison of learning algorithms for handwritten digit recognition. In F. Fogelman and P. Gallinari, editors, International Conference on Artificial Neural Networks, pages 53-60, Paris, 1995. EC2 & Cie.
[6] Chris J. C. Burges and B. Schölkopf. Improving the accuracy and speed of support vector machines. In Michael C. Mozer, Michael I. Jordan, and Thomas Petsche, editors, Advances in Neural Information Processing Systems, volume 9, page 375. The MIT Press, 1997.
APRICODD: Approximate Policy Construction using Decision Diagrams

Robert St-Aubin, Dept. of Computer Science, University of British Columbia, Vancouver, BC V6T 1Z4, staubin@cs.ubc.ca
Jesse Hoey, Dept. of Computer Science, University of British Columbia, Vancouver, BC V6T 1Z4, jhoey@cs.ubc.ca
Craig Boutilier, Dept. of Computer Science, University of Toronto, Toronto, ON M5S 3H5, cebly@cs.toronto.edu

Abstract

We propose a method of approximate dynamic programming for Markov decision processes (MDPs) using algebraic decision diagrams (ADDs). We produce near-optimal value functions and policies with much lower time and space requirements than exact dynamic programming. Our method reduces the sizes of the intermediate value functions generated during value iteration by replacing the values at the terminals of the ADD with ranges of values. Our method is demonstrated on a class of large MDPs (with up to 34 billion states), and we compare the results with the optimal value functions.

1 Introduction

The last decade has seen much interest in structured approaches to solving planning problems under uncertainty formulated as Markov decision processes (MDPs). Structured algorithms allow problems to be solved without explicit state-space enumeration by aggregating states of identical value. Structured approaches using decision trees have been applied to classical dynamic programming (DP) algorithms such as value iteration and policy iteration [7, 3]. Recently, Hoey et al. [8] have shown that significant computational advantages can be obtained by using an Algebraic Decision Diagram (ADD) representation [1, 4, 5]. Notwithstanding such advances, large MDPs must often be solved approximately. This can be accomplished by reducing the "level of detail" in the representation and aggregating states with similar (rather than identical) value. Approximations of this kind have been examined in the context of tree-structured approaches [2]; this paper extends this research by applying them to ADDs.
Specifically, the terminals of an ADD will be labeled with the range of values taken by the corresponding set of states. As we will see, ADDs have a number of advantages over trees. We develop two approximation methods for ADD-structured value functions, and apply them to the value diagrams generated during dynamic programming. The result is a near-optimal value function and policy. We examine the tradeoff between computation time and decision quality, and consider several variable reordering strategies that facilitate approximate aggregation.

2 Solving MDPs using Algebraic Decision Diagrams

We assume a fully-observable MDP [10] with finite sets of states S and actions A, transition function Pr(s, a, t), reward function R, and a discounted infinite-horizon optimality criterion with discount factor β. Value iteration can be used to compute an optimal stationary policy π : S → A by constructing a series of n-stage-to-go value functions, where:

V^{n+1}(s) = R(s) + max_{a∈A} { β Σ_{t∈S} Pr(s, a, t) · V^n(t) }   (1)

The sequence of value functions V^n produced by value iteration converges linearly to the optimal value function V*. For some finite n, the actions that maximize Equation 1 form an optimal policy, and V^n approximates its value. ADDs [1, 4, 5] are a compact, efficiently manipulable data structure for representing real-valued functions over boolean variables, B^n → R. They generalize a tree-structured representation by allowing nodes to have multiple parents, leading to the recombination of isomorphic subgraphs and hence to a possible reduction in the representation size. A more precise definition of the semantics of ADDs can be found in [9]. Recently, we applied ADDs to the solution of large MDPs [8], yielding significant space/time savings over related tree-structured approaches. We assume the state of an MDP is characterized by a set of variables X = {X_1, ..., X_n}. Values of variable X_i will be denoted in lowercase (e.g., x_i). We assume each X_i is boolean.¹
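For reference, Eq. 1 over an explicitly enumerated state space can be sketched in a few lines; this is a flat NumPy version, whereas the point of the ADD approach is precisely to avoid this enumeration:

```python
import numpy as np

def value_iteration(P, R, beta=0.9, tol=1e-8):
    """Tabular value iteration (Eq. 1), returning (V, greedy policy).

    P: (n_actions, n_states, n_states) transition probabilities,
    R: (n_states,) rewards, beta: discount factor.
    """
    V = np.zeros(R.shape)
    while True:
        # Q[s, a] = R(s) + beta * sum_t P(s, a, t) * V(t)
        Q = R[:, None] + beta * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```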
¹ An extension to multi-valued variables would be straightforward.

Actions are described using dynamic Bayesian networks (DBNs) [6, 3] with ADDs representing their conditional probability tables. Specifically, a DBN for action a requires two sets of variables, one set X = {X_1, ..., X_n} referring to the state of the system before action a has been executed, and X' = {X'_1, ..., X'_n} denoting the state after a has been executed. Directed arcs from variables in X to variables in X' indicate direct causal influence. The conditional probability table (CPT) for each post-action variable X'_i defines a conditional distribution P^a_{X'_i} over X'_i (i.e., a's effect on X_i) for each instantiation of its parents. This can be viewed as a function P^a_{X'_i}(X_1, ..., X_n), but where the function value (distribution) depends only on those X_j that are parents of X'_i. We represent this function using an ADD. Reward functions can also be represented using ADDs. Figure 1(a) shows a simple example of a single action represented as a DBN as well as a reward function. We use the method of Hoey et al. [8] to perform value iteration using ADDs. We refer to that paper for full details on the algorithm, and present only a brief outline here. The ADD representations of the CPTs for each action, P^a_{X'_i}(X), are referred to as action diagrams, as shown in Figure 1(b), where X represents the set of pre-action variables, {X_1, ..., X_n}. These action diagrams can be combined into a complete action diagram (Figure 1(c)):

P^a(X', X) = Π_{i=1}^{n} [ X'_i · P^a_{X'_i}(X) + (1 − X'_i) · (1 − P^a_{X'_i}(X)) ]   (2)

The complete action diagram represents all the effects of pre-action variables on post-action variables for a given action. The immediate reward function R(X') is also represented as an ADD, as are the n-stage-to-go value functions V^n(X). Given the complete action diagrams for each action, and the immediate reward function, value iteration can be performed by setting V^0 = R and applying Eq. 1,

V^{n+1}(X) = R(X) + max_a { β Σ_{X'} P^a(X', X) · V^n(X') }   (3)
followed by swapping all unprimed variables with primed ones. All operations in Equation 3 are well defined in terms of ADDs [8, 12]. The value iteration loop is continued until some stopping criterion is met. Various optimizations are applied to make this calculation as efficient as possible in both space and time.

Figure 1: ADD representation of an MDP: (a) action network for a single action (top) and the immediate reward network (bottom); (b) matrix and ADD representation of CPTs (action diagrams); (c) complete action diagram.

Figure 2: Approximation of an original value diagram (a) with errors of 0.1 (b) and 0.5 (c).
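The product in Eq. 2 can be illustrated on a flat (non-ADD) representation: for a fixed pre-action state, each factor contributes P^a_{X'_i}(X) when X'_i is true and its complement otherwise. A hypothetical sketch with our own names:

```python
def complete_action_prob(x_prime, cpt_probs):
    """P^a(X' | X) as a product over post-action variables (Eq. 2).

    x_prime: tuple of post-action boolean values (0 or 1);
    cpt_probs: list of P(X'_i = 1 | X) values, each CPT already
    evaluated at the current pre-action state X.  This enumerated
    form stands in for the ADD product used in the paper.
    """
    prob = 1.0
    for xi, pi in zip(x_prime, cpt_probs):
        prob *= pi if xi else (1.0 - pi)
    return prob
```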
The span of a state, s, is given by span(s) = u − l. Point values are represented by setting u = l, and have zero span. Now suppose that the diagram in Figure 2(a) exceeds resource limits, and a reduction in size is necessary to continue the value iteration process. If we choose to no longer distinguish values which are within 0.1 or 0.5 of each other, the diagrams in Figure 2(b) or (c) result, respectively. The states which had proximal values have been merged, where merging a set of states s_1, s_2, ..., s_n with values [l_1, u_1], ..., [l_n, u_n] results in an aggregate state, t, with a ranged value [min(l_1, ..., l_n), max(u_1, ..., u_n)]. The midpoint of the range estimates the true value of the states with minimal error, namely span(t)/2. The span of V is the maximum of all spans in the value diagram, and therefore the maximum error in V is simply span(V)/2 [2]. The combined span of a set of states is the span of the pair that would result from merging them all. The extent of a value diagram V is the combined span of the portion of the state space which it represents. The span of the diagram in Figure 2(c) is 0.5, but its extent is 8.7. ADD-structured value functions can be leveraged by approximation techniques because approximations can always be performed directly, without pre-processing techniques such as variable reordering. Of course, variable reordering can still play an important computational role in ADD-structured methods, but it is not needed for discovering approximations.

3.2 Value Iteration with Approximate Value Functions

Approximate value iteration simply means applying an approximation technique to the n-stage-to-go value function generated at each iteration of Eq. 3. Available resources might dictate that ADDs be kept below some fixed size. In contrast, decision quality might require errors below some fixed value, referred to as the pruning strength, δ.
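The merging of proximal leaf values described above can be sketched over a flat list of ranges; the greedy left-to-right policy here is our own simplification of the ADD-based operation:

```python
def merge_leaves(leaves, delta):
    """Merge ranged leaves whose combined span stays within delta.

    leaves: list of (lo, hi) value ranges.  Adjacent ranges (after
    sorting) are combined whenever the merged range's span, i.e.
    max(hi) - min(lo), does not exceed delta.  Returns the reduced
    list of ranges; midpoints then serve as the approximate values.
    """
    merged = []
    for lo, hi in sorted(leaves):
        if merged and max(hi, merged[-1][1]) - min(lo, merged[-1][0]) <= delta:
            mlo, mhi = merged[-1]
            merged[-1] = (min(lo, mlo), max(hi, mhi))
        else:
            merged.append((lo, hi))
    return merged
```

For instance, with delta = 0.1 the point values 1.0 and 1.05 collapse into the single range [1.0, 1.05] while a distant value stays separate.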
The remainder of this paper will focus on the latter, although we have examined the former as well [9]. Thus, the objective of a single approximation step is a reduction in the size of a ranged value ADD by replacing all leaves which have combined spans less than the specified error bound with a single leaf. Given a leaf [l, u] in V, the set of all leaves [l_i, u_i] such that the combined span of [l_i, u_i] with [l, u] is less than the specified error are merged. Repeating this process until no more merges are possible gives the desired result. We have also examined a quicker, but less exact, method for approximation, which exploits the fact that simply reducing the precision of the values at the leaves of an ADD merges the similar values. We defer explanations to the longer version of this paper [9]. The sequence of ranged value functions, V^n, converges after n' iterations to an approximate (non-ranged) value function, Ṽ, obtained by taking the mid-points of each ranged terminal node in V^{n'}. The pruning strength, δ, then gives the percentage difference between Ṽ and the optimal n'-stage-to-go value function V^{n'}. The value function Ṽ induces a policy, π̃, the value of which is V^π̃. In general, however, V^π̃ ≠ Ṽ [11].²

3.3 Variable Reordering

As previously mentioned, variable reordering can have a significant effect on the size of an ADD, but finding the variable ordering which gives rise to the smallest ADD for a boolean function is co-NP-complete [4]. We examine three reordering methods. The first two are standard for reordering variables in BDDs: Rudell's sifting algorithm and random reordering [12]. The last reordering method we consider arises in the decision tree induction literature and is related to the information gain criterion. Given a value diagram V with extent δ, each variable x is considered in turn. The value diagram is restricted first with x = true, and the extent δ_t and the number of leaves n_t are calculated for the restricted ADD.
Similar values δ_f and n_f are found for the x = false restriction. If we collapsed the entire ADD into a single node, assuming a uniform distribution over values in the resulting range gives us the entropy for the entire ADD:

E = −∫ p(v) log(p(v)) dv = log(δ),   (4)

and represents our degree of uncertainty about the values in the diagram. Splitting the values with the variable x results in two new value diagrams, for each of which the entropy is calculated. The gain in information (decrease in entropy) values are used to rank the variables, and the resulting order is applied to the diagram. This method will be referred to as the minimum span method.

² In fact, the equality arises if and only if Ṽ = V*, where V* is the optimal value function.

4 Results

The procedures described above were implemented using a modified version of the CUDD package [12], a library of C routines which provides support for manipulation of ADDs. Experimental results from this section were all obtained using one processor on a dual-processor Pentium II PC running at 400MHz with 0.5Gb of RAM. Our approximation methods were tested on various adaptations of a process planning problem taken from [7, 8].³

4.1 Approximation

All experiments in this section were performed on problem domains where the variable ordering was the one selected implicitly by the constructors of the domains.⁴

Value Function | δ (%) | time (s) | iter | nodes (int) | leaves | |V^π̃ − V*| (%)
Optimal | 0 | 270.91 | 44 | 22170 | 527 | 0.0
Approximate | 1 | 562.35 | 44 | 17108 | 117 | 0.13
 | 2 | 547.00 | 44 | 15960 | 77 | 0.14
 | 3 | 112.7 | 15 | 15230 | 58 | 5.45
 | 4 | 68.53 | 12 | 14510 | 48 | 1.20
 | 5 | 38.06 | 10 | 11208 | 38 | 2.48
 | 10 | 6.24 | 6 | 3739 | 15 | 11.33
 | 15 | 0.70 | 4 | 580 | 9 | 14.11
 | 20 | 0.57 | 4 | 299 | 6 | 16.66
 | 30 | 0.05 | 2 | 50 | 3 | 25.98
 | 40 | 0.07 | 2 | 10 | 2 | 30.28
 | 50 | 0.04 | 1 | 0 | 1 | 31.25

Table 1: Comparing optimal with approximate value iteration on a domain with 28 boolean variables.

In Table 1 we compare optimal value iteration using ADDs (SPUDD as presented in [8]) with approximate value iteration using different pruning strengths δ.
³ See [9] for details.
⁴ Experiments showed that conclusions in this section are independent of variable order.

In order to avoid overly aggressive pruning in the early stages of value iteration, we need to take into account the size of the value function at every iteration. Therefore, we use a sliding pruning strength specified as δ Σ_{i=0}^{n} β^i extent(R), where R is the initial reward diagram, β is the discount factor introduced earlier, and n is the iteration number. We illustrate running time, value function size (internal nodes and leaf nodes), number of iterations, and the average sum of squared difference between the optimal value function, V*, and the value of the approximate policy, V^π̃. It is important to note that the pruning strength is an upper bound on the approximation error. That is, the optimal values are guaranteed to lie within the ranges of the approximate ranged value function. However, as noted earlier, this bound does not hold for the value of an induced policy, as can be seen at 3% pruning in the last column of Table 1. The effects of approximation on the performance of the value iteration algorithm are threefold. First, the approximation itself introduces an overhead which depends on the size of the value function being approximated. This effect can be seen in Table 1 at low pruning strengths (1-2%), where the running time is increased from that taken by optimal value iteration. Second, the ranges in the value function reduce the number of iterations needed to attain convergence, as can be seen in Table 1 for pruning strengths greater than 2%. However, for the lower pruning strengths, this effect is not observed. This can be explained by the fact that a small number of states with values much greater (or much lower) than that of the rest of the state space may never be approximated. Therefore, to converge, this portion of the state space requires the same number of iterations as in the optimal case.⁵
The third effect of approximation is to reduce the size of the value functions, thus reducing the per-iteration computation time during value iteration. This effect is clearly seen at pruning strengths greater than 2%, where it overtakes the cost of approximation and generates significant time and space savings. Speed-ups of 2 and 4 fold are obtained for pruning strengths of 3% and 4% respectively. Furthermore, fewer than 60 leaf nodes represent the entire state space, while value errors in the policy do not exceed 6%. This confirms our initial hypothesis that many values within a given domain are very similar and thus, replacing such values with ranges drastically reduces the size of the resulting diagram without significantly affecting the quality of the resulting policy. Pruning above 5% has a larger error, and takes a very short time to converge. Pruning strengths of more than 40% generate policies which are close to trivial, where a single action is always taken.

4.2 Variable reordering

Figure 3: Sizes of final value diagrams plotted as a function of the problem domain size (number of boolean variables), comparing the intuitive (unshuffled) order with no reordering against shuffled orders with no reordering and with minimum span, random, and sifting reordering.

Results in the previous section were all generated using the "intuitive" variable ordering for the problem at hand. It is probable that such an ordering is close to optimal, but such orderings may not always be obvious, and the effects of a poor ordering on the resources required for policy generation can be extreme. Therefore, to characterize the reordering methods discussed in Section 3.3, we start with initially randomly shuffled orders and compare the sizes of the final value diagrams with those found using the intuitive order.
(We are currently looking into alleviating this effect in order to increase convergence speed for low pruning strengths.) In Figure 3 we present results obtained from approximate value iteration with a pruning strength of 3% applied to a range of problem domain sizes. In the absence of any reordering, diagrams produced with randomly shuffled variable orders are up to 3 times larger than those produced with the intuitive (unshuffled) order. The minimum-span reordering method, starting from a randomly shuffled order, finds orders which are equivalent to the intuitive one, producing value diagrams of nearly identical size. The sifting and random reordering methods find orders which reduce the sizes further, by up to a factor of 7. Reordering attempts take time, but on the other hand, DP is faster with smaller diagrams. Value iteration with the sifting reordering method (starting with shuffled orders) was found to run in time similar to that of value iteration with the intuitive ordering, while the other reordering methods took slightly longer. All reordering methods, however, reduced running times and diagram sizes from those using no reordering, by factors of 3 to 5.

5 Concluding Remarks

We examined a method for approximate dynamic programming for MDPs using ADDs. ADDs are found to be ideally suited to this task. The results we present clearly show their applicability on a range of MDPs with up to 34 billion states. Investigations into the use of variable reordering during value iteration have also proved fruitful, and yield large improvements in the sizes of value diagrams. Results show that our policy generator is robust to the variable order, so this is no longer a constraint for problem specification.

References
[1] R. Iris Bahar, Erica A. Frohm, Charles M. Gaona, Gary D. Hachtel, Enrico Macii, Abelardo Pardo, and Fabio Somenzi. Algebraic decision diagrams and their applications. In International Conference on Computer-Aided Design, pages 188-191.
IEEE, 1993.
[2] Craig Boutilier and Richard Dearden. Approximating value trees in structured dynamic programming. In Proceedings ICML-96, Bari, Italy, 1996.
[3] Craig Boutilier, Richard Dearden, and Moises Goldszmidt. Exploiting structure in policy construction. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI-95), 1995.
[4] Randal E. Bryant. Graph-based algorithms for boolean function manipulation. IEEE Transactions on Computers, C-35(8):677-691, 1986.
[5] E. M. Clarke, K. L. McMillan, X. Zhao, M. Fujita, and J. Yang. Spectral transforms for large boolean functions with applications to technology mapping. In DAC, pages 54-60. ACM/IEEE, 1993.
[6] Thomas Dean and Keiji Kanazawa. A model for reasoning about persistence and causation. Computational Intelligence, 5(3):142-150, 1989.
[7] Richard Dearden and Craig Boutilier. Abstraction and approximate decision theoretic planning. Artificial Intelligence, 89:219-283, 1997.
[8] Jesse Hoey, Robert St-Aubin, Alan Hu, and Craig Boutilier. SPUDD: Stochastic planning using decision diagrams. In Proceedings of UAI-99, Stockholm, 1999.
[9] Jesse Hoey, Robert St-Aubin, Alan Hu, and Craig Boutilier. Optimal and approximate planning using decision diagrams. Technical Report TR-00-05, UBC, June 2000.
[10] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley, New York, NY, 1994.
[11] Satinder P. Singh and Richard C. Yee. An upper bound on the loss from approximate optimal-value function. Machine Learning, 16:227-233, 1994.
[12] Fabio Somenzi. CUDD: CU decision diagram package. Available from ftp://vlsi.colorado.edu/pub/, 1998.
Four-legged Walking Gait Control Using a Neuromorphic Chip Interfaced to a Support Vector Learning Algorithm Susanne Still NEC Research Institute 4 Independence Way, Princeton NJ 08540, USA sasa@research.nj.nec.com Klaus Hepp Institute of Theoretical Physics ETH Zurich, Switzerland Bernhard Scholkopf Microsoft Research Institute 1 Guildhall Street, Cambridge, UK bsc@scientist.com Rodney J. Douglas Institute of Neuroinformatics ETH/UNI Zurich, Switzerland Abstract To control the walking gaits of a four-legged robot we present a novel neuromorphic VLSI chip that coordinates the relative phasing of the robot's legs, similar to how spinal Central Pattern Generators are believed to control vertebrate locomotion [3]. The chip controls the leg movements by driving motors with time-varying voltages which are the outputs of a small network of coupled oscillators. The characteristics of the chip's output voltages depend on a set of input parameters. The relationship between input parameters and output voltages can be computed analytically for an idealized system. In practice, however, this ideal relationship is only approximately true due to transistor mismatch and offsets. Fine-tuning of the chip's input parameters is done automatically by the robotic system, using an unsupervised Support Vector (SV) learning algorithm introduced recently [7]. The learning requires only that a description of the desired output is given. The machine learns from (unlabeled) examples how to set the parameters of the chip in order to obtain a desired motor behavior. 1 Introduction Modern robots still lag far behind animals in their capability for legged locomotion. Four-legged animals use distinct walking gaits [1], resulting for example in a reduction of energy consumption at high speeds [5]. Similarly, the use of different gaits can allow legged robots to adjust their walking behavior not only for speed but also to the terrain they encounter.
Coordinating the rhythmic movement patterns necessary for locomotion is a difficult task involving a large number of mechanical degrees of freedom (DOF) and input from many sensors, and considerable advantages may be gained by emulating control architectures found in animals. Neuroscientists have found increasingly strong evidence during the past century to support the hypothesis that centers in the nervous system, called Central Pattern Generators (CPGs), generate rhythmic output responsible for coordinating the large number of muscles needed for locomotion [2]. CPGs are influenced by signals from the brain stem and the cerebellum, brain structures in which locomotive adaptation is believed to take place [11]. This architecture greatly simplifies the control problem for the brain. The brain only needs to set the general level of activity, leaving it to the CPG to coordinate the complex pattern of muscle activity required to generate locomotion [3].

[Figure 1: A: Sketch of the control architecture of the robot (SV learning algorithm, parameter-setting and data-acquisition programs, DAC input port, walking gait control chip, motors and sensors). Thick arrows indicate the learning loop. B: Sketch of simplified control architecture for locomotion of vertebrates (brain, CPG, muscles and sensors).]

We make use of these biological findings by implementing a similar control architecture to control a walking machine. A neuromorphic Gait Controller (GC) chip produces time-varying output voltages that control the movement of the robot's legs. Their frequency, phase relationships and duty cycles determine the resulting walking behavior (step frequency, walking gait and direction of motion) and depend on a small set of control parameters.
For an ideal system, the relationship between control parameters and output voltages can be determined analytically, but deviations of the chip from the ideal have to be compensated for by other means. Since the goal here is that the resulting machine works autonomously, we propose a learning procedure to solve this problem. The robot is given the specifications of the desired movement sequence and explores parameter combinations in the input parameter space of its GC chip, keeping only those leading to a movement that is correct within some tolerance. It then locates the region in input parameter space that contains most of these parameter combinations using an algorithm [7] that extends SV learning to unlabelled data. 2 The robotic system The robotic system consists of (i) a body with one degree of freedom per leg and a potentiometer attached to each motor that serves as a sensor providing information about the angular displacement of the leg, (ii) the neuromorphic Gait Controller (GC) chip and (iii) a PC on which algorithms are run to (a) acquire data from chip and sensors, (b) set the chip's input parameters and (c) implement the learning algorithm. The control architecture is inspired by the architecture used for locomotion in vertebrates (see Fig. 1 and Sec. 1). As in biology, the existence of the GC chip considerably simplifies the control task. The computer only needs to set the input parameters of the GC chip, leaving the chip to coordinate the pattern of motor movements necessary to generate locomotion of the robot. The circuitry on the GC chip is based on an analysis [8] of circuits originating from M. W. Tilden (e.g. [4]). The GC chip contains five oscillators which can be inter-connected in different ways. The chip can be used in three reasonable configurations, one of which, a chain of four oscillators (Fig. 2), is used in the present work. Each oscillator (see Fig. 3) consists of two similar sub-circuits.
In the following, subscripts i ∈ {1, ..., 4} will denote the oscillator identity and subscripts k ∈ {l, r} will denote one of the two sub-circuits within an oscillator. Here l stands for the left side of the oscillator circuit and r for the right side. Each sub-circuit has a capacitor connecting an input node to a node V_{i,k}, to which the input node of an inverter is connected. The output node of the inverter is called V_{out,i,k}. (The robot's aluminum body is 12 cm long and 6.2 cm wide. It has four DC motors which drive aluminum legs attached at right angles to the plane of the body. Each leg ends in a foot that contains a small electromagnet which is activated during the stance phase of the leg and deactivated during the swing phase. Leg and foot together have a length of 6.5 cm. The robot walks on a metal floor so that the electromagnet increases the friction during the stance phase. For further details see [7].)

[Figure 2: Sketch of the configuration in which the chip is used: a chain of four coupled oscillators. Each of the thin lines is connected to a pad. The round symbols stand for oscillators, numbered corresponding to the text. The thick arrows stand for the transmission gates which couple the oscillators (see circuit diagram in Fig. 3). The arrows that lead to the four legs represent the outputs of the oscillators.]

Finally, an n-FET transistor with gate voltage V_{b,i,k} is connected between V_{i,k} and ground. An oscillator is obtained by connecting the input node of one sub-circuit to the output node of the other and vice versa. The output voltages of a single oscillator are two mirror-image step functions at V_{out,i,l} and V_{out,i,r}. These voltages control the stepping movements of one leg. Two oscillators, j and j+1 (j ∈ {1, ..., 3}), are coupled with two transmission gates. One is connected between V_{out,j,l} and V_{j+1,l}.
The current that flows through it depends on the bias voltage V_{b,j j+1,l}. Likewise, the other transmission gate connects V_{out,j,r} and V_{j+1,r} and has the bias voltage V_{b,j j+1,r}. Note that the coupling is asymmetric, affecting only oscillator j+1. The voltages at the input nodes to the inverters of oscillator j are not affected by the coupling, since the inverters act as impedance buffers. The chip's output is characterized by the frequency (common to all oscillators), the four duty cycles of the oscillators and three phase lags between oscillators. The phase lags determine which gait the robot adopts. The duty cycles of the oscillators set the ratio between stance and swing phase of the legs. Certain combinations of duty cycles differing from 50% make the robot turn [8]. For a set of constant input parameters, {V_{b,i,r}, V_{b,i,l}, V_{b,j j+1,r}, V_{b,j j+1,l}}, a rhythmic output is produced with oscillation period P, duty cycles D_i and phase shifts φ_j, where i ∈ {1, ..., 4} and j ∈ {1, ..., 3}. Analysis of the resulting circuit reveals [8] how the output characteristics of the chip depend on the input parameters. Assume that all transistors on the chip are identical and that the peak voltages at node V_{1,l} and at node V_{1,r} are identical and equal to V_max. For a certain range of input parameters, the period of the oscillators is given by the period of the first oscillator in the chain (called the master oscillator),

P = C (V_max − V_th) (e^{−(q/kT) κ V_{b,1,l}} + e^{−(q/kT) κ V_{b,1,r}}) / I_on    (1)

where C = 5.159 × 10⁻¹⁰ F is the capacitance and I_on is the drain-source leakage current of the n-FET. The threshold voltage of the inverter, V_th = 1.345 V, is calculated from the process parameters [9]. V_max = 3.23 V, I_on = 2.2095 × 10⁻¹⁶ A and κ = 0.6202 are estimated with a least-squares fit to the data (Fig. 4a). T is the temperature, k the Boltzmann constant and q the electron charge. Let the duty cycle be defined as the fraction of the period during which V_{out,i,l} is high.
The master oscillator's duty cycle is

D_1 = 1 / [1 + e^{(q/kT) κ (V_{b,1,r} − V_{b,1,l})}]    (2)

A very simple requirement for the controller is to produce a symmetric waveform for straight forward locomotion. For this, all oscillators must have a duty cycle of 1/2 (= 50%) [8]. This can be implemented by a symmetric circuit (identical control voltages on both the right and the left side: V_{b,j j+1,l} = V_{b,j j+1,r} =: V_{b,j j+1} for all j ∈ {1, ..., 3} and V_{b,i,l} = V_{b,i,r} =: V_{b,i} for all i ∈ {1, ..., 4}). For simplicity, let V_{b,i} = V_b for all i. Then the phase lag between oscillators j and j+1 is given by (compare Fig. 4b)

φ_j = 1/2 + [kT / (2q(V_max − V_th))] · (β e^{−(q/kT) κ (V_{b,j j+1} + V_b)} − 1)^{−1} · ln[ (γ(V_th) − μ(V_th) e^{(q/kT) κ (V_{b,j j+1} + V_b)}) / (γ(V_0) − μ(V_0) e^{(q/kT) κ (V_{b,j j+1} + V_b)}) ]    (3)

where V_0 = 0.1 V and β = I_op e^{(q/kT) κ V_dd} / I_on; γ(V) = (I_on + I_op e^{(q/kT) V}) e^{(q/kT) κ V_dd}; μ(V) = I_on e^{(q/kT) V}.

[Figure 3: Two oscillators are coupled through a transmission gate. The gate voltage on the n-FET of each transmission gate is set to the complementary voltage of the p-FET of the same transmission gate by the circuits drawn next to the transmission gates. These circuits are controlled by the bias voltages V_{b,j j+1,l} and V_{b,j j+1,r} and copy the voltages V_{b,j+1,l} and V_{b,j+1,r} to nodes 2 and 4, respectively, while the voltages at nodes 1 and 3 are (V_dd − V_{b,j j+1,l}) and (V_dd − V_{b,j j+1,r}), respectively. The symbols on the right correspond to the symbols in Fig. 2.]

3 Learning

In theory, all duty cycles should be 1/2 for the symmetric circuit. In the real system, the duty cycle changes with the phase lag (see Fig. 4c) due to transistor mismatch. Thus, to obtain a duty cycle of 1/2, V_{b,j j+1,l} might have to differ from V_{b,j j+1,r}. Parameter points which lead to both a desired phase lag and a duty cycle of 1/2 lie in a two-dimensional space spanned by V_{b,j j+1,l} and V_{b,j j+1,r}. These parameters are learned.
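Equations (1) and (2) can be evaluated numerically. The following sketch plugs in the constants quoted in the text; the temperature value is my own assumption and the numbers are illustrative only, not a reproduction of the chip's measured behavior:

```python
import math

# Hedged numerical sketch of the period formula (1) and duty-cycle formula (2),
# using the constants quoted in the text. T = 300 K is my own assumption.
k = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602177e-19   # electron charge, C
T = 300.0          # assumed temperature, K
C = 5.159e-10      # capacitance, F
V_max, V_th = 3.23, 1.345        # volts
I_on, kappa = 2.2095e-16, 0.6202

def period(Vb_l, Vb_r):
    """Eq. (1): P = C (V_max - V_th)(e^{-(q/kT) k Vb_l} + e^{-(q/kT) k Vb_r}) / I_on."""
    a = q / (k * T) * kappa
    return C * (V_max - V_th) * (math.exp(-a * Vb_l) + math.exp(-a * Vb_r)) / I_on

def duty_cycle(Vb_l, Vb_r):
    """Eq. (2): D1 = 1 / (1 + e^{(q/kT) k (Vb_r - Vb_l)})."""
    a = q / (k * T) * kappa
    return 1.0 / (1.0 + math.exp(a * (Vb_r - Vb_l)))
```

For equal left and right bias voltages the duty cycle is exactly 1/2, matching the symmetric-circuit requirement for straight forward locomotion; raising a bias voltage shortens the period.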
First, a subset X of this input parameter space is chosen according to the estimate given by (3). X is scanned and at each point the output characteristics of the GC chip are determined. If they match the desired output characteristics (within specified tolerances), this point is added to the training data set D ⊂ X. (The desired duty cycle of 1/2 is an example, leading to forward locomotion of the test robot. In the same way, any other value for the duty cycle can be learned; for examples see [8].)

[Figure 4: (a): Oscillation period P (points = data, 3% error) as a function of the bias voltage V_{b,i,l} = V_{b,i,r} =: V_b. V_{b,i,l} = V_{b,i,r} implies that the duty cycle of the oscillation is 1/2 (see (2)). The oscillation period follows (1) (solid line). (b): Phase lag between the first two oscillators in a chain of 4 oscillators. Function given in (3) (solid line) and data points. V_max and I_on are as determined in (a); I_op, as estimated by data fitting, is 1.56 × 10⁻¹⁹ A. (c): Duty cycle of the second oscillator in a chain of 4 oscillators as a function of the phase lag between oscillators 1 and 2.]

After the scan is completed, the training data is transformed by a feature map Φ : X → F into a feature space F such that if x, y ∈ X, the dot product (Φ(x) · Φ(y)) can be computed by evaluation of the Gaussian kernel (which fulfills Mercer's condition [6])

k(x, y) = (Φ(x) · Φ(y)) = e^{−‖x−y‖² / 2σ²}    (4)

In feature space, a hyperplane (w · Φ(x)) − ρ = 0 separating most of the data from the origin with large margin is found by solving the constrained optimization problem (see [7])

min_{w ∈ F, ξ ∈ ℝ^ℓ, ρ ∈ ℝ}  (1/2)‖w‖² + (1/(νℓ)) Σ_{i=1}^{ℓ} ξ_i − ρ    (5)
subject to  (w · Φ(v_i)) ≥ ρ − ξ_i,  ξ_i ≥ 0    (6)

A decision function, f, is computed, which is +1 on a region in input space capturing most of the training data points and −1 elsewhere. The approximate geometrical center of this region is used as the input to the GC chip. The algorithmic implementation of the learning procedure uses the quadratic optimizer LOQO, implementing a primal-dual interior-point method [10]. The parameter ν upper-bounds the fraction of outliers (see [7], Proposition 4), which is related to the noise that the training data is subject to. In our experiments, ν = 0.2 is chosen such that the algorithm disregards approximately as many points as can be expected to be falsely included in the training data given the noise of the data acquisition.

4 Results

As an example, the input parameters are learned for a forward walk, requiring phase shifts of φ_1 = φ_2 = φ_3 = 0.75 and duty cycles of D_1 = D_2 = D_3 = D_4 = 0.5. The oscillation period P = 0.89 s and the duty cycle D_1 = 0.5 are set according to (1) and (2). The value of P takes the mechanics of the robot into account [8]. The scanning step size is 2 mV and the tolerances are chosen to be ±0.015 for the phase lags and ±0.05 for the duty cycles. The parameters V_{b,j j+1,l} and V_{b,j j+1,r} are learned in sequence, first for j = 1 (see Fig. 5). The result is applied to the GC chip.
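The learning step can be sketched concretely. Instead of LOQO, this illustrative pure-Python sketch solves the dual of (5)-(6), namely min_α (1/2)αᵀKα subject to 0 ≤ α_i ≤ 1/(νℓ) and Σ_i α_i = 1, with a simple Frank-Wolfe loop; function names and toy data are my own:

```python
import math

# Hedged sketch of one-class SV learning (not the authors' LOQO-based code):
# Frank-Wolfe on the dual min_a (1/2) a^T K a with 0 <= a_i <= 1/(nu*l) and
# sum_i a_i = 1. Points whose a_i sits at the upper bound are treated as
# outliers; nu upper-bounds their fraction, as stated in the text.

def gaussian_kernel(x, y, sigma=1.0):
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

def one_class_alphas(X, nu=0.2, sigma=1.0, iters=300):
    l = len(X)
    K = [[gaussian_kernel(X[i], X[j], sigma) for j in range(l)] for i in range(l)]
    cap = 1.0 / (nu * l)
    alpha = [1.0 / l] * l                       # feasible starting point
    for t in range(iters):
        grad = [sum(K[i][j] * alpha[j] for j in range(l)) for i in range(l)]
        # Linear minimizer over the capped simplex: fill smallest gradients first.
        s, mass = [0.0] * l, 1.0
        for i in sorted(range(l), key=grad.__getitem__):
            s[i] = min(cap, mass)
            mass -= s[i]
            if mass <= 1e-12:
                break
        step = 2.0 / (t + 2)                    # standard Frank-Wolfe step size
        alpha = [(1 - step) * a + step * si for a, si in zip(alpha, s)]
    return alpha
```

On a toy set of three nearby points plus one far-away point, the distant point picks up the largest weight (it becomes a bounded support vector, i.e. a tolerated outlier), while the coefficients stay on the capped simplex.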
Then V_{b,23,l} and V_{b,23,r} are learned and the result is also applied to the GC chip. Finally, V_{b,34,l} and V_{b,34,r} are learned. All input parameters of the GC chip are set to the learned values, and the robot moves forward using a walk gait (see Fig. 6). The phase relationships of the robot's leg movements are measured. Simultaneously, the robot's trajectory is tracked using a video camera monitoring the robot from above and two Light Emitting Diodes (LEDs) attached to the robot's front and rear. The robot has learned to move in the forward direction, using a walk gait, as desired, despite the inability to theoretically predict the exact values of the GC chip's bias voltages.

[Figure 5: Result of learning values of the bias voltages V_{b,12,l} and V_{b,12,r} which lead to φ_1 = 0.75 and D_2 = 0.5. Correctly classified (stars) and misclassified (crosses) training data (left) and test data (right). Outlined regions: learned by the algorithm from the training data. The training data is obtained from one scan of the displayed rectangular region X. The test data is a set of points obtained from three scans.]

5 Discussion

We have introduced a novel neuromorphic chip for inter-leg coordination of a walking machine. This chip successfully controls a four-legged robot. A Support Vector algorithm enables the robotic system to learn a desired movement sequence. We have demonstrated this here using the walk gait as an example. Other gaits have also been learned [8]. The architecture we used reduces the learning of a complex motor behavior to a classification task. The classifier we used requires only a few examples, making the learning efficient, and it can handle noisy data, making the learning robust.
The chip need not be interfaced to a computer; it can control the robot without any need for software once the input parameters of the chip are known. Note that the chip's bias voltages can also be changed by simple sensors in a direct way, enabling the robot to adapt its behavior according to sensory information. This point is elaborated in [8]. However, the chip-computer interface creates a hybrid system in which the complex movement pattern required to make a four-legged machine locomote is controlled by the chip, while algorithms run on the computer can focus on more demanding tasks. This architecture enables the robotic system to exploit the motor abilities it has due to the GC chip, independent of the particular physical shape of the robot. The hybrid system could also be useful for the development of a second generation of neuromorphic motor control chips, able to solve more complex tasks. Furthermore, the control circuit could easily be extended to the control of six (or more) legs simply by the addition of two (or more) oscillators, without increasing drastically in complexity, as the number of control parameters is small and scales linearly with the number of oscillators. Similarly, the circuit could be expanded to control n-jointed legs if each of the four oscillators becomes itself the master of a chain of n oscillators. Finally, the learning procedure introduced here could be used as a general method for fine-tuning of neuromorphic aVLSI chips. Acknowledgments S. S. is grateful to the late Misha Mahowald for inspiring discussions and indebted to Mark W. Tilden for discussing his circuits. We thank Adrian M. Whatley for useful comments and technical assistance. For helpful discussions we thank William Bialek, Gert Cauwenberghs, Giacomo Indiveri, Shih-Chii Liu, John C. Platt, Alex J. Smola, John Shawe-Taylor and Robert C. Williamson. S. S.
was supported by CSEM, Neuchâtel, the Physics Department of ETH Zurich and the SPP of the Swiss National Science Foundation.

[Figure 6: Left: Control voltages (upper plot) and angular displacements of the legs, as measured by potentiometers attached to the motors (lower plot), as a function of time, shown for a cycle of the rhythmic movement. The four legs are distinguished by the abbreviations: left front (LF; dots), right front (RF; circles), left hind (LH; crosses) and right hind (RH; stars). The legs move in succession with a phase shift of 90°: LF, RH, RF, and finally LH, a typical walk sequence [1]. Note that the data is acquired with a limited sampling rate; thus the duty cycles of the control voltages appear to deviate from 50%. However, the data on the right proves that the duty cycles are sufficiently close to 50% to cause the robot to walk forward in a straight line, as desired. Right: Position of the robot's center of gravity as a function of time. Upper plot: x-coordinate; lower plot: y-coordinate. Errors are due mainly to the extension of the images of the LEDs in the image frames obtained from the CCD camera. The y-coordinate is constant within the error. This shows that the robot moves forward on a straight line. The robot moves at roughly 4.7 cm s⁻¹.]

References
[1] R. McN. Alexander. The Gaits of Bipedal and Quadrupedal Animals. Intl. J. Robotics Research, 1984, 3, pp. 49-59.
[2] F. Delcomyn. Neural Basis of Rhythmic Behaviour in Animals. Science, 1980, 210, pp. 492-498.
[3] S. Grillner, 1981. Control of locomotion in bipeds, tetrapods and fish. In: Handbook of Physiology II, M. D. Bethesda (ed.), Am. Physiol. Soc., pp. 1179-1236; S. Grillner, 1998. Vertebrate Locomotion - A Lamprey Perspective.
In: Neuronal Mechanisms for Generating Locomotor Activity, O. Kiehn et al. (eds.), New York Academy of Science.
[4] B. Hasslacher & M. W. Tilden. Living Machines. Robotics and Autonomous Systems: The Biology and Technology of Intelligent Autonomous Agents, 1995, L. Steels (ed.), Elsevier; S. Still & M. W. Tilden. Controller for a four legged walking machine. In: Neuromorphic Systems, 1998, L. S. Smith & A. Hamilton (eds.), World Scientific.
[5] D. F. Hoyt & R. C. Taylor. Gait and the energetics of locomotion in horses. Nature, 1981, 292, pp. 239-240.
[6] J. Mercer. Functions of positive and negative type and their connection with the theory of integral equations. Phil. Trans. Roy. Soc. London A, 1909, 209, pp. 415-446.
[7] B. Scholkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola and R. C. Williamson. Estimating the Support of a High-Dimensional Distribution. Technical Report MSR-TR-99-87, Microsoft Research, Redmond, WA, 1999. To appear in Neural Computation.
[8] S. Still. Walking Gait Control for Four-legged Robots. PhD Thesis, ETH Zürich, 2000.
[9] N. H. E. Weste & K. Eshraghian. Principles of CMOS VLSI Design, 1993, Addison Wesley.
[10] R. J. Vanderbei. LOQO User's Manual - Version 3.10. Technical Report SOR-97-08, Princeton University, Statistics and Operations Research, 1997.
[11] D. Yanagihara, M. Udo, I. Kondo and T. Yoshida. Neuroscience Research, 1993, 18, pp. 241-244.
A Mathematical Programming Approach to the Kernel Fisher Algorithm Sebastian Mika*, Gunnar Rätsch*, and Klaus-Robert Müller*+ *GMD FIRST.IDA, Kekulestraße 7, 12489 Berlin, Germany +University of Potsdam, Am Neuen Palais 10, 14469 Potsdam {mika, raetsch, klaus}@first.gmd.de Abstract We investigate a new kernel-based classifier: the Kernel Fisher Discriminant (KFD). A mathematical programming formulation based on the observation that KFD maximizes the average margin permits an interesting modification of the original KFD algorithm, yielding the sparse KFD. We find that both KFD and the proposed sparse KFD can be understood in a unifying probabilistic context. Furthermore, we show connections to Support Vector Machines and Relevance Vector Machines. From this understanding, we are able to outline an interesting kernel-regression technique based upon the KFD algorithm. Simulations support the usefulness of our approach. 1 Introduction Recent years have shown an enormous interest in kernel-based classification algorithms, primarily in Support Vector Machines (SVMs) [2]. The success of SVMs seems to be triggered by (i) their good generalization performance, (ii) the existence of a unique solution, and (iii) the strong theoretical background, structural risk minimization [12], supporting the good empirical results. One of the key ingredients responsible for this success is the use of Mercer kernels, allowing for nonlinear decision surfaces which may even incorporate some prior knowledge about the problem to solve. For our purpose, a Mercer kernel can be defined as a function k : ℝⁿ × ℝⁿ → ℝ for which some (nonlinear) mapping Φ : ℝⁿ → F into a feature space F exists, such that k(x, y) = (Φ(x) · Φ(y)). Clearly, the use of such kernel functions is not limited to SVMs.
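The defining Mercer property, k(x, y) = (Φ(x) · Φ(y)), can be checked explicitly for a simple kernel. The following illustration (mine, not from the paper) does so for the degree-2 polynomial kernel, whose feature map can be written out by hand:

```python
import itertools

# Illustration (not from the paper): for the polynomial kernel
# k(x, y) = (x . y)^2 the feature map Phi is explicit, so the Mercer
# property k(x, y) = <Phi(x), Phi(y)> can be verified numerically.

def k_poly2(x, y):
    return sum(a * b for a, b in zip(x, y)) ** 2

def phi(x):
    # All degree-2 monomials x_i * x_j (ordered pairs), so that
    # <phi(x), phi(y)> = sum_{i,j} x_i x_j y_i y_j = (x . y)^2.
    return [xi * xj for xi, xj in itertools.product(x, repeat=2)]

x, y = [1.0, 2.0, 3.0], [0.5, -1.0, 2.0]
lhs = k_poly2(x, y)
rhs = sum(a * b for a, b in zip(phi(x), phi(y)))
assert abs(lhs - rhs) < 1e-9
```

The Gaussian kernel used later in the paper satisfies the same property, but its feature space is infinite-dimensional, which is exactly why working through k instead of Φ is attractive.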
The interpretation as a dot-product in another space makes it particularly easy to develop new algorithms: take any (usually linear) method and reformulate it using the training samples only in dot-products, which are then replaced by the kernel. Examples thereof, among others, are Kernel-PCA [9] and the Kernel Fisher Discriminant (KFD [4]; see also [8, 1]). In this article we consider algorithmic ideas for KFD. Interestingly, KFD, although exhibiting a similarly good performance as SVMs, has no explicit concept of a margin. This is noteworthy since the margin is often regarded as the explanation for good generalization in SVMs. We will give an alternative formulation of KFD which makes the difference between both techniques explicit and allows a better understanding of the algorithms. Another advantage of the new formulation is that we can derive more efficient algorithms for optimizing KFDs that have, e.g., sparseness properties or can be used for regression. 2 A Review of the Kernel Fisher Discriminant The idea of the KFD is to solve the problem of Fisher's linear discriminant in a kernel feature space F, thereby yielding a nonlinear discriminant in the input space. First we fix some notation. Let {x_i | i = 1, ..., ℓ} be our training sample and y ∈ {−1, 1}^ℓ be the vector of corresponding labels. Furthermore, define 1 ∈ ℝ^ℓ as the vector of all ones, 1_1, 1_2 ∈ ℝ^ℓ as binary (0, 1) vectors corresponding to the class labels, and let I, I_1, and I_2 be appropriate index sets over ℓ and the two classes, respectively (with ℓ_i = |I_i|). In the linear case, Fisher's discriminant is computed by maximizing the coefficient of between- and within-class variance, J(w) = (wᵀ S_B w)/(wᵀ S_W w), where S_B = (m_2 − m_1)(m_2 − m_1)ᵀ and S_W = Σ_{k=1,2} Σ_{i∈I_k} (x_i − m_k)(x_i − m_k)ᵀ, and m_k denotes the sample mean for class k. To solve the problem in a kernel feature space F one needs a formulation which makes use of the training samples only in terms of dot-products.
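The linear case can be sketched directly: the maximizer of J(w) is w ∝ S_W⁻¹(m_2 − m_1). A minimal pure-Python 2-D version with made-up data (not the authors' code):

```python
# Hedged sketch of Fisher's linear discriminant in input space: the direction
# maximizing J(w) = (w^T S_B w)/(w^T S_W w) is w proportional to
# S_W^{-1}(m2 - m1). Restricted to 2-D so the solve can be done by hand.

def mean(points):
    n, d = len(points), len(points[0])
    return [sum(p[i] for p in points) / n for i in range(d)]

def within_scatter(classes):
    d = len(classes[0][0])
    S = [[0.0] * d for _ in range(d)]
    for pts in classes:
        m = mean(pts)
        for p in pts:
            c = [p[i] - m[i] for i in range(d)]
            for i in range(d):
                for j in range(d):
                    S[i][j] += c[i] * c[j]
    return S

def fisher_direction(c1, c2):
    m1, m2 = mean(c1), mean(c2)
    S = within_scatter([c1, c2])
    # Solve S w = m2 - m1 for the 2-D case via Cramer's rule.
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    b = [m2[0] - m1[0], m2[1] - m1[1]]
    return [(b[0] * S[1][1] - b[1] * S[0][1]) / det,
            (S[0][0] * b[1] - S[1][0] * b[0]) / det]
```

Projecting the two classes onto the resulting w separates them with minimal within-class spread; the KFD performs exactly this computation in the kernel feature space.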
One first shows [4] that there exists an expansion of w ∈ F in terms of mapped training patterns, i.e.

w = Σ_{i=1}^{ℓ} α_i Φ(x_i).    (1)

Using some straightforward algebra, the optimization problem for the KFD can then be written as [5]:

J(α) = (αᵀ μ)² / (αᵀ N α) = (αᵀ M α) / (αᵀ N α),    (2)

where μ_i = (1/ℓ_i) K 1_i, N = K Kᵀ − Σ_{i=1,2} ℓ_i μ_i μ_iᵀ, μ = μ_2 − μ_1, M = μ μᵀ, and K_{ij} = (Φ(x_i) · Φ(x_j)) = k(x_i, x_j). The projection of a test point onto the discriminant is computed by (w · Φ(x)) = Σ_{i∈I} α_i k(x_i, x). As the dimension of the feature space is usually much higher than the number of training samples ℓ, some form of regularization is necessary. In [4] it was proposed to add e.g. the identity or the kernel matrix K to N, penalizing ‖α‖² or ‖w‖², respectively (see also [3]). There are several equivalent ways to optimize (2). One could either solve the generalized eigenproblem M α = λ N α, selecting the eigenvector α with maximal eigenvalue λ, or compute α ∝ N⁻¹(μ_2 − μ_1). Another way, which will be detailed in the following, exploits the special structure of problem (2). 3 Casting KFD into a Quadratic Program Although there exist many efficient off-the-shelf eigensolvers or Cholesky packages which could be used to optimize (2), two problems remain: for a large sample size ℓ the matrices N and M become unpleasantly large, and the solutions α are non-sparse (with no obvious way to introduce sparsity in, e.g., the matrix inverse). In the following we show how KFD can be cast as a convex quadratic programming problem. This new formulation will prove helpful in solving the problems mentioned above and makes it much easier to gain a deeper understanding of KFD. As a first step we exploit the facts that the matrix M is only rank one, i.e. αᵀ M α = (αᵀ(μ_2 − μ_1))², and that with α any multiple of α is an optimal solution to (2). Thus we may fix αᵀ(μ_2 − μ_1) to any non-zero value, say 2, and minimize αᵀ N α. This amounts to the following quadratic program:

min_α  αᵀ N α + C P(α)    (3)
subject to:
subject to: $\alpha^\top(\mu_2 - \mu_1) = 2.$   (3a)

The regularization formerly incorporated in $N$ is made explicit here through the operator $P$, where $C$ is a regularization constant. This program still makes use of the rather un-intuitive matrix $N$. This can be avoided by our final reformulation, which can be understood as follows: Fisher's discriminant tries to minimize the variance of the data along the projection whilst maximizing the distance between the average outputs for each class. Considering the argumentation leading to (3), the following quadratic program does exactly this:

$\min_{\alpha, b, \xi} \; \|\xi\|^2 + C\, P(\alpha)$   (4)
subject to: $K\alpha + \mathbf{1}b = y + \xi$   (4a)
$\mathbf{1}_i^\top \xi = 0$ for $i = 1, 2$   (4b)

for $\alpha, \xi \in \mathbb{R}^\ell$, $b \in \mathbb{R}$, and $C \geq 0$. The constraint (4a), which can be read as $(w \cdot \Phi(x_i)) + b = y_i + \xi_i$ for all $i \in I$, pulls the output for each sample to its class label. The term $\|\xi\|^2$ minimizes the variance of the error committed, while the constraints (4b) ensure that the average output for each class is the label, i.e. for $\pm 1$ labels the average distance of the projections is two. The following proposition establishes the link to KFD:

Proposition 1. For given $C \in \mathbb{R}$, any optimal solution $\alpha$ to the optimization problem (3) is also optimal for (4), and vice versa.

The formal, rather straightforward but lengthy, proof of Proposition 1 is omitted here. It shows (i) that the feasible sets of (3) and (4) are identical with respect to $\alpha$ and (ii) that the objective functions coincide. Formulation (4) has a number of appealing properties which we will exploit in the following.

4 A Probabilistic Interpretation

We would like to point out the following connection (which is not specific to the formulation (4) of KFD): the Fisher discriminant is the Bayes optimal classifier for two normal distributions with equal covariance (i.e. KFD is Bayes optimal for two Gaussians in feature space). To see this connection to Gaussians, consider a regression onto the labels of the form $(w \cdot \Phi(x)) + b$, where $w$ is given by (1).
Assuming a Gaussian noise model with variance $\sigma^2$, the likelihood can be written as

$p(y \mid \alpha, \sigma^2) \propto \exp\Big(-\tfrac{1}{2\sigma^2} \sum_i \big((w \cdot \Phi(x_i)) + b - y_i\big)^2\Big) = \exp\Big(-\tfrac{1}{2\sigma^2}\|\xi\|^2\Big).$

Now, assume some prior $p(\alpha \mid C)$ over the weights, with hyper-parameters $C$. Computing the posterior, we would end up with the Relevance Vector Machine (RVM) [11]. An advantage of the RVM approach is that all hyper-parameters $\sigma$ and $C$ are estimated automatically. The drawback, however, is that one has to solve a hard, computationally expensive optimization problem. The following simplifications show how KFD can be seen as an approximation to this probabilistic approach. Assuming the noise variance $\sigma^2$ is known (i.e. dropping all terms depending solely on $\sigma$) and taking the logarithm of the posterior $p(y \mid \alpha, \sigma^2)\, p(\alpha \mid C)$ yields the following optimization problem:

$\min_{\alpha, b} \; \|\xi\|^2 - \log p(\alpha \mid C),$   (5)

subject to the constraint (4a). Interpreting the prior as a regularization operator $P$, introducing an appropriate weighting factor $C$, and adding the two zero-mean constraints (4b) yields the KFD problem (4). The latter are necessary for classification, as the two classes are independently assumed to be zero-mean Gaussians. This probabilistic interpretation has some appealing properties which we outline in the following:

Interpretation of outputs. The probabilistic framework reflects the fact that the outputs produced by KFD can be interpreted as probabilities, thus making it possible to assign a confidence to the final classification. This is in contrast to SVMs, whose outputs cannot directly be seen as probabilities.

Noise models. In the above illustration we assumed a Gaussian noise model and some yet unspecified prior, which was then interpreted as a regularizer. Of course, one is not limited to Gaussian models. E.g., assuming a Laplacian noise model we would get $\|\xi\|_1$ instead of $\|\xi\|_2^2$ in the objective (5) or (4), respectively.
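Under the Gaussian noise model just described, (4) has quadratic loss; with $P(\alpha) = \|\alpha\|^2$ it is an equality-constrained QP, which a single KKT linear system solves. The following sketch is our own construction (toy data, parameter values), not the authors' implementation; it also checks the class-mean constraint (4b) numerically:

```python
import numpy as np

def kfd_qp(K, y, C=1e-3):
    """Solve QP (4) with quadratic loss and regularizer P(alpha)=||alpha||^2:

        min ||xi||^2 + C ||alpha||^2
        s.t. K alpha + 1 b = y + xi,   1_k^T xi = 0 for k = 1, 2,

    via the KKT system of the equality-constrained QP.  z = (alpha, b, xi)."""
    l = len(y)
    n, m = 2 * l + 1, l + 2
    Q = np.zeros((n, n))                 # objective z^T Q z
    Q[:l, :l] = C * np.eye(l)
    Q[l + 1:, l + 1:] = np.eye(l)
    A = np.zeros((m, n))                 # constraints A z = c
    A[:l, :l] = K                        # K alpha
    A[:l, l] = 1.0                       # + 1 b
    A[:l, l + 1:] = -np.eye(l)           # - xi = y
    A[l, l + 1:] = (y == -1)             # 1_1^T xi = 0
    A[l + 1, l + 1:] = (y == +1)         # 1_2^T xi = 0
    c = np.concatenate([y.astype(float), [0.0, 0.0]])
    KKT = np.block([[2 * Q, A.T], [A, np.zeros((m, m))]])
    z = np.linalg.solve(KKT, np.concatenate([np.zeros(n), c]))[:n]
    return z[:l], z[l]                   # alpha, b

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 0.6, (15, 1)), rng.normal(2, 0.6, (15, 1))])
y = np.array([-1] * 15 + [+1] * 15)
K = np.exp(-(X - X.T) ** 2)              # RBF kernel of width 1
alpha, b = kfd_qp(K, y)
out = K @ alpha + b
```

Because (4b) is enforced exactly, the average output over each class equals its label, as claimed in the text.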
Table 1 gives a selection of different noise models and their corresponding loss functions which could be used (cf. Figure 1 for an illustration). All of them still lead to convex linear or quadratic programming problems in the KFD framework.

Table 1: Loss functions for the slack variables $\xi$ and their corresponding density/noise models in a probabilistic framework [10].

noise model      | loss function                                                         | density model
$\varepsilon$-insensitive | $|\xi|_\varepsilon$                                          | $\frac{1}{2(1+\varepsilon)} \exp(-|\xi|_\varepsilon)$
Laplacian        | $|\xi|$                                                               | $\frac{1}{2} \exp(-|\xi|)$
Gaussian         | $\frac{1}{2}\xi^2$                                                    | $\frac{1}{\sqrt{2\pi}} \exp(-\frac{\xi^2}{2})$
Huber's robust   | $\frac{\xi^2}{2\sigma}$ if $|\xi| \leq \sigma$, else $|\xi| - \frac{\sigma}{2}$ | $\propto \exp(-\frac{\xi^2}{2\sigma})$ if $|\xi| \leq \sigma$, else $\propto \exp(\frac{\sigma}{2} - |\xi|)$

Regularizers. Still open in this probabilistic interpretation is the choice of the prior or regularizer $p(\alpha \mid C)$. One choice would be a zero-mean Gaussian, as for the RVM. Assuming again that this Gaussian's variance $C$ is known and a multiple of the identity, this would lead to a regularizer of the form $P(\alpha) = \|\alpha\|^2$. Crucially, choosing a single, fixed variance parameter for all $\alpha$, we would no longer achieve sparsity as in the RVM. But of course any other choice, e.g. from Table 1, is possible. Especially interesting is the choice of a Laplacian prior, which in the optimization procedure corresponds to an $\ell_1$-loss on the $\alpha$'s, i.e. $P(\alpha) = \|\alpha\|_1$. This choice leads to sparse solutions in the KFD, as the $\ell_1$-norm can be seen as an approximation to the $\ell_0$-norm. In the following we call this particular setting sparse KFD (SKFD).

Figure 1: Illustration of Gaussian, Laplacian, Huber's robust and $\varepsilon$-insensitive loss functions (dotted) and corresponding densities (solid).

Regression and connection to SVM. Considering the program (4), it is rather simple to modify the KFD approach for regression. Instead of $\pm 1$ outputs $y$ we now have real-valued $y$'s, and instead of two classes there is only one class left. Thus, we can use KFD for regression as well, by simply dropping the distinction between classes in constraint (4b). The remaining constraint requires the average error to be zero while the variance of the errors is minimized.
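The regression variant just described only changes the constraint set: the targets become real-valued and the two class-mean constraints collapse into a single zero-mean constraint on the errors. A sketch under our own parameter choices (the sine toy problem is borrowed from the experiments section below, but the solver is our construction, not the authors' code):

```python
import numpy as np

def kfd_regression(K, y, C=1e-2):
    """KFD for regression: QP (4) with real-valued targets and the class
    distinction in (4b) dropped, leaving one zero-mean error constraint:

        min ||xi||^2 + C ||alpha||^2
        s.t. K alpha + 1 b = y + xi,   1^T xi = 0.

    Solved via the KKT system of the equality-constrained QP."""
    l = len(y)
    n = 2 * l + 1                        # z = (alpha, b, xi)
    Q = np.zeros((n, n))
    Q[:l, :l] = C * np.eye(l)
    Q[l + 1:, l + 1:] = np.eye(l)
    A = np.zeros((l + 1, n))
    A[:l, :l] = K
    A[:l, l] = 1.0
    A[:l, l + 1:] = -np.eye(l)
    A[l, l + 1:] = 1.0                   # 1^T xi = 0: average error is zero
    c = np.concatenate([y, [0.0]])
    KKT = np.block([[2 * Q, A.T], [A, np.zeros((l + 1, l + 1))]])
    z = np.linalg.solve(KKT, np.concatenate([np.zeros(n), c]))[:n]
    return z[:l], z[l]                   # alpha, b

x = np.linspace(-10, 10, 50)[:, None]
y = np.sin(x).ravel()
K = np.exp(-(x - x.T) ** 2 / 4.0)        # RBF kernel of width c = 4.0
alpha, b = kfd_regression(K, y)
fit = K @ alpha + b
```

The constraint forces the fitted residuals to have exactly zero mean, while the quadratic objective keeps their variance small.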
This also gives a connection to SVM regression (e.g. [12]), where one uses the $\varepsilon$-insensitive loss for $\xi$ (cf. Table 1) and a $K$-regularizer, i.e. $P(\alpha) = \alpha^\top K \alpha = \|w\|^2$. Finally, we can draw the connection to an SVM classifier as well. In SVM classification one maximizes the (smallest) margin, traded off against the complexity controlled by $\|w\|^2$. In contrast, besides parallels in the algorithmic formulation, in KFD there is no explicit concept of a margin. Instead, implicitly the average margin, i.e. the average distance of samples from different classes, is maximized.

Optimization. Besides a more intuitive understanding, the formulation (4) also allows for deriving more efficient algorithms. Using a sparsity regularizer (i.e. SKFD), one could employ chunking techniques during the optimization of (4). However, the problem of selecting a good working set is not solved yet, and contrary to e.g. SVMs, for KFD all samples influence the final solution via the constraints (4a), not just the ones with $\alpha_i \neq 0$. Thus these samples cannot simply be eliminated from the optimization problem. Another interesting option induced by (4) is to use a sparsity regularizer and a linear loss function, e.g. the Laplacian loss (cf. Table 1). This results in a linear program, which we call linear sparse KFD (LSKFD); it can be solved very efficiently by column generation techniques known from mathematical programming. A final possibility to optimize (4) for the standard KFD problem (i.e. quadratic loss and regularizer) is described in [6]. Here one uses a greedy approximation scheme which iteratively constructs a (sparse) solution to the full problem. Such an approach is straightforward to implement and much faster than solving a quadratic program, provided that the number of non-zero $\alpha$'s necessary to get a good approximation to the full solution is small.
5 Experiments

In this section we present experimental results aiming (i) to show that KFD and some of the variants proposed here are capable of producing state-of-the-art results, and (ii) to compare the influence of different settings for the regularizer $P(\alpha)$ and the loss function applied to $\xi$ in kernel-based classifiers.

The output distribution. In an initial experiment we compare the output distributions generated by an SVM and the KFD (cf. Figure 2). By maximizing the smallest margin and using linear slack variables for patterns which do not achieve a reasonable margin, the SVM produces a training output sharply peaked around $\pm 1$ with Laplacian tails inside the margin area (the inside margin area is the interval $[-1, 1]$, the outside area its complement). In contrast, KFD produces normal distributions which have a small variance along the discriminating direction. Comparing the distributions on the training set to those on the test set, there is almost no difference for KFD. In this sense the direction found on the training data is consistent with the test data. For the SVM the output distribution on the test set is significantly different. In the example given in Figure 2 the KFD performed slightly better than the SVM (1.5% vs. 1.7%; for both, the best parameters found by 5-fold cross validation were used), a fact that is surprising if one looks only at the training distribution (which is perfectly separated for the SVM but has some overlap for KFD).

Figure 2: Comparison of output distributions on training and test set for SVM and KFD for optimal parameters on the ringnorm dataset (averaged over 100 different partitions). It is clearly observable that the training and test set distributions for KFD are almost identical, while they are considerably different for the SVM.

Performance. To evaluate the performance of the various KFD approaches on real data sets we performed an extensive comparison to SVMs.¹
The results in Table 2 show the average test error and the standard deviation of the averages' estimation over 100 runs with different realizations of the datasets. To estimate the necessary parameters, we ran 5-fold cross validation on the first five realizations of the training sets and took the model parameters to be the median over the five estimates (see [7] for details of the experimental setup). From Table 2 it can be seen that both SVM and the KFD variants on average perform equally well. In terms of (4), KFD denotes the formulation with quadratic regularizer, SKFD the one with $\ell_1$-regularizer, and LSKFD the one with $\ell_1$-regularizer and $\ell_1$-loss on $\xi$. The comparable performance might be seen as an indicator that maximizing the smallest margin or the average margin does not make a big difference on the data sets studied. The same seems to be true for using different regularizers and loss functions. Noteworthy is the significantly higher degree of sparsity for the sparse KFD variants.

¹Thanks to M. Zwitter and M. Soklic for the breast cancer data. All data sets used in the experiments can be obtained via http://www.first.gmd.de/~raetsch/.

Regression. Just to show that the proposed KFD regression works in principle, we conducted a toy experiment on the sine function (cf. Figure 3). In terms of the number of support vectors we obtain similarly sparse results as with RVMs [11], i.e. a much smaller number of non-zero coefficients than in SVM regression. A thorough evaluation is currently being carried out.

Figure 3: Illustration of KFD regression. The left panel shows a fit to the noise-free sine function sampled at 100 equally spaced points, the right panel one with Gaussian noise of std. dev. 0.2 added. In both cases we used an RBF kernel $\exp(-\|x - y\|^2/c)$ of width $c = 4.0$ and $c = 3.0$, respectively. The regularization was $C = 0.01$ and $C = 0.1$ (small dots: training samples, circled dots: SVs).
            SVM                KFD           SKFD               LSKFD
Banana      11.5±0.07 (78%)    10.8±0.05     11.2±0.48 (86%)    10.6±0.04 (92%)
B.Cancer    26.0±0.47 (42%)    25.8±0.46     25.2±0.44 (88%)    25.8±0.47 (88%)
Diabetes    23.5±0.17 (57%)    23.2±0.16     23.1±0.18 (97%)    23.6±0.18 (97%)
German      23.6±0.21 (58%)    23.7±0.22     23.6±0.23 (96%)    24.1±0.23 (98%)
Heart       16.0±0.33 (51%)    16.1±0.34     16.4±0.31 (88%)    16.0±0.36 (96%)
Ringnorm     1.7±0.01 (62%)     1.5±0.01      1.6±0.01 (85%)     1.5±0.01 (94%)
F.Sonar     32.4±0.18 (9%)     33.2±0.17     33.4±0.17 (67%)    34.4±0.23 (99%)
Thyroid      4.8±0.22 (79%)     4.2±0.21      4.3±0.18 (88%)     4.7±0.22 (89%)
Titanic     22.4±0.10 (10%)    23.2±0.20     22.6±0.17 (8%)     22.5±0.20 (95%)
Waveform     9.9±0.04 (60%)     9.9±0.04     10.1±0.04 (81%)    10.2±0.04 (96%)

Table 2: Comparison between KFD, sparse KFD (SKFD), sparse KFD with linear loss on $\xi$ (LSKFD), and SVMs (see text). All experiments were carried out with RBF kernels $\exp(-\|x - y\|^2/c)$. Best result in bold face, second best in italics. The numbers in brackets denote the fraction of expansion coefficients which were zero.

6 Conclusion and Outlook

In this work we showed how KFD can be reformulated as a mathematical programming problem. This allows a better understanding of KFD and interesting extensions: First, a probabilistic interpretation gives new insights into connections to RVM, SVM and regularization properties. Second, using a Laplacian prior, i.e. an $\ell_1$ regularizer, yields the sparse algorithm SKFD. Third, the more general modeling permits a very natural KFD algorithm for regression. Finally, due to the quadratic programming formulation, we can use tricks known from the SVM literature, like chunking or active set methods, for solving the optimization problem. However, the optimal choice of a working set is not completely resolved and is still an issue of ongoing research. In this sense sparse KFD inherits some of the most appealing properties of both SVM and RVM: a unique, mathematical programming solution from SVM, and higher sparsity together with interpretable outputs from RVM.
Our experimental studies show a competitive performance of our new KFD algorithms compared to SVMs. This indicates that neither the margin nor sparsity nor a specific output distribution alone seems to be responsible for the good performance of kernel machines. Further theoretical and experimental research is therefore needed to learn more about this interesting question. Our future research will also investigate the role of output distributions and their difference between training and test set.

Acknowledgments. This work was partially supported by grants of the DFG (JA 379/7-1, 9-1). Thanks to K. Tsuda for helpful comments and discussions.

References

[1] G. Baudat and F. Anouar. Generalized discriminant analysis using a kernel approach. Neural Computation, 12(10):2385-2404, 2000.
[2] B.E. Boser, I.M. Guyon, and V.N. Vapnik. A training algorithm for optimal margin classifiers. In D. Haussler, editor, Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, pages 144-152, 1992.
[3] J.H. Friedman. Regularized discriminant analysis. Journal of the American Statistical Association, 84(405):165-175, 1989.
[4] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller. Fisher discriminant analysis with kernels. In Y.-H. Hu, J. Larsen, E. Wilson, and S. Douglas, editors, Neural Networks for Signal Processing IX, pages 41-48. IEEE, 1999.
[5] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, A.J. Smola, and K.-R. Müller. Invariant feature extraction and classification in kernel spaces. In S.A. Solla, T.K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 526-532. MIT Press, 2000.
[6] S. Mika, A.J. Smola, and B. Schölkopf. An improved training algorithm for kernel fisher discriminants. In Proceedings AISTATS 2001. Morgan Kaufmann, 2001. To appear.
[7] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Machine Learning, 42(3):287-320, March 2001.
Also NeuroCOLT Technical Report NC-TR-1998-021.
[8] V. Roth and V. Steinhage. Nonlinear discriminant analysis using kernel functions. In S.A. Solla, T.K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 568-574. MIT Press, 2000.
[9] B. Schölkopf, A.J. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.
[10] A.J. Smola. Learning with Kernels. PhD thesis, Technische Universität Berlin, 1998.
[11] M.E. Tipping. The relevance vector machine. In S.A. Solla, T.K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 652-658. MIT Press, 2000.
[12] V.N. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, New York, 1995.
Using Free Energies to Represent Q-values in a Multiagent Reinforcement Learning Task

Brian Sallans, Department of Computer Science, University of Toronto, Toronto M5S 2Z9, Canada, sallans@cs.toronto.edu
Geoffrey E. Hinton, Gatsby Computational Neuroscience Unit, University College London, London WC1N 3AR, U.K., hinton@gatsby.ucl.ac.uk

Abstract

The problem of reinforcement learning in large factored Markov decision processes is explored. The Q-value of a state-action pair is approximated by the free energy of a product of experts network. Network parameters are learned on-line using a modified SARSA algorithm which minimizes the inconsistency of the Q-values of consecutive state-action pairs. Actions are chosen based on the current value estimates by fixing the current state and sampling actions from the network using Gibbs sampling. The algorithm is tested on a co-operative multi-agent task. The product of experts model is found to perform comparably to table-based Q-learning for small instances of the task, and continues to perform well when the problem becomes too large for a table-based representation.

1 Introduction

Online reinforcement learning (RL) algorithms try to find a policy which maximizes the expected time-discounted reward provided by the environment. They do this by performing sample backups to learn a value function over states or state-action pairs [1]. If the decision problem is Markov in the observed states, then the optimal value function over state-action pairs (the Q-function) yields all of the information required to find the optimal policy for the decision problem. For example, when the Q-function is represented as a table, the optimal action for a given state can be found simply by searching the row of the table corresponding to that state.
1.1 Factored Markov Decision Processes

In many cases the dimensionality of the problem makes a table representation impractical, so a more compact representation that makes use of the structure inherent in the problem is required. In a co-operative multi-agent system, for example, it is natural to represent both the state and action as sets of variables (one for each agent). We expect that the mapping from the combined states of all the agents to the combined actions of all the agents is not arbitrary: given an individual agent's state, that agent's action might be largely independent of the other agents' exact states and actions, at least for some regions of the combined state space. We expect that a factored representation of the Q-value function will be appropriate for two reasons: the original representation of the combined states and combined actions is factored, and the ways in which the optimal actions of one agent depend on the states and actions of other agents might be well captured by a small number of "hidden" factors rather than the exponential number required to express arbitrary mappings.

1.2 Actor-Critic Architectures

If a non-linear function approximator is used to model the Q-function, then it is difficult and time-consuming to extract the policy directly from the Q-function, because a non-linear optimization must be solved for each action choice. One solution, called an actor-critic architecture, is to use a separate function approximator to model the policy (i.e. to approximate the non-linear optimization) [2, 3]. This has the advantage of being fast, and allows us to explicitly learn a stochastic policy, which can be advantageous if the underlying problem is not strictly Markov [4]. However, a specific parameterized family of policies must be chosen a priori.
Instead we present a method where the Q-value of a state-action pair is represented (up to an additive constant) by the negative free energy, $-F$, of the state-action pair under a non-causal graphical model. The graphical model is a product of experts [5] which has two very useful properties: given a state-action pair, the exact free energy is easily computed, and the derivative of this free energy w.r.t. each parameter of the network is also very simple. The model is trained to minimize the inconsistency between the free energy of a state-action pair and the discounted free energy of the next state-action pair, taking into account the immediate reinforcement. After training, a good action for a given state can be found by clamping the state and drawing a sample of the action variables using Gibbs sampling [6]. Although finding optimal actions would still be difficult for large problems, selecting an action with a probability that is approximately proportional to $\exp(-F)$ can be done with a modest number of iterations of Gibbs sampling.

1.3 Markov Decision Processes

We will concentrate on finite, factored, Markov decision processes (factored MDPs), in which each state and action is represented as a set of discrete variables. Formally, a factored MDP consists of the set $\{\{S_\alpha\}_{\alpha=1}^{M}, \{A_\beta\}_{\beta=1}^{N}, \{s_\alpha^0\}_{\alpha=1}^{M}, P, P_r\}$, where: $S_\alpha$ is the set of possible values for state variable $\alpha$; $A_\beta$ is the set of possible values for action variable $\beta$; $s_\alpha^0$ is the initial value for state variable $\alpha$; $P$ is a transition distribution $P(s^{t+1} \mid s^t, a^t)$; and $P_r$ is a reward distribution $P(r^t \mid s^t, a^t, s^{t+1})$. A state is an $M$-tuple and an action is an $N$-tuple. The goal of solving an MDP is to find a policy, which is a sequence of (possibly stochastic) mappings $\pi^t : S_1 \times S_2 \times \cdots \times S_M \rightarrow A_1 \times A_2 \times \cdots \times A_N$ which maximize the total expected reward received over the course of the task:

$\langle R^t \rangle_{\pi} = \langle r^t + \gamma r^{t+1} + \cdots + \gamma^{T-t} r^T \rangle_{\pi},$   (1)

where $\gamma$ is a discount factor and $\langle \cdot \rangle_{\pi}$ denotes the expectation taken with respect to policy $\pi$. We will focus on the case when the policy is stationary: $\pi^t$ is identical for all $t$.

2 Approximating Q-values with a Product of Experts

As the number of state and action variables increases, a table representation quickly becomes intractable. We represent the value of a state and action as the negative free energy (up to a constant) under a product of experts model (see Figure 1(a)). With a product of experts, the probability assigned to a state-action pair $(s, a)$ is just the (normalized) product of the probabilities assigned to $(s, a)$ under each of the individual experts:

$p(s, a \mid \theta_1, \ldots, \theta_K) = \frac{\prod_{k=1}^{K} p_k(s, a \mid \theta_k)}{\sum_{(s', a')} \prod_k p_k(s', a' \mid \theta_k)},$   (2)

where $\{\theta_1, \ldots, \theta_K\}$ are the parameters of the $K$ experts and $(s', a')$ indexes all possible state-action pairs.

Figure 1: a) The Boltzmann product of experts, with hidden units connected to state units and action units. The estimated Q-value (up to an additive constant) of a setting of the state and action units is found by holding these units fixed and computing the free energy of the network. Actions are selected by alternating between updating all of the hidden units in parallel and updating all of the action units in parallel, with the state units held constant. b) A multinomial state or action variable is represented by a set of "one-of-n" binary units in which exactly one is on.

In the following, we will assume that there are an equal number of state and action variables (i.e. $M = N$), and that each state or action variable has the same arity ($\forall \alpha, \beta$: $|S_\alpha| = |S_\beta|$ and $|A_\alpha| = |A_\beta|$). These assumptions are appropriate, for example, when there is one state and action variable for each agent in a multi-agent task. Extension to the general case is straightforward. In the following, $\beta$ will index agents.
Many kinds of "experts" could be used while still retaining the useful properties of the PoE. We will focus on the case where each expert is a single binary sigmoid unit, because it is particularly suited to the discrete tasks we consider here. Each agent's (multinomial) state or action is represented using a "one-of-N" set of binary units which are constrained so that exactly one of them is on. The product of experts is then a bipartite "Restricted Boltzmann Machine" [5]. We use $s_{\beta i}$ to denote agent $\beta$'s $i$th state and $a_{\beta j}$ to denote its $j$th action. We will denote the binary latent variables of the "experts" by $h_k$ (see Figure 1(b)). For a state $s = \{s_{\beta i}\}$ and an action $a = \{a_{\beta j}\}$, the free energy is given by the expected energy under the posterior distribution of the hidden units, minus the entropy of this posterior distribution. This is simple to compute because the hidden units are independent in the posterior distribution:

$F(s, a) = -\sum_{k=1}^{K} \hat{h}_k \sum_{\beta=1}^{M} \Big( \sum_{i=1}^{|S|} w_{\beta i k}\, s_{\beta i} + \sum_{j=1}^{|A|} u_{\beta j k}\, a_{\beta j} \Big) - \sum_{\beta=1}^{M} \Big( \sum_{i=1}^{|S|} b_{\beta i}\, s_{\beta i} + \sum_{j=1}^{|A|} b_{\beta j}\, a_{\beta j} \Big) - \sum_{k=1}^{K} b_k \hat{h}_k + \sum_{k=1}^{K} \Big[ \hat{h}_k \log \hat{h}_k + (1 - \hat{h}_k) \log(1 - \hat{h}_k) \Big] - C_F$   (3)

where $w_{\beta i k}$ is the weight from the $k$th expert to binary state variable $s_{\beta i}$; $u_{\beta j k}$ is the weight from the $k$th expert to binary action variable $a_{\beta j}$; $b_k$, $b_{\beta i}$ and $b_{\beta j}$ are biases; and

$\hat{h}_k = \sigma\Big( \sum_{\beta=1}^{M} \Big( \sum_{i=1}^{|S|} w_{\beta i k}\, s_{\beta i} + \sum_{j=1}^{|A|} u_{\beta j k}\, a_{\beta j} \Big) + b_k \Big)$   (4)

is the expected value of each expert given the data, where $\sigma(x) = 1/(1 + e^{-x})$ denotes the logistic function. $C_F$ is an additive constant equal to the log of the partition function. The first two terms of (3) correspond to an unnormalized negative log-likelihood, and the third to the negative entropy of the distribution over the hidden units given the data. The free energy can be computed tractably because inference is tractable in a product of experts: under the product model each expert is independent of the others given the data. We can efficiently compute the exact free energy of a state and action under the product model, up to an additive constant.
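Because the hidden units are independent given the data, the free energy (3) also has the standard RBM closed form $F = -b_s \cdot s - b_a \cdot a - \sum_k \log(1 + e^{x_k})$, where $x_k$ is the total input to expert $k$. A sketch for a single agent; the shapes and random parameters are our own toy choices:

```python
import numpy as np

def free_energy(s, a, W, U, bs, ba, bh):
    """F(s, a) for the bipartite RBM / product of sigmoid experts (eq. 3),
    up to the constant C_F.  The expected-energy-minus-entropy expression
    collapses to F = -bs.s - ba.a - sum_k log(1 + exp(x_k)),
    with x_k = W_k.s + U_k.a + b_k the total input to expert k."""
    x = W.T @ s + U.T @ a + bh
    return -(bs @ s + ba @ a + np.logaddexp(0.0, x).sum())

def goodness(s, a, params):
    """Approximates Q(s, a) up to the additive constant C_F."""
    return -free_energy(s, a, *params)

# tiny random model; one-of-N encoded state and action for one agent
rng = np.random.default_rng(3)
n_s, n_a, n_h = 6, 4, 3
params = (rng.normal(0, 0.1, (n_s, n_h)), rng.normal(0, 0.1, (n_a, n_h)),
          rng.normal(0, 0.1, n_s), rng.normal(0, 0.1, n_a),
          rng.normal(0, 0.1, n_h))
s, a = np.eye(n_s)[2], np.eye(n_a)[1]
q = goodness(s, a, params)
```

The closed form and the expected-energy-minus-entropy form of (3) agree term by term, which is easy to check numerically.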
The Q-function will be approximated by the negative free energy (or goodness), without the constant:

$Q(s, a) \approx -F(s, a) + C_F$   (5)

2.1 Learning the Parameters

The parameters of the model must be adjusted so that the goodness of a state-action pair under the product model approximates its actual Q-value. This is done with a modified SARSA learning rule designed to minimize the Bellman error [7, 8]. If we consider a delta-rule update where the target for input $(s^t, a^t)$ is $r^t + \gamma Q(s^{t+1}, a^{t+1})$, then (for example) the update for $w_{\beta i k}$ is given by:

$\Delta w_{\beta i k} \propto \big( r^t + \gamma Q(s^{t+1}, a^{t+1}) - Q(s^t, a^t) \big) \frac{\partial Q(s^t, a^t)}{\partial w_{\beta i k}}$   (6)

$\frac{\partial Q(s^t, a^t)}{\partial w_{\beta i k}} = s_{\beta i}^t\, \hat{h}_k$   (7)

The other weights and biases are updated similarly. Although there is no proof of convergence for this learning rule, it works well in practice, even though it ignores the effect of changes in $w_{\beta i k}$ on $Q(s^{t+1}, a^{t+1})$.

2.2 Sampling Actions

Given a trained network and the current state $s^t$, we need to generate actions according to their goodness. We would like to select actions according to a Boltzmann exploration scheme, in which the probability of selecting an action is proportional to $e^{Q/T}$. This selection scheme has the desirable property that it optimizes the trade-off between the expected payoff, $Q$, and the entropy of the selection distribution, where $T$ is the relative importance of exploration versus exploitation. Fortunately, the additive constant $C_F$ does not need to be known in order to select actions in this way. It is sufficient to do alternating Gibbs sampling. We start with an arbitrary initial action represented on the action units. Holding the state units fixed, we update all of the hidden units in parallel, so that we get a sample from the posterior distribution over the hidden units given the state and the action. Then we update all of the action units in parallel, so that we get a sample from the posterior distribution over actions given the states of the hidden units.
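The sampling procedure just described (clamp the state, then alternate between sampling all hidden units in parallel and resampling each one-of-N action block with a softmax) can be sketched as follows. The single-agent setting and toy parameters are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_action(s, W, U, ba, bh, T=1.0, n_gibbs=50):
    """Approximate Boltzmann action selection (P(a) ~ exp(Q/T)) by
    alternating Gibbs sampling with the state units held fixed.
    One agent with n_a mutually exclusive actions is shown; several
    agents would each get their own softmax-updated one-of-N block."""
    n_a, n_h = U.shape
    a = np.eye(n_a)[rng.integers(n_a)]            # arbitrary initial action
    for _ in range(n_gibbs):
        # sample all hidden units in parallel: h_k ~ Bernoulli(sigma(x_k / T))
        x = (W.T @ s + U.T @ a + bh) / T
        h = (rng.random(n_h) < 1.0 / (1.0 + np.exp(-x))).astype(float)
        # resample the action block with a softmax over its n_a settings
        logits = (U @ h + ba) / T
        p = np.exp(logits - logits.max())
        a = np.eye(n_a)[rng.choice(n_a, p=p / p.sum())]
    return a

# toy model in which the clamped state strongly favours action 0
W = np.array([[6.0, 6.0]])
U = np.array([[3.0, 3.0], [-3.0, -3.0], [-3.0, -3.0], [-3.0, -3.0]])
a = sample_action(np.array([1.0]), W, U, np.zeros(4), np.zeros(2))
```

With the toy parameters above, both experts are almost always on and action 0 gets nearly all of the softmax mass, so repeated calls return it far more often than the alternatives.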
When updating the states of the action units, we use a "softmax" to enforce the one-of-N constraint within a set of binary units that represent mutually exclusive actions of the same agent. When the alternating Gibbs sampling reaches equilibrium, it draws unbiased samples of actions according to their Q-value. For the networks we used, 50 Gibbs iterations appeared to be sufficient to come close to the equilibrium distribution.

3 Experimental Results

To test the algorithm we introduce a co-operative multi-agent task in which there are offensive players trying to reach an end-zone and defensive players trying to block them (see Figure 2).

Figure 2: An example of the "blocker" task, showing the end-zone, the blockers, and the agents. Agents must get past the blockers to the end-zone. The blockers are pre-programmed with a strategy to stop them, but if the agents co-operate the blockers cannot stop them all simultaneously.

The task is co-operative: as long as one agent reaches the end-zone, the "team" is rewarded. The team receives a reward of +1 when an agent reaches the end-zone, and a reward of -1 otherwise. The blockers are pre-programmed with a fixed blocking strategy. Each agent occupies one square on the grid, and each blocker occupies three horizontally adjacent squares. An agent cannot move into a square occupied by a blocker or another agent. The task has non-wrap-around edge conditions on the east, west and south sides of the field, and the blockers and agents can move north, south, east or west. A product of experts (PoE) network with 4 hidden units was trained on a 5 x 4 blocker task with two agents and one blocker. The combined state consisted of three position variables (two agents and one blocker) which could take on integer values {1, ..., 20}. The combined action consisted of two action variables taking on values from {1, ..., 4}.
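Training in these experiments interleaves action selection with the modified SARSA update of eq. (6). A toy sketch of one such update (visible biases dropped for brevity, so the goodness reduces to the softplus sum over experts; shapes and values are our own, not the blocker-task code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def q_and_grads(s, a, W, U, bh):
    """Goodness -F(s, a) (visible biases dropped) and its gradients:
    d(-F)/dw_ik = s_i * h_k, d(-F)/du_jk = a_j * h_k, d(-F)/db_k = h_k."""
    x = W.T @ s + U.T @ a + bh
    h = sigmoid(x)                        # expected expert activations h_k
    q = np.logaddexp(0.0, x).sum()        # sum_k log(1 + exp(x_k))
    return q, np.outer(s, h), np.outer(a, h), h

def sarsa_step(trans, W, U, bh, gamma=0.9, lr=0.01):
    """One modified-SARSA update (eq. 6): nudge Q(s, a) toward the target
    r + gamma * Q(s', a'), treating the target as a constant."""
    s, a, r, s2, a2 = trans
    q, dW, dU, dbh = q_and_grads(s, a, W, U, bh)
    q2 = q_and_grads(s2, a2, W, U, bh)[0]
    td = (r + gamma * q2) - q             # Bellman error
    W += lr * td * dW                     # in-place parameter updates
    U += lr * td * dU
    bh += lr * td * dbh
    return td

# one-of-N encoded toy transition with reward 5: the update raises Q(s, a)
rng = np.random.default_rng(6)
W, U = rng.normal(0, 0.1, (3, 4)), rng.normal(0, 0.1, (2, 4))
bh = np.zeros(4)
s, a, s2, a2 = np.eye(3)[0], np.eye(2)[0], np.eye(3)[1], np.eye(2)[1]
q0 = q_and_grads(s, a, W, U, bh)[0]
td = sarsa_step((s, a, 5.0, s2, a2), W, U, bh)
q1 = q_and_grads(s, a, W, U, bh)[0]
```

Since the gradient is just the outer product of the clamped units with the expected expert activations, a positive Bellman error strictly increases the goodness of the visited pair.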
The network was run twice, once for 60,000 combined actions and once for 400,000 combined actions, with a learning rate going from 0.1 to 0.01 linearly and a temperature going from 1.0 to 0.01 exponentially over the course of training. Each trial was terminated after either the end-zone was reached or 20 combined actions were taken, whichever occurred first. Each trial was initialized with the blocker placed randomly in the top row and the agents placed randomly in the bottom row. The same learning rate and temperature schedule were used to train a Q-learner with a table containing 128,000 elements (20^3 x 4^2), except that the Q-learner was allowed to train for 1 million combined actions. After training, each policy was run for 10,000 steps, and all rewards were totaled. The two algorithms were also compared to a hand-coded policy, where the agents first move to opposite sides of the field and then move to the end-zone. In this case, all of the algorithms performed comparably, with the PoE network performing well even for a short training time. A PoE network with 16 hidden units was trained on a 4 x 7 blockers task with three agents and two blockers. Again, the input consisted of position variables for each blocker and agent, and action variables for each agent. The network was trained for 400,000 combined actions, with a learning rate from 0.01 to 0.001 and the same temperature schedule as in the previous task. Each trial was terminated after either the end-zone was reached or 40 steps were taken, whichever occurred first. After training, the resultant policy was run for 10,000 steps and the rewards received were totaled. As the table representation would have over a billion elements (28^5 x 4^3), a table-based Q-learner could not be trained for comparison. The hand-coded policy moved agents 1, 2 and 3 to the left, middle and right columns respectively, and then moved all agents towards the end-zone. The PoE performed comparably to this hand-coded policy.
The results for all experiments are summarized in Table 1.

Table 1: Experimental Results

Algorithm                                              | Reward
Random policy (5 x 4, 2 agents, 1 blocker)             | -9986
Hand-coded (5 x 4, 2 agents, 1 blocker)                | -6782
Q-learning (5 x 4, 2 agents, 1 blocker, 1000K steps)   | -6904
PoE (5 x 4, 2 agents, 1 blocker, 60K steps)            | -7303
PoE (5 x 4, 2 agents, 1 blocker, 400K steps)           | -6738
Random policy (4 x 7, 3 agents, 2 blockers)            | -9486
Hand-coded (4 x 7, 3 agents, 2 blockers)               | -7074
PoE (4 x 7, 3 agents, 2 blockers, 400K steps)          | -7631

4 Discussion

Each hidden unit in the product model implements a probabilistic constraint that captures one aspect of the relationship between combined states and combined actions in a good policy. In practice the hidden units tend to represent particular strategies that are relevant in particular parts of the combined state space. This suggests that the hidden units could be used for hierarchical or temporal learning. A reinforcement learner could, for example, learn the dynamics between hidden unit values (useful for POMDPs) and the rewards associated with hidden unit activations. Because the PoE network implicitly represents a joint probability distribution over state-action pairs, it can be queried in ways that are not normally possible for an actor network. Given any subset of state and action variables, the remainder can be sampled from the network using Gibbs sampling. This makes it easy to answer questions of the form: "How should agent 3 behave given fixed actions for agents 1 and 2?" or "I can see some of the state variables but not others. What values would I most like to see for the others?". Further, because there is an efficient unsupervised learning algorithm for PoE networks, an agent could improve its policy by watching another agent's actions and making them more probable under its own model. There are a number of related works, both in the fields of reinforcement learning and unsupervised learning. The SARSA algorithm is from [7, 8].
A delta-rule update similar to ours was explored by [9] for POMDPs and Q-learning. Factored MDPs and function approximators have a long history in the adaptive control and RL literature (see for example [10]). Our method is also closely related to actor-critic methods [2, 3]. Normally with an actor-critic method, the actor network can be viewed as a biased scheme for selecting actions according to the value assigned by the critic. The selection is biased by the choice of parameterization. Our method of action selection is unbiased (if the Markov chain is allowed to converge). Further, the resultant policy can potentially be much more complicated than a typical parameterized actor network would allow. This is exactly the tradeoff explored in the graphical models literature between the use of Monte Carlo inference [11] and variational approximations [12]. Our algorithm is also related to probability matching [13], in which good actions are made more probable under the model, and the temperature at which the probability is computed is slowly reduced over time in order to move from exploration to exploitation and avoid local minima. Unlike our algorithm, the probability matching algorithm used a parameterized distribution which was maximized using gradient descent, and it did not address temporal credit assignment.

5 Conclusions

We have shown that a product of experts network can be used to learn the values of state-action pairs (including temporal credit assignment) when both the states and actions have a factored representation. An unbiased sample of actions can then be recovered with Gibbs sampling, and 50 iterations appear to be sufficient. The network performs as well as a table-based Q-learner for small tasks, and continues to perform well when the task becomes too large for a table-based representation.

Acknowledgments

We thank Peter Dayan, Zoubin Ghahramani and Andy Brown for helpful discussions.
This research was funded by NSERC Canada and the Gatsby Charitable Foundation.

References

[1] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[2] A. G. Barto, R. S. Sutton, and C. W. Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man and Cybernetics, 13:835-846, 1983.
[3] R. S. Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proc. International Conference on Machine Learning, 1990.
[4] Tommi Jaakkola, Satinder P. Singh, and Michael I. Jordan. Reinforcement learning algorithm for partially observable Markov decision problems. In Gerald Tesauro, David S. Touretzky, and Todd K. Leen, editors, Advances in Neural Information Processing Systems, volume 7, pages 345-352. The MIT Press, Cambridge, 1995.
[5] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Technical Report GCNU TR 2000-004, Gatsby Computational Neuroscience Unit, UCL, 2000.
[6] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721-741, 1984.
[7] G. A. Rummery and M. Niranjan. On-line Q-learning using connectionist systems. Technical Report CUED/F-INFENG/TR 166, Engineering Department, Cambridge University, 1994.
[8] R. S. Sutton. Generalization in reinforcement learning: Successful examples using sparse coarse coding. In Touretzky et al. [14], pages 1038-1044.
[9] M. L. Littman, A. R. Cassandra, and L. P. Kaelbling. Learning policies for partially observable environments: Scaling up. In Proc. International Conference on Machine Learning, 1995.
[10] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, MA, 1996.
[11] R. M. Neal. Connectionist learning of belief networks. Artificial Intelligence, 56:71-113, 1992.
[12] T. S. Jaakkola. Variational Methods for Inference and Estimation in Graphical Models. Ph.D. thesis, Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, 1997.
[13] Philip N. Sabes and Michael I. Jordan. Reinforcement learning by probability matching. In Touretzky et al. [14], pages 1080-1086.
[14] David S. Touretzky, Michael C. Mozer, and Michael E. Hasselmo, editors. Advances in Neural Information Processing Systems, volume 8. The MIT Press, Cambridge, 1996.
2000
Who Does What? A Novel Algorithm to Determine Function Localization

Ranit Aharonov-Barki
Interdisciplinary Center for Neural Computation, The Hebrew University, Jerusalem 91904, Israel
ranit@alice.nc.huji.ac.il

Isaac Meilijson and Eytan Ruppin
School of Mathematical Sciences, Tel-Aviv University, Tel-Aviv, Israel
isaco@math.tau.ac.il, ruppin@math.tau.ac.il

Abstract

We introduce a novel algorithm, termed PPA (Performance Prediction Algorithm), that quantitatively measures the contributions of elements of a neural system to the tasks it performs. The algorithm identifies the neurons or areas which participate in a cognitive or behavioral task, given data about performance decrease in a small set of lesions. It also allows the accurate prediction of performances due to multi-element lesions. The effectiveness of the new algorithm is demonstrated in two models of recurrent neural networks with complex interactions among the elements. The algorithm is scalable and applicable to the analysis of large neural networks. Given the recent advances in reversible inactivation techniques, it has the potential to significantly contribute to the understanding of the organization of biological nervous systems, and to shed light on the long-lasting debate about local versus distributed computation in the brain.

1 Introduction

Even simple nervous systems are capable of performing multiple and unrelated tasks, often in parallel. Each task recruits some elements of the system (be it single neurons or cortical areas), and often the same element participates in several tasks. This poses a difficult challenge when one attempts to identify the roles of the network elements, and to assess their contributions to the different tasks.
Assessing the importance of single neurons or cortical areas to specific tasks is usually achieved either by assessing the deficit in performance after a lesion of a specific area, or by recording the activity during behavior, assuming that areas which deviate from baseline activity are more important for the task performed. These classical methods suffer from two fundamental flaws. First, they do not take into account the probable case that there are complex interactions among elements in the system; e.g., if two neurons have a high degree of redundancy, lesioning either one alone will not reveal its influence. Second, they are qualitative measures, lacking quantitative predictions. Moreover, the very nature of the contribution of a neural element is quite elusive and ill-defined. In this paper we propose both a rigorous, operative definition of a neuron's contribution and a novel algorithm to measure it. Identifying the contributions of elements of a system to varying tasks is often used as a basis for claims concerning the degree of distribution of computation in that system (e.g. [1]). The distributed representation approach hypothesizes that computation emerges from the interaction between many simple elements, and is supported by evidence that many elements are important in a given task [2, 3, 4]. The local representation hypothesis suggests that activity in single neurons represents specific concepts (the grandmother cell notion) or performs specific computations (see [5]). This question of distributed versus localized computation in nervous systems is fundamental and has attracted ample attention. However, there seems to be a lack of a unifying definition for these terms [5]. The ability of the new algorithm suggested here to quantify the contribution of elements to tasks allows us to deduce both the distribution of the different tasks in the network and the degree of specialization of each neuron.
We applied the Performance Prediction Algorithm (PPA) to two models of recurrent neural networks: the first is a network hand-crafted to exhibit redundancy, feedback and modulatory effects; the second consists of evolved neurocontrollers for behaving autonomous agents [6]. In both cases the algorithm results in measures which are highly consistent with what is qualitatively known a priori about the models. The fact that these are recurrent networks, and not simple feed-forward ones, suggests that the algorithm can be used in many classes of neural systems which pose a difficult challenge for existing analysis tools. Moreover, the proposed algorithm is scalable and applicable to the analysis of large neural networks. It can thus make a major contribution to studying the organization of tasks in biological nervous systems as well as to the long-debated issue of local versus distributed computation in the brain.

2 Indices of Contribution, Localization and Specialization

2.1 The Contribution Matrix

Assume a network (either natural or artificial) of N neurons performing a set of P different functional tasks. For any given task, we would like to find the contribution vector c = (c_1, ..., c_N), where c_i is the contribution of neuron i to the task in question. We suggest a rigorous and operative definition for this contribution vector, and propose an algorithm for its computation. Suppose a set of neurons in the network is lesioned and the network then performs the specified task. The result of this experiment is described by the pair <m, p_m>, where m is an incidence vector of length N, such that m(i) = 0 if neuron i was lesioned and 1 if it was intact. p_m is the performance of the network divided by the baseline case of a fully intact network. Let the pair <f, c>, where f is a smooth monotone non-decreasing function¹ and c a normalized column vector such that Σ_{i=1}^{N} |c_i| = 1, be the pair which minimizes the following error function:

E = (1/2^N) Σ_{m} [f(m · c) − p_m]² .    (1)

¹It is assumed that as more important elements are lesioned (m · c decreases), the performance p_m decreases; hence the postulated monotonicity of f.

This c will be taken as the contribution vector for the task tested, and the corresponding f will be called its adjoint performance prediction function. Given a configuration m of lesioned and intact neurons, the predicted performance of the network is the sum of the contribution values of the intact neurons (m · c), passed through the performance prediction function f. The contribution vector c accompanied by f is optimal in the sense that this predicted value minimizes the mean square error relative to the real performance, over all possible lesion configurations. The computation of the contribution vectors is done separately for each task, using some small subset of all the 2^N possible lesioning configurations. The training error E_t is defined as in equation 1, but averaging only over the configurations present in the training set.

The Performance Prediction Algorithm (PPA):

• Step 1: Choose an initial normalized contribution vector c for the task. If there is no bias for a special initial choice, set all entries to random values. Repeat steps 2 and 3 until the error E_t converges or a maximal number of steps has been reached:

• Step 2: Compute f. Given the current c, perform isotonic regression [7] on the pairs <m · c, p_m> in the training set. Use a smoothing spline [8] on the result of the regression to obtain the new f.

• Step 3: Compute c. Using the current f, compute new c values by training a perceptron with input m, weights c and transfer function f. The output of the perceptron is exactly f(m · c), and the target output is p_m. Hence training the perceptron results in finding a new vector c that, given the current function f, minimizes the error E_t on the training set. Finally, re-normalize c.
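The core idea, predicting multi-lesion performance from the summed contributions of intact neurons, can be illustrated in a stripped-down form. This is not the authors' implementation: here f is taken to be the identity (so the isotonic-regression and spline steps drop out and the fit reduces to least squares), and the contributions, masks and performances are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 8                                  # neurons
c_true = rng.random(N)
c_true /= c_true.sum()                 # normalized "true" contributions

# Training set: random lesion masks m (1 = intact, 0 = lesioned) and the
# resulting performances. With f = identity, p_m = m . c exactly (a
# deliberate simplification of the smooth monotone f in the paper).
M = (rng.random((200, N)) < 0.7).astype(float)
p = M @ c_true

# Recover c by minimizing sum_m [m . c - p_m]^2 (equation 1, linear case).
c_hat, *_ = np.linalg.lstsq(M, p, rcond=None)
c_hat /= np.abs(c_hat).sum()           # re-normalize as in step 3
```

In the full algorithm f is re-estimated by isotonic regression and smoothed at every iteration, which is what lets the method capture non-linear combinations of lesion effects such as redundancy.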
The output of the algorithm is thus a contribution value for every neuron, accompanied by a function, such that given any configuration of lesioned neurons, one can predict with high confidence the performance of the damaged network. Thus, the algorithm achieves two important goals: a) it automatically identifies the neurons or areas which participate in a cognitive or behavioral task; b) the function f predicts the result of multiple lesions, allowing for non-linear combinations of the effects of single lesions². The application of the PPA to all tasks defines a contribution matrix C, whose kth column (k = 1..P) is the contribution vector computed using the above algorithm for task k, i.e. C_ik is the contribution of neuron i to task k.

2.2 Localization and Specialization

Introducing the contribution matrix allows us to approach issues relating to the distribution of computation in a network in a quantitative manner. Here we suggest quantitative measures for localization of function and specialization of neurons. If a task is completely distributed in the network, the contributions of all neurons to that task should be identical (full equipotentiality [2]). Thus, we define the localization L_k of task k as a deviation from equipotentiality. Formally, L_k is the standard deviation of column k of the contribution matrix divided by the maximal possible standard deviation:

L_k = std(C_{*k}) / √((N − 1)/N²) .    (2)

²The computation of f, involving a simple perceptron-based function approximation, implies the immediate applicability of the PPA for large networks, given well-behaved performance prediction functions.

Figure 1: Hand-crafted neural network. a) Architecture of the network. Solid lines are weights, all of strength 1. Dashed lines indicate modulatory effects. Neurons 1 through 6 are spontaneously active (activity equals 1) under normal conditions. The performance of the network is taken to be the activity of neuron 10.
b) The activation functions of the non-spontaneous neurons. The x axis is the input field and the y axis is the resulting activity of the neuron. Neuron 8 has two activation functions: if neurons 2 and 3 are both switched on, they activate a modulating effect on neuron 8 which switches its activation function from the inactive case to the active case.

Note that L_k is in the range [0, 1], where L_k = 0 indicates full distribution and L_k = 1 indicates localization of the task to one neuron alone. The degree of localization of function in the whole network, L, is the simple average of L_k over all tasks. Similarly, if neuron i is highly specialized for a certain task, C_{i*} will deviate strongly from a uniform distribution, and thus we define S_i, the specialization of neuron i, as (3)

3 Results

We tested the proposed index on two types of recurrent networks. We chose to study recurrent networks because they pose an especially difficult challenge, as the output units also participate in the computation, and in general complex interactions among elements may arise³. We begin with a hand-crafted example containing redundancy, feedback and modulation, and continue with networks that emerge from an evolutionary process. The evolved networks are not hand-crafted; rather, their structure emerges as an outcome of the selection pressure to successfully perform the defined tasks. Thus, we have no prior knowledge about their structure, yet they are tractable models to investigate.

3.1 Hand-Crafted Example

Figure 1 depicts a neural network we designed to include potential pitfalls for analysis procedures aimed at identifying important neurons of the system (see details in the caption). Figure 2(a) shows the contribution values computed by three methods applied to this network.
The first estimation was computed as the correlation between the activity of the neuron and the performance of the network⁴. To allow for comparison between methods these values were normalized to sum to 1. The second estimation was computed as the decrease in performance due to lesioning of single neurons. Finally, we used the PPA, training on a set of 64 examples. Note that, as expected, the activity correlation method assigns a high contribution value to neuron 9, even though it actually has no significance in determining the performance. Single lesions fail to detect the significance of neurons involved in redundant interactions (neurons 4-6). The PPA successfully identifies the underlying importance of all neurons in the network, even the subtle significance of the feedback from neuron 10. We used a small training set (64 out of 2^10 configurations) containing lesions of either small (up to 20% chance for each neuron to be lesioned) or large (more than 90% chance of lesioning) degree. Convergence was achieved after 10 iterations.

³In order to single out the role of output units in the computation, lesioning was performed by decoupling their activity from the rest of the network and not by knocking them out completely.

Figure 2: Results of the PPA. a) Contribution values obtained using three methods: the correlation of activity to performance, single-neuron lesions, and the PPA. b) Predicted versus actual performance using c and its adjoint performance prediction function f obtained by the PPA (correlation 0.9978). Insert: the shape of f.
As opposed to the two other methods, the PPA not only identifies and quantifies the significance of elements in the network, but also allows for the prediction of performance after multi-element lesions, even if they were absent from the training set. The predicted performance following a given configuration of lesioned neurons is given by f(m · c), as explained in section 2.1. Figure 2(b) depicts the predicted versus actual performances on a test set containing 230 configurations of varying degrees (0-100% chance of lesioning). The correlation between the predicted value and the actual one is 0.9978, corresponding to a mean prediction error of only 0.0007. In principle, the other methods offer no straightforward way to predict the performance, as is evident from the non-linear form of the performance prediction function (see insert of figure 2(b)). The shape of the performance prediction function depends on the organization of the network, and can vary widely between different models (results not shown here).

⁴Neuron 10 was omitted in this method of analysis since it is by definition in full correlation with the performance.

3.2 Evolved Neurocontrollers

Using evolutionary simulations we developed autonomous agents controlled by fully recurrent artificial neural networks. High performance levels were attained by agents performing simple life-like tasks of foraging and navigation. Using various analysis tools we found a common structure of a command neuron switching the dynamics of the network between radically different behavioral modes [6]. Although the command neuron mechanism was a robust phenomenon, the evolved networks did differ in the roles other neurons performed. When only limited sensory information was available, the command neuron relied on feedback from the motor units. In other cases no such feedback was needed, but other neurons performed some auxiliary computation on the sensory input.
We applied the PPA to the evolved neurocontrollers in order to test its capabilities in a system on which we had previously obtained qualitative understanding, yet which is still relatively complex. Figure 3 depicts the contribution values of the neurons of three successful evolved neurocontrollers obtained using the PPA. Figure 3(a) corresponds to a neurocontroller of an agent equipped with a position sensor (see [6] for details), which does not require any feedback from the motor units. As can be seen, these motor units indeed receive contribution values of near zero. Figures 3(b) and 3(c) correspond to neurocontrollers that strongly relied on motor feedback for their memory mechanism to function properly. The algorithm easily identifies their significance. In all three cases the command neuron receives high values, as expected. The performance prediction capabilities are extremely high, giving correlations of 0.9999, 0.9922 and 0.9967 for the three neurocontrollers, on a test set containing 100 lesion configurations of mixed degrees (0-100% chance of lesioning). We also obtained the degree of localization of each network, as explained in section 2.2. The values are 0.56, 0.35 and 0.47 for the networks depicted in figures 3(a), 3(b) and 3(c) respectively. These values are in good agreement with the qualitative descriptions of the networks we have obtained using classical neuroscience tools [6].

Figure 3: Contribution values of neurons in three evolved neurocontrollers. Neurons 1-4 are motor neurons. CN is the command neuron that emerged spontaneously in all evolutionary runs.
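The localization index of equation 2 (and its network-level average) can be computed directly from a contribution matrix. The matrix below is made up purely for illustration; it is not data from the paper:

```python
import numpy as np

def localization(C):
    """L_k = std(C[:, k]) / sqrt((N - 1) / N**2), one value per task (eq. 2)."""
    N = C.shape[0]
    max_std = np.sqrt((N - 1) / N**2)
    return C.std(axis=0) / max_std

# Toy contribution matrix: 4 neurons (rows) x 2 tasks (columns).
# Task 0 is fully distributed; task 1 is fully localized to neuron 0.
C = np.array([[0.25, 1.0],
              [0.25, 0.0],
              [0.25, 0.0],
              [0.25, 0.0]])
L = localization(C)      # per-task localization: 0 for task 0, 1 for task 1
L_net = L.mean()         # network-level degree of localization
```

The denominator √((N − 1)/N²) is the standard deviation of a unit-sum column concentrated entirely on one neuron, which is why the index reaches exactly 1 in the fully localized case.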
4 Discussion

We have introduced a novel algorithm, termed PPA (Performance Prediction Algorithm), to measure the contribution of neurons to the tasks that a neural network performs. These contributions allowed us to quantitatively define an index of the degree of localization of function in the network, as well as of task-specialization of the neurons. The algorithm uses data from performance measures of the network when different sets of neurons are lesioned. Theoretically, pathological cases can be devised where very large training sets are needed for correct estimation. However, it is expected that many cases are well-behaved and will demonstrate behaviors similar to the models we have used as test beds, i.e. that a relatively small subset suffices as a training set. It is predicted that larger training sets containing different degrees of damage will be needed to achieve good results for systems with higher redundancy and complex interactions. We are currently studying the nature of the training set needed to achieve satisfying results, as this in itself may reveal information about the types of interactions between elements in the system. We have applied the algorithm to two types of artificial recurrent neural networks, and demonstrated that its results agree with our qualitative a priori notions and with classical qualitative analysis methods. We have shown that estimation of the importance of system elements using simple activity measures and single lesions may be misleading. The new PPA is more robust, as it takes into account interactions of higher degrees. Moreover, it serves as a powerful tool for predicting the damage caused by multiple lesions, a feat that is difficult even when one can accurately estimate the contributions of single elements. The shape of the performance prediction function itself may also reveal important features of the organization of the network, e.g. its robustness to neuronal death.
The prediction capabilities of the algorithm can be used for regularization of recurrent networks. Regularization in feed-forward networks has been shown to improve performance significantly, and algorithms have been suggested for effective pruning [9]. However, networks with feedback (e.g. Elman-like networks) pose a difficult problem, as it is hard to determine which elements should be pruned. As the PPA can be applied at the level of single synapses as well as single neurons, it suggests a natural algorithm for effective regularization: pruning the elements in order of their contribution values. Recently a large variety of reversible inactivation techniques (e.g. cooling) have emerged in neuroscience. These methods alleviate many of the problematic aspects of the classical lesion technique (ablation), enabling the acquisition of reliable data from multiple lesions of different configurations (for a review see [10]). It is most likely that a plethora of data will accumulate in the near future. The sensible integration of such data will require quantitative methods, to complement the available qualitative ones. The promising results achieved with artificial networks and the potential scalability of the PPA lead us to believe that it will prove extremely useful in obtaining insights into the organization of natural nervous systems.

Acknowledgments

We greatly acknowledge the valuable contributions made by Ehud Lehrer, Hanoch Gutfreund and Tuvik Beker.

References

[1] J. Wu, L. B. Cohen, and C. X. Falk. Neuronal activity during different behaviors in aplysia: A distributed organization? Science, 263:820-822, 1994.
[2] K. S. Lashley. Brain Mechanisms in Intelligence. University of Chicago Press, Chicago, 1929.
[3] J. L. McClelland, D. E. Rumelhart, and the PDP Research Group. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 2: Psychological and Biological Models. MIT Press, Massachusetts, 1986.
[4] J. J. Hopfield.
Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, USA, 79:2554-2558, 1982.
[5] S. Thorpe. Localized versus distributed representations. In M. A. Arbib, editor, Handbook of Brain Theory and Neural Networks. MIT Press, Massachusetts, 1995.
[6] R. Aharonov-Barki, T. Beker, and E. Ruppin. Emergence of memory-driven command neurons in evolved artificial agents. Neural Computation, in press.
[7] R. Barlow, D. Bartholomew, J. Bremner, and H. Brunk. Statistical Inference Under Order Restrictions. John Wiley, New York, 1972.
[8] G. D. Knott. Interpolating cubic splines. In J. C. Cherniavsky, editor, Progress in Computer Science and Applied Logic. Birkhauser, 2000.
[9] R. Reed. Pruning algorithms - a survey. IEEE Trans. on Neural Networks, 4(5):740-747, 1993.
[10] S. G. Lomber. The advantages and limitations of permanent or reversible deactivation techniques in the assessment of neural function. J. of Neuroscience Methods, 86:109-117, 1999.
Dopamine Bonuses

Sham Kakade and Peter Dayan
Gatsby Computational Neuroscience Unit
17 Queen Square, London, England, WC1N 3AR.
sham@gatsby.ucl.ac.uk, dayan@gatsby.ucl.ac.uk

Abstract

Substantial data support a temporal difference (TD) model of dopamine (DA) neuron activity in which the cells provide a global error signal for reinforcement learning. However, in certain circumstances, DA activity seems anomalous under the TD model, responding to non-rewarding stimuli. We address these anomalies by suggesting that DA cells multiplex information about reward bonuses, including Sutton's exploration bonuses and Ng et al's non-distorting shaping bonuses. We interpret this additional role for DA in terms of the unconditional attentional and psychomotor effects of dopamine, having the computational role of guiding exploration.

1 Introduction

Much evidence suggests that dopamine cells in the primate midbrain play an important role in reward and action learning. Electrophysiological studies support a theory that DA cells signal a global prediction error for summed future reward in appetitive conditioning tasks (Montague et al, 1996; Schultz et al, 1997), in the form of a temporal difference prediction error term. This term can simultaneously be used to train predictions (in the model, the projections of the DA cells in the ventral tegmental area to the limbic system and the ventral striatum) and to train actions (the projections of DA cells in the substantia nigra to the dorsal striatum and motor and premotor cortex). Appetitive prediction learning is associated with classical conditioning, the task of learning which stimuli are associated with reward; appetitive action learning is associated with instrumental conditioning, the task of learning actions that result in reward delivery. The computational role of dopamine in reward learning is controversial for two main reasons (Ikemoto & Panksepp, 1999; Redgrave et al, 1999).
First, stimuli that are not associated with reward prediction are known to activate the dopamine system persistently, including in particular stimuli that are novel and salient, or that physically resemble other stimuli that do predict reward (Schultz, 1998). Second, dopamine release is associated with a set of motor effects, such as species- and stimulus-specific approach behaviors, that seem either irrelevant or detrimental to the delivery of reward. We call these unconditional effects. In this paper, we study this apparently anomalous activation of the DA system, suggesting that it multiplexes information about bonuses, potentially including exploration bonuses (Sutton, 1990; Dayan & Sejnowski, 1996) and shaping bonuses (Ng et al, 1999), on top of reward prediction errors. These responses are associated with the unconditional effects of DA, and are part of an attentional system.

Figure 1: Activity of individual DA neurons, though substantial data suggest the homogeneous character of these responses (Schultz, 1998). See text for description. The latency and duration of the DA activation is about 100 ms. The depression has a duration of about 200 ms. The baseline spike rate is about 2-4 Hz. Adapted from Schultz et al (1990, 1992, & 1993) and Jacobs et al (1997).

2 DA Activity

Figure 1 shows three different types of dopamine responses that have been observed by Schultz et al and Jacobs et al. Figures 1A,B show the response to a conditioned stimulus that becomes predictive of reward (CS+).
For this, in early trials (figure 1A), there is no, or only a weak, response to the CS+, but a strong response just after the time of delivery of the reward. In later trials (figure 1B), after learning is complete (but before overtraining), the DA cells are activated in response to the stimulus, and fire at background rates to the reward. Indeed, if the reward is omitted, there is depression of DA activity at just the time at which, during early trials, it used to excite the cells. These are the key data for which the temporal difference model accounts. Under the model, the cells report the temporal difference (TD) error for reward, ie the difference between the amount of reward that is delivered and the amount that is expected. Let r(t) be the amount of reward received at time t and v(t) be the prediction of the sum total (undiscounted) reward to be delivered in a trial after time t, or:

v(t) = Σ_{τ≥0} r(τ + t) .    (1)

The TD component of the dopamine activity is the prediction error:

δ(t) = r(t) + v(t + 1) − v(t) ,    (2)

which uses r(t) + v(t + 1) as an estimate of Σ_{τ≥0} r(τ + t), so that the TD error is an estimate of Σ_{τ≥0} r(τ + t) − v(t). Provided that the information about state includes information about how much time has elapsed since the CS+ was presented (which must be available because of the precisely timed nature of the inhibition at the time of reward, if the expected reward is not presented), this model accounts well for the results in figure 1A. The general framework of reinforcement learning methods for Markov decision problems (MDPs) extends these results to the case of control. An MDP consists of states, actions, transition probabilities between states under the chosen action, and the rewards associated with these transitions. The goal of the subject solving an MDP is to find a policy (a choice of actions in each state) so as to optimize the sum total reward it receives.
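A minimal tabular simulation of equations 1 and 2 reproduces the prediction-error account: after learning, δ vanishes at the (now fully predicted) reward, while the total predicted reward shows up as a response at stimulus onset. The trial length, reward timing and learning rate below are made up for illustration:

```python
import numpy as np

T = 10                      # timesteps in a trial; CS at t = 0
reward_time = 5
r = np.zeros(T)
r[reward_time] = 1.0

v = np.zeros(T + 1)         # v[T] = 0: predictions confined to the trial
alpha = 0.1
for trial in range(2000):
    for t in range(T):
        delta = r[t] + v[t + 1] - v[t]   # eq. (2)
        v[t] += alpha * delta

# After learning, v(t) approximates the summed future reward (eq. 1), so
# within-trial prediction errors vanish: the reward is fully predicted.
deltas = r[:T] + v[1:] - v[:T]
# The response transfers to the CS: against the zero pre-trial baseline,
# the error at stimulus onset equals v(0), the total predicted reward.
cs_response = v[0] - 0.0
```

This matches figures 1A and 1B: early in training δ is large at the reward; once v has converged it is large only at the stimulus, and omitting the reward would make δ(reward_time) negative, the observed depression.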
The TD error δ(t) can be used to learn optimal policies by implementing a form of policy iteration, an optimal control technique that is standard in engineering (Sutton & Barto, 1998; Bertsekas & Tsitsiklis, 1996). Figures 1C;D show that reporting a prediction error for reward does not exhaust the behavioral repertoire of the DA cells. Figure 1C shows responses to salient, novel stimuli. The dominant effect is a phasic activation of dopamine cells followed by a phasic inhibition, both locked to the stimulus. These novelty responses decrease over trials, but quite slowly for very salient stimuli (Schultz, 1998). In some cases, particularly in early trials of appetitive learning (figure 1A, top), there seems to be little or no phasic inhibition of the cells following the activation. Figure 1D shows what happens when a stimulus (door−) that resembles a reward-predicting stimulus (door+) is presented without reinforcement. Again a phasic increase over baseline followed by a depression is seen (lower 1D). However, unlike the case in figure 1B, there is no persistent reward prediction, since if a reward is subsequently delivered (unexpectedly), the cells become active (not shown) (Schultz, 1998).

3 Multiplexing and reward distortion

The most critical issue is whether it is possible to reconcile the behavior of the DA cells seen in figures 1C;D with the putative computational role of DA in terms of reporting prediction error for reward. Intuitively, these apparently anomalous responses are benign, that is, they do not interfere with the end point of normal reward learning, provided that they sum to zero over a trial. To see this, consider what happens once learning is complete. If we sum the prediction error terms from equation 2, starting from the time of the stimulus onset at t = 1, we get Σ_{t≥1} δ(t) = v(t_end) − v(1) + Σ_{t≥1} r(t), where t_end is the time at the end of the trial.
Assuming that v(t_end) = 0 and v(1) = 0, ie that the monkey confines its reward predictions to within a trial, we can see that any additional influences on δ(t) that sum to 0 preserve predicted sum future rewards. From figure 1, this seems true of the majority of the extra responses, ie anomalous activation is canceled by anomalous inhibition, though it is not true of the uncancelled DA responses shown in figure 1A (upper). Altogether, DA activity can still be used to learn predictions and choose actions, although it should not strictly be referred to solely in terms of prediction error for reward. Apart from the issue of anomalous activation that is not canceled (upper figure 1A), this leaves open two key questions: what drives the extra DA responses, and what effects do they have. We offer a set of possible interpretations (mostly associated with bonuses) that it is hard to decide between on the basis of current data.

4 Novelty and Bonuses

Three very different sorts of bonuses have been considered in reinforcement learning: novelty, shaping, and exploration bonuses. The presence of the first two of these is suggested by the responses in figure 1. Bonuses modify the reward signals and so change the course of learning. They are mostly used to guide exploration of the world, and are typically heuristic ways of addressing the computationally intractable exploration-exploitation dilemma.

[Figure 2: plot panels omitted.] Figure 2: Activity of the DA system given novelty bonuses. The plots show different aspects of the TD error δ as a function of time t within a trial (first three plots in each row) or as a function of the number T of trials (last two). Upper) A novelty signal was applied for just the first timestep of the stimulus and decayed hyperbolically with trial number as 1/T.
Lower) A novelty signal was applied for the first two timesteps of the stimulus and now decayed exponentially as e^{−0.3T}, to demonstrate that the precise form of decay is irrelevant. Trial numbers and times are shown in the plots. The learning rate was ε = 0.3.

We first consider a novelty bonus, which we take as a model for uncancelled anomalous activity. A novelty bonus is a value that is added to states or state-action pairs associated with their unfamiliarity: novelty is made intrinsically rewarding. This is computationally reasonable, at least in moderation, and indeed it has become standard practice in reinforcement learning to use optimistic initial values for states to encourage systems to plan to get to novel or unfamiliar states. In TD terms, this is like replacing the true environmental reward r(t) at time t with r(t) → r(t) + n(x(t), T), where x(t) is the state at time t and n(x(t), T) is the novelty of this state in trial T (an index we generally suppress). The effect on the TD error is then

δ(t) = r(t) + n(x(t), T) + v(t + 1) − v(t)    (3)

The upper plots in figure 2 show the effect of including such a bonus, in a case in which just the first timestep of a new stimulus in any given trial is awarded a novelty signal which decays hyperbolically to 0 as the stimulus becomes more familiar. Here, a novel stimulus is presented for 25 trials without there being any reward consequences. The effect is just a positive signal which decreases over time. Learning has no effect on this, since the stimulus cannot predict away a novelty signal that lasts only a single timestep. The lower plots in figure 2 show that it is possible to get partial apparent cancellation through learning, if the novelty signal is applied for the first two timesteps of a stimulus (for instance if the novelty signal is calculated relatively slowly).
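The two-timestep case can be simulated in a few lines. This is our sketch, not the paper's code; the trial structure and constants are arbitrary, with the bonus paid on the first two stimulus timesteps and decaying as e^{−0.3T}, as in the lower panels of figure 2, and no true reward at all.

```python
import math

def novelty_trials(n_trials, t_stim=2, T=8, lr=0.3):
    """TD errors under eq. (3) with a two-timestep novelty bonus, no reward."""
    v = [0.0] * (T + 1)
    traces = []
    for trial in range(n_trials):
        n = math.exp(-0.3 * trial)        # novelty decays across trials
        delta = [0.0] * T
        for t in range(t_stim, T):        # stimulus onset itself is unpredictable
            bonus = n if t in (t_stim, t_stim + 1) else 0.0
            delta[t] = bonus + v[t + 1] - v[t]
            v[t] += lr * delta[t]
        traces.append(delta)
    return traces

tr = novelty_trials(25)
```

This reproduces the qualitative pattern of the lower plots of figure 2: a purely positive signal on the first trial, a learned negative transient a few trials later, and eventual decay of everything as the novelty signal vanishes.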
In this case, the initial effect is just a positive signal (leftmost graph), the effect of TD learning gives it a negative transient after a few trials (second plot), and then, as the novelty signal decays to 0, the effect goes away (third plot). The righthand plots show how δ(t) behaves across trials. If there were no learning, then there would be no negative transient. The depression of the DA signal comes from the decay of the novelty bonuses. Novelty bonuses are true bonuses in the sense that they actually distort the reward function. In particular, this means that we would not expect the sum of the extra TD error terms to be 0 across a trial. This property makes them useful, for instance, in actually distorting the optimal policy in Markov decision problems to ensure that exploration is planned and executed in favor of exploitation. However, they can be dangerous for exactly the same reason, and there are reports of them leading to incorrect behavior, making agents search too much.

[Figure 3: plot panels omitted.] Figure 3: Activity of the DA system given shaping bonuses (in the same format as figure 2). Upper) The plots show different aspects of the TD error δ as a function of time t within a trial (first three plots) or as a function of the number T of trials (last two). Here, the shaping bonus comes from a potential φ(x(t)) = 1 for the first two timesteps a stimulus is presented within a trial (t = 1; 2), and 0 thereafter, irrespective of trial number. The learning rate was ε = 0.3. Lower) The same plots for ε = 0.

In answer to this concern, Ng et al (1999) invented the idea of non-distorting shaping bonuses. Ng et al's shaping bonuses are guaranteed not to distort optimal policies, although they can still change the exploratory behavior of agents.
This guarantee comes because a shaping bonus is derived from a potential function φ(x) of a state, distorting the TD error to

δ(t) = r(t) + φ(x(t + 1)) − φ(x(t)) + v(t + 1) − v(t)    (4)

The difference from the novelty bonus of equation 3 is that the bonus comes from the difference between the potential functions of one state and the previous state, and the bonuses thus cancel themselves out when summed over a trial. Shaping bonuses must remain constant for the guarantee about the policies to hold. The upper plots in figure 3 show the effect of shaping bonuses on the TD error. Here, the potential function is set to the value 1 for the first two time steps of a stimulus in a trial, and 0 otherwise. The most significant difference between shaping and novelty bonuses is that the former exhibit a negative transient even in the very first trial, whereas, for the latter, it is a learned effect. If the learning rate is non-zero, then shaping bonuses are exactly predicted away over the course of normal learning. Thus, even though the same bonus is provided on trial 25 as on trial 1, the TD error becomes 0, since the shaping bonus is predicted away. The dynamics of the decay shown in the last two plots are controlled by the learning rate for TD. The lower plots show what happens if learning is switched off at the time the shaping bonus is provided: this would be the case if the system responsible for computing the bonus takes its effect before the inputs associated with the stimulus are plastic. In this case, the shaping bonus is preserved. The final category of bonus is an ongoing exploration bonus (Sutton, 1990; Dayan & Sejnowski, 1996), which is used to ensure continued exploration. Sutton (1990) suggested adding to the estimated value of each state (or each state-action pair) a number proportional to the length of time since it was last visited. This ultimately makes it irresistible to go and visit states that have not been visited for a long time.
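The shaping-bonus dynamics of figure 3 can be sketched the same way (again our own illustration with an arbitrary trial structure; φ = 1 on the first two stimulus timesteps and 0 elsewhere, constant across trials):

```python
def shaping_trials(n_trials, t_stim=2, T=8, lr=0.3):
    """TD errors under eq. (4); the potential phi is constant across trials."""
    phi = [1.0 if t in (t_stim, t_stim + 1) else 0.0 for t in range(T + 1)]
    v = [0.0] * (T + 1)
    traces = []
    for _ in range(n_trials):
        delta = [0.0] * T
        for t in range(t_stim - 1, T):        # include the onset transition
            delta[t] = phi[t + 1] - phi[t] + v[t + 1] - v[t]
            if t >= t_stim:
                v[t] += lr * delta[t]
        traces.append(delta)
    return traces
```

With learning on, the bonus is predicted away exactly; with lr = 0 it is preserved, as in the lower plots of figure 3. (The ongoing exploration bonus just mentioned, a quantity growing with the time since a state was last visited, is a third variant; we do not simulate it here.)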
Dayan & Sejnowski (1996) derived a bonus of this form from a model of environmental change that justifies the bonus. There is no evidence for this sort of continuing exploration bonus in the dopamine data, perhaps not surprisingly, since the tasks undertaken by the monkey offer little possibility for any trade-off between exploration and exploitation.

[Figure 4: plot panels omitted.] Figure 4: Activity δ(t) of the dopamine system for partial predictability. del = delivered, pred = predicted. A;B) CS+ is presented with (A) or, surprisingly, without (B) reward. C;D) CS− is presented without (C) or, surprisingly, with (D) reward. On each trial, an initial stimulus (presented at t = 3) is ambiguous as to whether CS+ or CS− is presented (each occurs equally often), and the ambiguity is perfectly resolved at t = 4. E;F) The model shows the same behavior. Since the CS± comes at a random interval after the cue, the traces are stimulus locked to the relevant events.

5 Generalization Responses and Partial Observability

Generalization responses (figure 1D) show a persistent effect of stimuli that merely resemble a rewarded stimulus. However, animals do not terminally confuse normally rewarded and normally non-rewarded stimuli, since if a reward is provided in the latter case, then it engenders DA activity (as an unexpected reward should), and if it is not provided, then there is no depression (as would be the case if an expected reward was not delivered) (Schultz, 1998). One possibility is that this activity comes from a shaping bonus that is not learned away, as in the lower plots of figure 3. An alternative interpretation comes from partial observability.
If the initial information from the world is ambiguous as to whether the stimulus is actually rewarding (door+, called CS+ trials) or nonrewarding (door−, called CS− trials), because of the similarity, then the animal should develop an initial expectation that there could be a reward (whose mean value is related to the degree of confusion). This should lead to a partial activation of the DA system. If the expectation is canceled by subsequent information about the stimulus (available, for instance, following a saccade), then the DA system will be inhibited below baseline exactly to nullify the earlier positive prediction. If the expectation is confirmed, then there will be continued activity representing the difference between the value of the reward and the expected value given the ambiguous stimulus. Figure 4 shows an example of this in a simplified case in which the animal receives information about the true stimulus over two timesteps; the first timestep is ambiguous to the tune of 50%, and the second perfectly resolves the ambiguity. Figures 4A;B show CS+ trials, with and without the delivery of reward; figures 4C;D show CS− trials, without and with the delivery of reward. The similarity of 4A;C to figure 1D is clear. Another instance of this generalization response is shown in figure 1E. Here, a cue light (c±) is provided indicating whether a CS+ or a CS− (d±) is to appear at a random later time, which in turn is followed (or not) after a fixed interval by a reward (r±). DA cells show a generalization response to the cue light; then fire to the CS+ or are unaffected by the CS−; and finally do not respond to the appropriate presence or absence of the reward. Figures 4E;F show that this is exactly the behavior of the model. The DA response stimulus locked to the CS+ arises because of the variability in the interval between the cue light and the CS+; if this interval were fixed, then the cells would only respond to the cue (c+), as in Schultz (1993).
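The belief-state account can be made concrete with a small worked example (our code, with reward magnitude 1 and 50% initial ambiguity as in figure 4). The values are the ideal predictions of summed future reward given the current belief, and δ follows from equation (2):

```python
def trial_deltas(cs_plus, rewarded, p_plus=0.5, r=1.0):
    """TD errors at [ambiguous cue, resolution, reward time] for one trial."""
    v_cue = p_plus * r                  # expected reward under the ambiguous cue
    v_res = r if cs_plus else 0.0       # ambiguity resolved one step later
    d_cue = v_cue - 0.0                 # partial activation to the cue
    d_res = v_res - v_cue               # confirmation (+) or cancellation (-)
    d_rew = (r if rewarded else 0.0) - v_res
    return [d_cue, d_res, d_rew]
```

The four trial types reproduce the four panels 4A-D: partial activation to the ambiguous stimulus, a half-sized burst or dip at resolution, and full excitation only for a surprising reward on a CS− trial (with depression only for an omitted reward on a CS+ trial).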
6 Discussion

We have suggested a set of interpretations for the activity of the DA system to add to that of reporting prediction error for reward. The two theoretically most interesting features are novelty and shaping bonuses. The former distort the reward function in such a way as to encourage exploration of new stimuli and new places. The latter are non-distorting, and can be seen as being multiplexed by the DA system together with the prediction error signal. Since shaping bonuses are not distorting, they have no ultimate effect on action choice. However, the signal provided by the activation (and then cancellation) of DA can nevertheless have a significant neural effect. We suggest that DA release has unconditional effects in the ventral striatum (perhaps allowing stimuli to be read into pre-frontal working memory, Cohen et al, 1998) and the dorsal striatum (perhaps engaging stimulus-directed approach and exploratory orienting behaviors; see Ikemoto & Panksepp (1999) for review). For stimuli that actually predict rewards (and so cause an initial activation of the DA system), these behaviors are often called appetitive; for novel, salient, and potentially important stimuli that are not known to predict rewards, they allow the system to pay appropriate attention. These effects of DA are unconditional, since they are hard-wired and not learned. In the case of partial observability, DA release due to the uncertain prediction of reward directly causes further investigation, and therefore resolution of the uncertainty. When unconditional and conditioned behaviors conflict, the former seem to dominate, as in the inability of animals to learn to run away from a stimulus in order to get food from it. The major lacuna in the model is its lack of one or more opponent processes to DA that might report on punishments and the absence of predicted rewards.
There is substantial circumstantial evidence that this might be one role for serotonin (which itself has unconditional effects associated with fear, fight, and flight responses that are opposite to those of DA), but there is not the physiological evidence to support or refute this possibility. Understanding the interaction of dopamine and serotonin in terms of their conditioned and unconditioned effects is a major task for future work.

Acknowledgements

Funding is from the NSF and the Gatsby Charitable Foundation.

References

[1] Bertsekas, DP & Tsitsiklis, JN (1996). Neuro-dynamic Programming. Cambridge, MA: Athena Scientific.
[2] Cohen, JD, Braver, TS & O'Reilly, RC (1998). In AC Roberts, TW Robbins, editors, The Prefrontal Cortex: Executive and Cognitive Functions. Oxford: OUP.
[3] Dayan, P & Sejnowski, TJ (1996). Machine Learning, 25:5-22.
[4] Horvitz, JC, Stewart, T & Jacobs, B (1997). Brain Research, 759:251-258.
[5] Ikemoto, S & Panksepp, J (1999). Brain Research Reviews, 31:6-41.
[6] Montague, PR, Dayan, P & Sejnowski, TJ (1996). Journal of Neuroscience, 16:1936-1947.
[7] Ng, AY, Harada, D & Russell, S (1999). Proceedings of the Sixteenth International Conference on Machine Learning.
[8] Redgrave, P, Prescott, T & Gurney, K (1999). Trends in Neurosciences, 22:146-151.
[9] Schultz, W (1992). Seminars in the Neurosciences, 4:129-138.
[10] Schultz, W (1998). Journal of Neurophysiology, 80:1-27.
[11] Schultz, W, Apicella, P & Ljungberg, T (1993). Journal of Neuroscience, 13:900-913.
[12] Schultz, W, Dayan, P & Montague, PR (1997). Science, 275:1593-1599.
[13] Schultz, W & Romo, R (1990). Journal of Neuroscience, 63:607-624.
[14] Sutton, RS (1990). Machine Learning: Proceedings of the Seventh International Conference, 216-224.
[15] Sutton, RS & Barto, AG (1998). Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.
Sparse Greedy Gaussian Process Regression

Alex J. Smola, RSISE and Department of Engineering, Australian National University, Canberra, ACT, 0200. Alex.Smola@anu.edu.au
Peter Bartlett, RSISE, Australian National University, Canberra, ACT, 0200. Peter.Bartlett@anu.edu.au

Abstract

We present a simple sparse greedy technique to approximate the maximum a posteriori estimate of Gaussian Processes with much improved scaling behaviour in the sample size m. In particular, computational requirements are O(n²m), storage is O(nm), the cost for prediction is O(n), and the cost to compute confidence bounds is O(nm), where n ≪ m. We show how to compute a stopping criterion, give bounds on the approximation error, and show applications to large scale problems.

1 Introduction

Gaussian processes have become popular because they allow exact Bayesian analysis with simple matrix manipulations, yet provide good performance. They share with Support Vector machines and Regularization Networks the concept of regularization via Reproducing Kernel Hilbert spaces [3], that is, they allow the direct specification of the smoothness properties of the class of functions under consideration. However, Gaussian processes are not always the method of choice for large datasets, since they involve evaluations of the covariance function at m points (where m denotes the sample size) in order to carry out inference at a single additional point. This may be rather costly to implement, so practitioners prefer to use only a small number of basis functions (i.e. covariance function evaluations). Furthermore, the Maximum a Posteriori (MAP) estimate requires computation, storage, and inversion of the full m × m covariance matrix K_ij = k(x_i, x_j), where x_1, ..., x_m are training patterns.
While there exist techniques [2, 8] to reduce the computational cost of finding an estimate to O(km²) rather than O(m³) when the covariance matrix contains a significant number of small eigenvalues, all these methods still require computation and storage of the full covariance matrix. None of these methods addresses the problem of speeding up the prediction stage (except for the rare case when the integral operator corresponding to the kernel can be diagonalized analytically [8]). We devise a sparse greedy method, similar to those proposed in the context of wavelets [4], solutions of linear systems [5], or matrix approximation [7], that finds an approximation of the MAP estimate by expanding it in terms of a small subset of kernel functions k(x_i, ·). Briefly, the technique works as follows: given a set of (already chosen) kernel functions, we seek the additional function that increases the posterior probability most. We add it to the set of basis functions and repeat until the maximum is approximated sufficiently well. A similar approach for a tight upper bound on the posterior probability gives a stopping criterion. (Supported by the DFG (Sm 62-1) and the Australian Research Council.)

2 Gaussian Process Regression

Consider a finite set X = {x_1, ..., x_m} of inputs. In Gaussian Process Regression, we assume that for any such set there is a covariance matrix K with elements K_ij = k(x_i, x_j). We assume that for each input x there is a corresponding output y(x), and that these outputs are generated by

y(x) = t(x) + ξ    (1)

where t(x) and ξ are both normal random variables, with ξ ~ N(0, σ²) and t = (t(x_1), ..., t(x_m))ᵀ ~ N(0, K). We can use Bayes' theorem to determine the distribution of the output y(x) at a (new) input x. Conditioned on the data (X, y), the output y(x) is normally distributed. It follows that the mean of this distribution is the maximum a posteriori probability (MAP) estimate of y. We are interested in estimating this mean, and also the variance.
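The exact mean and variance have the closed forms kᵀ(K + σ²I)⁻¹y and k(x, x) + σ² − kᵀ(K + σ²I)⁻¹k derived in the next section. A small numpy sketch of this exact (dense, O(m³)) baseline, ours rather than the authors' code, with an arbitrary Gaussian kernel width and noise level:

```python
import numpy as np

def rbf(A, B, w2=1.0):
    """Gaussian covariance k(x, x') = exp(-||x - x'||^2 / (2 w2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * w2))

def gp_predict(X, y, Xstar, sigma2=0.1, w2=1.0):
    """Exact GP predictive mean and variance at the rows of Xstar."""
    K = rbf(X, X, w2)
    A = K + sigma2 * np.eye(len(X))
    alpha = np.linalg.solve(A, y)            # the O(m^3) step the paper avoids
    Ks = rbf(Xstar, X, w2)                   # rows are k^T for each test point
    mean = Ks @ alpha
    var = 1.0 + sigma2 - np.sum(Ks * np.linalg.solve(A, Ks.T).T, axis=1)
    return mean, var
```

Far from all training data the kernel vector vanishes, so the variance reverts to the prior value k(x, x) + σ² = 1 + σ²; near the data it is strictly smaller.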
It is possible to give an equivalent parametric representation of y that is more convenient for our purposes. We may assume that the vector y = (y(x_1), ..., y(x_m))ᵀ of outputs is generated by

y = Kα + ξ,    (2)

where α ~ N(0, K⁻¹) and ξ ~ N(0, σ²I). Consequently the posterior probability p(α|y, X) over the latent variables α is proportional to

exp(−(1/2σ²)‖y − Kα‖²) exp(−½ αᵀKα)    (3)

and the conditional expectation of y(x) for a (new) location x is E[y(x)|y, X] = kᵀα_opt, where kᵀ denotes the vector (k(x_1, x), ..., k(x_m, x)) and α_opt is the value of α that maximizes (3). Thus, it suffices to compute α_opt before any predictions are required. The problem of choosing the MAP estimate of α is equivalent to the problem of minimizing the negative log-posterior,

minimize_{α ∈ ℝᵐ} [−yᵀKα + ½ αᵀ(σ²K + KᵀK)α]    (4)

(ignoring constant terms and rescaling by σ²). It is easy to show that the mean of the conditional distribution of y(x) is kᵀ(K + σ²I)⁻¹y, and its variance is k(x, x) + σ² − kᵀ(K + σ²I)⁻¹k (see, for example, [2]).

3 Approximate Minimization of Quadratic Forms

For Gaussian process regression, searching for an approximate solution to (4) relies on the assumption that a set of variables whose posterior probability is close to that of the mode of the distribution will be a good approximation for the MAP estimate. The following theorem suggests a simple approach to estimating the accuracy of an approximate solution to (4). It uses an idea from [2] in a modified, slightly more general form.

Theorem 1 (Approximation Bounds for Quadratic Forms) Denote by K ∈ ℝ^{m×m} a positive semidefinite matrix, y, α ∈ ℝᵐ, and define the two quadratic forms

Q(α) := −yᵀKα + ½ αᵀ(σ²K + KᵀK)α,    (5)
Q*(α) := −yᵀα + ½ αᵀ(σ²I + K)α.    (6)

Suppose Q and Q* have minima Q_min and Q*_min. Then for all α, α* ∈ ℝᵐ we have

Q(α) ≥ Q_min ≥ −½‖y‖² − σ²Q*(α*),    (7)
Q*(α*) ≥ Q*_min ≥ σ⁻²(−½‖y‖² − Q(α)),    (8)

with equalities throughout when Q(α) = Q_min and Q*(α*) = Q*_min.
Hence, by minimizing Q* in addition to Q we can bound Q's closeness to the optimum and vice versa.

Proof  The minimum of Q(α) is obtained for α_opt = (K + σ²I)⁻¹y (which also minimizes Q*), hence

Q_min = −½ yᵀK(K + σ²I)⁻¹y  and  Q*_min = −½ yᵀ(K + σ²I)⁻¹y.    (9)

This allows us to combine Q_min and Q*_min to Q_min + σ²Q*_min = −½‖y‖². Since by definition Q(α) ≥ Q_min for all α (and likewise Q*(α*) ≥ Q*_min for all α*), we may solve Q_min + σ²Q*_min for either Q or Q* to obtain lower bounds for each of the two quantities. This proves (7) and (8).

Equation (7) is useful for computing an approximation to the MAP solution, whereas (8) can be used to obtain error bars on the estimate. To see this, note that in calculating the variance, the expensive quantity to compute is −kᵀ(K + σ²I)⁻¹k. However, this can be found by solving

minimize_{α ∈ ℝᵐ} [−kᵀα + ½ αᵀ(σ²I + K)α],    (10)

and the expression inside the brackets is Q*(α) with y = k (see (6)). Hence, an approximate minimizer of (10) gives an upper bound on the error bars, and lower bounds can be obtained from (8). In practice we will use the quantity

gap(α, α*) := 2 (Q(α) + σ²Q*(α*) + ½‖y‖²) / (|Q(α)| + |σ²Q*(α*) + ½‖y‖²|),

i.e. the relative size of the difference between upper and lower bound, as stopping criterion.

4 A Sparse Greedy Algorithm

The central idea is that in order to obtain a faster algorithm, one has to reduce the number of free variables. Denote by P ∈ ℝ^{m×n}, with m ≥ n and m, n ∈ ℕ, an extension matrix (i.e. Pᵀ is a projection) with PᵀP = 1. We will make the ansatz

α_P := Pβ  where  β ∈ ℝⁿ    (11)

and find solutions β such that Q(α_P) (or Q*(α_P)) is minimized. The solution is

β_opt = (Pᵀ(σ²K + KᵀK)P)⁻¹ PᵀKᵀy.    (12)

Clearly if P is of rank m, this will also be the solution of (4) (the minimum negative log posterior for all α ∈ ℝᵐ). In all other cases, however, it is an approximation.
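Theorem 1 and the restricted solution (12) are easy to check numerically. The sketch below (our code; the random test problem is an arbitrary choice) builds a small Gram matrix, computes the exact minimizer, and verifies the identity Q_min + σ²Q*_min = −½‖y‖² from the proof, the bound (7), and the fact that a solution restricted to a column subset can only do worse than the exact one:

```python
import numpy as np

def Q(a, K, y, s2):
    """Negative log-posterior, eq. (5)."""
    return -y @ (K @ a) + 0.5 * a @ ((s2 * K + K @ K) @ a)

def Qstar(a, K, y, s2):
    """Dual quadratic form, eq. (6)."""
    return -y @ a + 0.5 * a @ ((s2 * np.eye(len(y)) + K) @ a)

def beta_opt(K, y, s2, S):
    """Eq. (12) when P consists of the unit vectors {e_i : i in S}."""
    M = s2 * K + K @ K
    return np.linalg.solve(M[np.ix_(S, S)], (K @ y)[S])

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)
y = rng.normal(size=30)
s2 = 0.5
a_opt = np.linalg.solve(K + s2 * np.eye(30), y)   # minimizes both Q and Q*
Qmin, Qsmin = Q(a_opt, K, y, s2), Qstar(a_opt, K, y, s2)
```

The same functions also give the gap(α, α*) stopping criterion of section 3 directly.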
Computational Cost of Greedy Decompositions  For a given P ∈ ℝ^{m×n}, let us analyze the computational cost involved in the estimation procedures. To compute (12) we need to evaluate PᵀKy, which is O(nm), and (KP)ᵀ(KP), which is O(n²m), and invert an n × n matrix, which is O(n³). Hence the total cost is O(n²m). Predictions then cost only kᵀα, which is O(n). Using P also to minimize Q*(Pβ*) costs no more than O(n³), which is needed to upper-bound the log posterior. For error bars, we have to approximately minimize (10), which can be done for α = Pβ at O(n³) cost. If we compute (PᵀKP)⁻¹ beforehand, this can be done at O(n²), and likewise for upper bounds. We have to minimize −kᵀKPβ + ½ βᵀPᵀ(σ²K + KᵀK)Pβ, which costs O(n²m) (once the inverse matrices have been computed, one may, however, use them to compute error bars at different locations, too, thus costing only O(n²)). The lower bounds on the error bars may not be so crucial, since a bad estimate will only lead to overly conservative confidence intervals and not have any other negative effect. Finally, note that all we ever have to compute and store is KP, i.e. an m × n submatrix of K, rather than K itself. Table 1 summarizes the scaling behaviour of several optimization algorithms.

Table 1: Computational cost of optimization methods. Note that n ≪ m, and that the n used in the Conjugate Gradient, Sparse Decomposition, and Sparse Greedy Approximation methods will differ, with n_CG ≤ n_SD ≤ n_SGA, since the search spaces are more restricted. κ = 60 gives near-optimal results.

                 Exact      Conjugate     Optimal Sparse    Sparse Greedy
                 Solution   Gradient [2]  Decomposition     Approximation
Memory           O(m²)      O(m²)         O(nm)             O(nm)
Initialization   O(m³)      O(nm²)        O(n²m)            O(κn²m)
Pred. Mean       O(m)       O(m)          O(n)              O(n)
Error Bars       O(m²)      O(m²)         O(n²m) or O(n²)   O(κn²m) or O(n²)
Sparse Greedy Approximation  Several choices for P are possible, including choosing the principal components of K [8], using conjugate gradient descent to minimize Q [2], symmetric Cholesky factorization [1], or using a sparse greedy approximation of K [7]. Yet these methods have the disadvantage that they either do not take the specific form of y into account [8, 7], or lead to expansions that cost O(m) for prediction and require computation and storage of the full matrix [8, 2]. If we require a sparse expansion of y(x) in terms of k(x_i, x) (i.e. many α_i in y = kᵀα will be 0), we must consider matrices P that are a collection of unit vectors e_i (here (e_i)_j = δ_ij). We use a greedy approach to find a good approximation. First, for n = 1, we choose P = e_i such that Q(Pβ) is minimal. In this case we could permit ourselves to consider all possible indices i ∈ {1, ..., m} and find the best one by trying out all of them. Next, assume that we have found a good solution Pβ, where P contains n columns. In order to improve this solution, we may expand P into the matrix P_new := [P_old, e_i] ∈ ℝ^{m×(n+1)} and seek the best e_i such that P_new minimizes min_β Q(P_new β). (Performing a full search over all possible n + 1 out of m indices would be too costly.) This greedy approach to finding a sparse approximate solution is described in Algorithm 1. The algorithm also maintains an approximate minimum of Q*, and exploits the bounds of Theorem 1 to determine when the approximation is sufficiently accurate. (Note that we leave unspecified how the subsets M ⊆ I, M* ⊆ I* are chosen. Assume for now that we choose M = I, M* = I*, the full sets of indices that have not yet been selected.) This method is very similar to Matching Pursuit [4] or iterative reduced set Support Vector algorithms [6], with the difference that the target to be approximated (the full solution α) is only given implicitly via Q(α).
Approximation Quality  Natarajan [5] studies the following Sparse Linear Approximation problem: given A ∈ ℝ^{m×n}, b ∈ ℝᵐ, ε > 0, find x ∈ ℝⁿ with a minimal number of nonzero entries such that ‖Ax − b‖² ≤ ε. If we define A := (σ²K + KᵀK)^{1/2} and b := A⁻¹Ky, then we may write Q(α) = ½‖b − Aα‖² + c, where c is a constant independent of α. Thus the problem of sparse approximate minimization of Q(α) is a special case of Natarajan's problem (where the matrix A is square, symmetric, and positive definite). In addition, the algorithm considered by Natarajan in [5] involves sequentially choosing columns of A to maximally decrease ‖Ax − b‖. This is clearly equivalent to the sparse greedy algorithm described above. Hence, it is straightforward to obtain the following result from Theorem 2 in [5].

Theorem 2 (Approximation Rate)  Algorithm 1 achieves Q(α) ≤ Q(α_opt) + ε when α has

n ≤ (18 n*(ε/4) / λ₁²) ln(‖A⁻¹Ky‖ / ε)

non-zero components, where n*(ε/4) is the minimal number of nonzero components in vectors α for which Q(α) ≤ Q(α_opt) + ε/4, A = (σ²K + KᵀK)^{1/2}, and λ₁ is the minimum of the magnitudes of the singular values of Ā, the matrix obtained by normalizing the columns of A.

Randomized Algorithms for Subset Selection  Unfortunately, the approximation algorithm considered above is still too expensive for large m, since each search operation involves O(m) indices. Yet, if we are satisfied with finding a relatively good index rather than the best, we may resort to selecting a random subset of size κ ≪ m. In Algorithm 1, this corresponds to choosing M ⊆ I, M* ⊆ I* as random subsets of size κ. In fact, a constant value of κ will typically suffice. To see why, we recall a simple lemma from [7]: the cumulative distribution function of the maximum of m i.i.d. random variables ξ₁, ..., ξ_m is F(·)ᵐ, where F(·) is the cdf of the ξ_i.
Thus, in order to find a column to add to P that is with probability 0.95 among the best 0.05 of all such columns, a random subsample of size ⌈log 0.05 / log 0.95⌉ = 59 will suffice.

Algorithm 1  Sparse Greedy Quadratic Minimization.
Require: Training data X = {x_1, ..., x_m}, targets y, noise σ², precision ε
Initialize index sets I, I* = {1, ..., m}; S, S* = ∅.
repeat
  Choose M ⊆ I, M* ⊆ I*.
  Find argmin_{i ∈ M} Q([P, e_i] β_opt) and argmin_{i* ∈ M*} Q*([P*, e_{i*}] β*_opt).
  Move i from I to S, and i* from I* to S*.
  Set P := [P, e_i], P* := [P*, e_{i*}].
until Q(Pβ_opt) + σ²Q*(P*β*_opt) + ½‖y‖² ≤ (ε/2) (|Q(Pβ_opt)| + |σ²Q*(P*β*_opt) + ½‖y‖²|)
Output: Set of indices S, β_opt, (PᵀKP)⁻¹, and (Pᵀ(KᵀK + σ²K)P)⁻¹.

Numerical Considerations  The crucial part is to obtain the values of Q(Pβ_opt) cheaply (with P = [P_old, e_i]), provided we have solved the problem for P_old. From (12) one can see that all that needs to be done is a rank-1 update on the inverse. In the following we show that this can be obtained in O(mn) operations, provided the inverse of the smaller subsystem is known. Expressing the relevant terms using P_old and k_i (the i-th column of K), we obtain

PᵀKᵀy = [P_old, e_i]ᵀKᵀy = (P_oldᵀKᵀy, k_iᵀy)

Pᵀ(KᵀK + σ²K)P = [ P_oldᵀ(KᵀK + σ²K)P_old    P_oldᵀ(Kᵀ + σ²I)k_i ]
                 [ k_iᵀ(K + σ²I)P_old        k_iᵀk_i + σ²K_ii    ]

Thus computation of these terms costs only O(nm), given the values for P_old. Furthermore, it is easy to verify that we can write the inverse of a symmetric positive semidefinite matrix as

[ A  B ]⁻¹   [ A⁻¹ + A⁻¹Bγ BᵀA⁻¹    −A⁻¹Bγ ]
[ Bᵀ C ]   = [ −γ BᵀA⁻¹             γ      ]    (13)

where γ := (C − BᵀA⁻¹B)⁻¹. Hence, inversion of Pᵀ(KᵀK + σ²K)P costs only O(n²). Thus, to find P of size m × n takes O(κn²m) time. For the error bars, (PᵀKP)⁻¹ will generally be a good starting value for the minimization of (10), so the typical cost of (10) will be O(τmn) for some τ < n, rather than O(mn²). Finally, for added numerical stability one may want to use an incremental Cholesky factorization in (13) instead of the inverse of a matrix.
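Putting sections 3 and 4 together, here is a compact sketch of the selection loop (our simplification of Algorithm 1: it greedily minimizes Q only, re-solves the small linear system at each step instead of performing the rank-one updates of (13), and draws random candidate subsets of size κ = 59):

```python
import numpy as np

def sparse_greedy_gp(K, y, sigma2, n_max, kappa=59, seed=0):
    """Greedy selection of kernel columns; returns (indices, beta, attained Q)."""
    rng = np.random.default_rng(seed)
    m = len(y)
    M = sigma2 * K + K @ K              # quadratic form of eq. (4)
    b = K @ y
    def q_of(idx):
        A = M[np.ix_(idx, idx)]
        beta = np.linalg.solve(A, b[idx])
        return -0.5 * b[idx] @ beta, beta    # Q(P beta_opt) at the optimum
    S, qS = [], 0.0
    for _ in range(n_max):
        pool = [i for i in range(m) if i not in S]
        cand = rng.choice(pool, size=min(kappa, len(pool)), replace=False)
        qS, best = min((q_of(S + [int(i)])[0], int(i)) for i in cand)
        S.append(best)
    return S, q_of(S)[1], qS
```

With a fixed seed, running for n and n' > n steps selects the same first n indices, so the attained Q is monotonically non-increasing in n; it is bounded below by Q(α_opt), as quantified by Theorem 2.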
5 Experiments and Discussion

We used the Abalone dataset from the UCI Repository to investigate the properties of the algorithm. The dataset is of size 4177, split into a 4000 training and 177 testing split to analyze the numerical performance, and a (3000, 1177) split to assess the generalization error (the latter was needed in order to be able to invert, and keep in memory, the full matrix K + σ²I for a comparison). The data was rescaled to zero mean and unit variance coordinate-wise. Finally, the gender encoding in Abalone (male/female/infant) was mapped into {(1,0,0), (0,1,0), (0,0,1)}.

In all our experiments we used Gaussian kernels k(x, x') = exp(−||x − x'||² / (2ω²)) as covariance kernels. Figure 1 analyzes the speed of convergence for different κ.

Figure 1: Speed of Convergence. We plot the size of the gap between upper and lower bound of the log posterior (gap(α, α*)) over the number of iterations, for the first 4000 samples from the Abalone dataset (σ² = 0.1 and 2ω² = 10). From top to bottom: subsets of size 1, 2, 5, 10, 20, 50, 100, 200. The results were averaged over 10 runs. The relative variance of the gap size was less than 10%.

One can see that subsets of size 50 and above ensure rapid convergence. For the optimal parameters (σ² = 0.1 and 2ω² = 10, chosen after [7]), the average test error of the sparse greedy approximation trained until gap(α, α*) < 0.025 on a (3000, 1177) split was 1.785 ± 0.32, slightly worse than for the GP estimate (1.782 ± 0.33); the results were averaged over ten independent choices of training sets. The log posterior was −1.572·10⁵(1 ± 0.005), the optimal value −1.571·10⁵(1 ± 0.005). Hence, for all practical purposes, full inversion of the covariance matrix and the sparse greedy approximation have statistically indistinguishable generalization performance.
In a third experiment (Table 2) we analyzed the number of basis functions needed to minimize the log posterior to gap(α, α*) < 0.025, depending on different choices of the kernel width ω. In all cases, less than 10% of the kernel functions suffice to find a good minimizer of the log posterior; for the error bars, even less than 2% are sufficient. This is a dramatic improvement over previous techniques.

Kernel width 2ω²            1       2       5       10      20      50
Kernels for log-posterior   373     287     255     257     251     270
Kernels for error bars      79±61   49±43   26±27   17±16   12±9    8±5

Table 2: Number of basis functions needed to minimize the log posterior on the Abalone dataset (4000 training samples), depending on the width ω of the kernel. Also shown is the number of basis functions required to approximate kᵀ(K + σ²I)⁻¹k, which is needed to compute the error bars; here we averaged over the remaining 177 test samples.

To ensure that our results were not dataset specific, and that the algorithm scales well, we tested it on a larger synthetic dataset of size 10000 in 20 dimensions, distributed according to N(0, 1). The data was generated by adding normal noise with variance σ² = 0.1 to a function consisting of 200 randomly chosen Gaussians of width 2ω² = 40 and normally distributed coefficients and centers. We purposely chose an inadequate Gaussian process prior (but correct noise level) of Gaussians with width 2ω² = 10 in order to avoid trivial sparse expansions. After 500 iterations (i.e. after using 5% of all basis functions) the size of gap(α, α*) was less than 0.023 (note that this problem is too large to be solved exactly).

We believe that sparse greedy approximation methods are a key technique to scale up Gaussian process regression to sample sizes of 10000 and beyond. The techniques presented in the paper, however, are by no means limited to regression. Work on the solutions of dense quadratic programs and classification problems is in progress.

The authors thank Bob Williamson and Bernhard Schölkopf.
References

[1] S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representation. Technical report, IBM Watson Research Center, New York, 2000.
[2] M. Gibbs and D. J. C. MacKay. Efficient implementation of Gaussian processes. Technical report, Cavendish Laboratory, Cambridge, UK, 1997.
[3] F. Girosi. An equivalence between sparse approximation and support vector machines. Neural Computation, 10(6):1455-1480, 1998.
[4] S. Mallat and Z. Zhang. Matching pursuit in a time-frequency dictionary. IEEE Transactions on Signal Processing, 41:3397-3415, 1993.
[5] B. K. Natarajan. Sparse approximate solutions to linear systems. SIAM Journal on Computing, 25(2):227-234, 1995.
[6] B. Schölkopf, S. Mika, C. Burges, P. Knirsch, K.-R. Müller, G. Rätsch, and A. Smola. Input space vs. feature space in kernel-based methods. IEEE Transactions on Neural Networks, 10(5):1000-1017, 1999.
[7] A. J. Smola and B. Schölkopf. Sparse greedy matrix approximation for machine learning. In P. Langley, editor, Proceedings of the 17th International Conference on Machine Learning, pages 911-918, San Francisco, 2000. Morgan Kaufmann.
[8] C. K. I. Williams and M. Seeger. The effect of the input density distribution on kernel-based classifiers. In P. Langley, editor, Proceedings of the 17th International Conference on Machine Learning, pages 1159-1166, San Francisco, 2000. Morgan Kaufmann.
Large Scale Bayes Point Machines

Ralf Herbrich
Statistics Research Group, Computer Science Department
Technical University of Berlin
ralfh@cs.tu-berlin.de

Thore Graepel
Statistics Research Group, Computer Science Department
Technical University of Berlin
guru@cs.tu-berlin.de

Abstract

The concept of averaging over classifiers is fundamental to the Bayesian analysis of learning. Based on this viewpoint, it has recently been demonstrated for linear classifiers that the centre of mass of version space (the set of all classifiers consistent with the training set), also known as the Bayes point, exhibits excellent generalisation abilities. However, the billiard algorithm as presented in [4] is restricted to small sample sizes because it requires O(m²) memory and O(N·m²) computational steps, where m is the number of training patterns and N is the number of random draws from the posterior distribution. In this paper we present a method based on the simple perceptron learning algorithm which allows us to overcome this algorithmic drawback. The method is algorithmically simple and is easily extended to the multi-class case. We present experimental results on the MNIST data set of handwritten digits which show that Bayes point machines (BPMs) are competitive with the current world champion, the support vector machine. In addition, the computational complexity of BPMs can be tuned by varying the number of samples from the posterior. Finally, rejecting test points on the basis of their (approximate) posterior probability leads to a rapid decrease in generalisation error, e.g. 0.1% generalisation error for a given rejection rate of 10%.

1 Introduction

Kernel machines have recently gained a lot of attention due to the popularisation of the support vector machine (SVM) [13], with a focus on classification, and the revival of Gaussian processes (GPs) for regression [15].
Subsequently, SVMs have been modified to handle regression [12] and GPs have been adapted to the problem of classification [8]. Both schemes essentially work in the same function space, characterised by kernels (SVM) and covariance functions (GP), respectively. While the formal similarity of the two methods is striking, the underlying paradigms of inference are very different. The SVM was inspired by results from statistical/PAC learning theory, while GPs are usually considered in a Bayesian framework. This ideological clash can be viewed as a continuation, within machine learning, of the by now classical disagreement between Bayesian and frequentist statistics. With regard to algorithmics, the two schools of thought appear to favour two different methods of learning and predicting: the SVM community, as a consequence of the formulation of the SVM as a quadratic programming problem, focuses on learning as optimisation, while the Bayesian community favours sampling schemes based on the Bayesian posterior. Of course there exists a strong relationship between the two ideas, in particular with the Bayesian maximum a posteriori (MAP) estimator being the solution of an optimisation problem. Interestingly, the two viewpoints have recently been reconciled theoretically in the so-called PAC-Bayesian framework [5], which combines the idea of a Bayesian prior with PAC-style performance guarantees and has been the basis of the so far tightest margin bound for SVMs [3]. In practice, optimisation based algorithms have the advantage of a unique, deterministic solution and the availability of the cost function as an indicator for the quality of the solution. In contrast, Bayesian algorithms based on sampling and voting are more flexible and have the so-called "anytime" property, providing a relatively good solution at any point in time. Often, however, they suffer from the computational costs of sampling the Bayesian posterior.
In this contribution we review the idea of the Bayes point machine (BPM) as an approximation to Bayesian inference for linear classifiers in kernel space in Section 2. In contrast to the GP viewpoint we do not define a Gaussian prior on the length ||w||_K of the weight vector. Instead, we only consider weight vectors of length ||w||_K = 1, because it is only the spatial direction of the weight vector that matters for classification. It is then natural to define a uniform prior on the resulting ball-shaped hypothesis space. Hence, we determine the centre of mass ("Bayes point") of the resulting posterior that is uniform in version space, i.e. in the zero training error region. While the version space could be sampled using some form of Gibbs sampling (see, e.g., [6] for an overview) or an ergodic dynamic system such as a billiard [4], we suggest using the perceptron algorithm trained on permutations of the training set for sampling in Section 3. This extremely simple sampling scheme proves to be efficient enough to make the BPM applicable to large data sets. We demonstrate this fact in Section 4 on the well-known MNIST data set containing 60 000 samples of handwritten digits, and show how an approximation to the posterior probability of classification provided by the BPM can even be used for test-point rejection, leading to a great reduction in generalisation error on the remaining samples.

We denote n-tuples by italic bold letters (e.g. x = (x₁, …, x_n)), vectors by roman bold letters (e.g. x), random variables by sans serif font (e.g. X) and vector spaces by calligraphic capitalised letters (e.g. X). The symbols P, E and I denote a probability measure, the expectation of a random variable and the indicator function, respectively.

2 Bayes Point Machines

Let us consider the task of classifying patterns x ∈ X into one of the two classes y ∈ Y = {−1, +1} using functions h : X → Y from a given set H known as the hypothesis space.
In this paper we shall only be concerned with linear classifiers:

H = {x ↦ sign(⟨φ(x), w⟩_K) | w ∈ W},   W = {w ∈ K : ||w||_K = 1},   (1)

where φ : X → K is known¹ as the feature map and has to be fixed beforehand. If all that is needed for learning and classification are the inner products ⟨·, ·⟩_K in the feature space K, it is convenient to specify φ only by its inner product function k : X × X → ℝ, known as the kernel, i.e. for all x, x' ∈ X: k(x, x') = ⟨φ(x), φ(x')⟩_K.

¹For notational convenience we shall abbreviate φ(x) by x. This should not be confused with the set x of training points.

For simplicity, let us assume that there exists a classifier² w* ∈ W that labels all our data, i.e.

P_{Y|X=x,W=w*}(y) = I_{h_{w*}(x)=y}.   (2)

This assumption can easily be relaxed by introducing slack variables, as done in the soft margin variant of the SVM. Then, given a training set z = (x, y) of m points x_i together with their classes y_i assigned by h_{w*}, drawn iid from an unknown data distribution P_Z = P_{Y|X} P_X, we can assume the existence of a version space V(z), i.e. the set of all classifiers w ∈ W consistent with z:

V(z) = {w ∈ W : for all i ∈ {1, …, m}, y_i ⟨x_i, w⟩_K > 0}.   (3)

In a Bayesian spirit we incorporate all of our prior knowledge about w* into a prior distribution P_W over W. In the absence of any a priori knowledge we suggest a uniform prior over the spatial direction of weight vectors w. Now, given the training set z, we update our prior belief by Bayes' formula, i.e.

P_{W|Z^m=z}(w) = P_{Z^m|W=w}(z) P_W(w) / E_W[P_{Z^m|W=w}(z)]
             = (∏_{i=1}^m P_{Y|X=x_i,W=w}(y_i)) P_W(w) / E_W[∏_{i=1}^m P_{Y|X=x_i,W=w}(y_i)]
             = P_W(w) / P_W(V(z)) if w ∈ V(z), and 0 otherwise,

where the first line follows from the independence and the fact that x has no dependence on w, and the second line follows from (2) and (3).
The Bayesian classification of a novel test point x is then given by

Bayes_z(x) = argmax_{y∈Y} P_{W|Z^m=z}({h_w(x) = y}) = sign(E_{W|Z^m=z}[h_w(x)]) = sign(E_{W|Z^m=z}[sign(⟨x, w⟩_K)]).

Unfortunately, the strategy Bayes_z is in general not contained in the set H of classifiers considered beforehand. Since P_{W|Z^m=z} is only non-zero inside version space, it has been suggested to use the centre of mass w_cm as an approximation for Bayes_z, i.e.

h_cm(x) = sign(E_{W|Z^m=z}[⟨x, W⟩_K]) = sign(⟨x, w_cm⟩_K),   w_cm = E_{W|Z^m=z}[W].   (4)

This classifier is called the Bayes point. In a previous work [4] we calculated w_cm using a first order Markov chain based on a billiard-like algorithm (see also [10]). We entered the version space V(z) using a perceptron algorithm and started playing billiards in version space V(z), thus creating a sequence of pseudo-random samples w_i due to the chaotic nature of the billiard dynamics. Playing billiards in V(z) is possible because each training point (x_i, y_i) ∈ z defines a hyperplane {w ∈ W | y_i ⟨x_i, w⟩_K = 0} ⊆ W. Hence, the version space is a convex polyhedron on the surface of W. After N bounces of the billiard ball the Bayes point was estimated by

ŵ_cm = (1/N) Σ_{i=1}^N w_i.

²We synonymously call h ∈ H and w ∈ W a classifier because there is a one-to-one correspondence between the two by virtue of (1).

Although this algorithm shows excellent generalisation performance when compared to state-of-the-art learning algorithms like support vector machines (SVMs) [13], its effort scales like O(m²) and O(N·m²) in terms of memory and computational requirements, respectively.

3 Sampling the Version Space

Clearly, all we need for estimating the Bayes point (4) is a set of classifiers w drawn uniformly from V(z). In order to save computational resources it might be advantageous to achieve a uniform sample only approximately. The classical perceptron learning algorithm offers the possibility to obtain up to m!
different classifiers in version space, simply by learning on different permutations of the training set. Given a permutation Π : {1, …, m} → {1, …, m}, the perceptron algorithm works as follows:

1. Start with w₀ = 0 and t = 0.
2. For all i ∈ {1, …, m}, if y_{Π(i)} ⟨x_{Π(i)}, w_t⟩_K ≤ 0, then w_{t+1} = w_t + y_{Π(i)} x_{Π(i)} and t ← t + 1.
3. Stop, if for all i ∈ {1, …, m}, y_{Π(i)} ⟨x_{Π(i)}, w_t⟩_K > 0.

A classical theorem due to Novikoff [7] guarantees the convergence of this procedure and furthermore provides an upper bound on the number t of mistakes needed until convergence. More precisely, if there exists a classifier w_SVM with margin

γ_z(w_SVM) = min_{(x_i,y_i)∈z} y_i ⟨x_i, w_SVM⟩_K / ||w_SVM||_K,

then the number of mistakes until convergence, which is an upper bound on the sparsity of the solution, is not more than R²(x) γ_z⁻²(w_SVM), where R(x) is the smallest real number such that for all x ∈ x: ||φ(x)||_K ≤ R(x). The quantity γ_z(w_SVM) is maximised for the solution w_SVM found by the SVM, and whenever the SVM is theoretically justified by results from learning theory (see [11, 13]) the ratio d = R²(x) γ_z⁻²(w_SVM) is considerably less than m, say d ≪ m.

Algorithmically, we can benefit from this sparsity by the following "trick": since

w = Σ_{i=1}^m α_i x_i,

all we need to store is the m-dimensional vector α. Furthermore, we keep track of the m-dimensional vector o of real-valued outputs

o_i = y_i ⟨x_i, w_t⟩_K = y_i Σ_{j=1}^m α_j k(x_i, x_j)

of the current solution at the i-th training point. By definition, in the beginning α = o = 0. Now, if o_i ≤ 0, we update α_i by α_i + y_i and update o by o_j ← o_j + y_j y_i k(x_i, x_j), which requires only m kernel calculations. In summary, the memory requirement of this algorithm is 2m and the number of kernel calculations is not more than d·m. As a consequence, the computational requirement of this algorithm is no more than the computational requirement for the evaluation of the margin γ_z(w_SVM)!
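The α-based bookkeeping just described can be sketched as follows (plain Python with a generic kernel function; this is our illustrative reconstruction, not the authors' implementation):

```python
def kernel_perceptron(xs, ys, k, perm, max_epochs=100):
    # Stores only the expansion coefficients alpha and the cached outputs
    # o_i = y_i * sum_j alpha_j k(x_i, x_j); each mistake costs m kernel
    # evaluations, as described in the text.
    m = len(xs)
    alpha = [0.0] * m
    o = [0.0] * m
    for _ in range(max_epochs):
        mistakes = 0
        for i in perm:
            if o[i] <= 0.0:                    # margin violation at point i
                alpha[i] += ys[i]
                for j in range(m):             # update all cached outputs
                    o[j] += ys[j] * ys[i] * k(xs[i], xs[j])
                mistakes += 1
        if mistakes == 0:                      # solution is in version space
            return alpha
    raise ValueError("data not separated within max_epochs")

# Toy demo: four points separable through the origin (linear kernel).
dot = lambda u, v: sum(a * b for a, b in zip(u, v))
xs = [(1.0, 0.0), (2.0, 1.0), (-1.0, 0.5), (-2.0, -1.0)]
ys = [1, 1, -1, -1]
alpha = kernel_perceptron(xs, ys, dot, list(range(4)))
preds = [1 if sum(alpha[j] * dot(xs[j], x) for j in range(4)) > 0 else -1
         for x in xs]
```

On this toy set the returned expansion classifies all training points correctly, since the loop only terminates once every cached output o_i is positive.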
We suggest using this efficient perceptron learning algorithm in order to obtain samples w_i for the computation of the Bayes point by (4).

Figure 1: (a) Histogram of generalisation errors (estimated on a test set) using a kernel Gibbs sampler. (b) Histogram of generalisation errors (estimated on a test set) using a kernel perceptron. (c) QQ plot of distributions (a) and (b). The straight line indicates that both distributions are very similar.

In order to investigate the usefulness of this approach experimentally, we compared the distribution of generalisation errors of samples obtained by perceptron learning on permuted training sets (as suggested earlier by [14]) with samples obtained by a full Gibbs sampling [2]. For computational reasons, we used only 188 training patterns and 453 test patterns of the classes "1" and "2" from the MNIST data set³. In Figure 1 (a) and (b) we plotted the distribution over 1000 random samples using the kernel⁴

k(x, x') = (⟨x, x'⟩ + 1)⁵.   (5)

Using a quantile-quantile (QQ) plot technique we can compare both distributions in one graph (see Figure 1 (c)). These plots suggest that by simple permutation of the training set we are able to obtain a sample of classifiers exhibiting the same generalisation error distribution as with time-consuming Gibbs sampling.

4 Experimental Results

In our large scale experiment we used the full MNIST data set with 60000 training examples and 10000 test examples of 28 × 28 grey value images of handwritten digits. As input vector x we used the 784-dimensional vector of grey values. The images were labelled by one of the ten classes "0" to "9". For each of the ten classes y = {0, …, 9} we ran the perceptron algorithm N = 10 times, each time labelling all training points of class y by +1 and the remaining training points by −1. On an Ultra Sparc 10 each learning trial took approximately 20-30 minutes.
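For a primal-space intuition of this sampling scheme, the following sketch (our own toy illustration in plain Python, not the experimental code) draws version-space samples by running the classical perceptron on random permutations and averages them into an estimate of the Bayes point as in (4):

```python
import random

def perceptron(xs, ys, perm, max_epochs=100):
    # Classical perceptron run on one fixed permutation of the data;
    # returns a unit-length consistent weight vector (a version-space sample).
    d = len(xs[0])
    w = [0.0] * d
    for _ in range(max_epochs):
        mistakes = 0
        for i in perm:
            if ys[i] * sum(w[a] * xs[i][a] for a in range(d)) <= 0.0:
                for a in range(d):
                    w[a] += ys[i] * xs[i][a]
                mistakes += 1
        if mistakes == 0:
            norm = sum(v * v for v in w) ** 0.5
            return [v / norm for v in w]
    raise ValueError("data not separated within max_epochs")

def bayes_point(xs, ys, n_samples=10, seed=0):
    # Estimate w_cm by averaging samples obtained from permuted runs.
    rng = random.Random(seed)
    d = len(xs[0])
    w_cm = [0.0] * d
    for _ in range(n_samples):
        perm = list(range(len(xs)))
        rng.shuffle(perm)
        w = perceptron(xs, ys, perm)
        w_cm = [u + v / n_samples for u, v in zip(w_cm, w)]
    return w_cm

xs = [(1.0, 0.0), (2.0, 1.0), (-1.0, 0.5), (-2.0, -1.0)]
ys = [1, 1, -1, -1]
w_bp = bayes_point(xs, ys)
preds = [1 if sum(w * a for w, a in zip(w_bp, x)) > 0 else -1 for x in xs]
```

Because version space is convex, every sampled vector classifies the training set correctly, and so does their average.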
For the classification of a test image x we calculated the real-valued output of all 100 different classifiers⁵ by

f_i(x) = Σ_{j=1}^m (α_i)_j k(x_j, x),

where we used the kernel k given by (5); (α_i)_j refers to the expansion coefficient corresponding to the i-th classifier and the j-th data point.

³Available at http://www.research.att.com/~yann/ocr/mnist/.
⁴We decided to use this kernel because it showed excellent generalisation performance when using the support vector machine.
⁵For notational simplicity we assume that the first N classifiers are classifiers for the class "0", the next N for class "1", and so on.

rejection rate        0%     1%     2%     3%     4%     5%     6%     7%     8%     9%     10%
generalisation error  1.46%  1.10%  0.87%  0.67%  0.49%  0.37%  0.32%  0.26%  0.21%  0.14%  0.11%

Figure 2: Generalisation error as a function of the rejection rate for the MNIST data set. The SVM achieved 1.4% without rejection, as compared to 1.46% for the BPM. Note that by rejection based on the real-valued output the generalisation error could be reduced to 0.1%, indicating that this measure is related to the probability of misclassification of single test points.

Now, for each of the ten classes we calculated the real-valued decision of the Bayes point w_y by

f_{bp,y}(x) = (1/N) Σ_{i=1}^N f_{i+yN}(x).

In a Bayesian spirit, the final decision was carried out by

h_bp(x) = argmax_{y∈{0,…,9}} f_{bp,y}(x).

Note that f_{bp,y}(x) [9] can be interpreted as an (unnormalised) approximation of the posterior probability that x is of class y, when restricted to the function class (1). In order to test the dependence of the generalisation error on the magnitude max_y f_{bp,y}(x), we fixed a certain rejection rate r ∈ [0, 1] and rejected the set of r·10000 test points with the smallest value of max_y f_{bp,y}(x). The resulting plot is depicted in Figure 2. As can be seen from this plot, even without rejection the Bayes point has excellent generalisation performance⁶.
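The rejection step itself amounts to sorting test points by confidence and discarding the lowest fraction. A minimal sketch (our illustration; the confidence values here are made-up stand-ins for max_y f_{bp,y}(x)):

```python
def reject(confidences, r):
    # Indices of the r-fraction of test points with the smallest
    # confidence value; these are rejected rather than classified.
    m = len(confidences)
    n_reject = int(r * m)
    order = sorted(range(m), key=lambda i: confidences[i])
    return set(order[:n_reject])

# Five test points with made-up confidence values; rejecting 40%
# drops the two least confident ones.
conf = [0.9, 0.1, 0.5, 0.8, 0.2]
rejected = reject(conf, 0.4)
```

The generalisation error is then evaluated only on the points that survive the rejection.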
Furthermore, rejection based on the real-valued output f_bp(x) turns out to be excellent, reducing the generalisation error to 0.1%. One should also bear in mind that the learning time for this simple algorithm was comparable to that of SVMs. A very advantageous feature of our approach as compared to SVMs is its adjustable time and memory requirements and the "anytime" availability of a solution due to sampling. If the training set grows further and we are not able to spend more time on learning, we can adjust the number N of samples used, at the price of a slightly worse generalisation error.

5 Conclusion

In this paper we have presented an algorithm for approximating the Bayes point by rerunning the classical perceptron algorithm with a permuted training set. Here we particularly exploited the sparseness of the solution, which must exist whenever the success of the SVM is theoretically justified. The restriction to the zero training error case can be overcome by modifying the kernel as

k_λ(x, x') = k(x, x') + λ·I_{x=x'}.

This technique is well known and was already suggested by Vapnik in 1995 (see [1]). Another interesting question raised by our experimental findings is the following: by how much is the distribution of generalisation errors over random samples from version space related to the distribution of generalisation errors of the up to m! different classifiers found by the classical perceptron algorithm?

⁶Note that the best known result on this data set is 1.1%, achieved with a polynomial kernel of degree four. Nonetheless, for reasons of fairness we compared the results of both algorithms using the same kernel.

Acknowledgements

We would like to thank Bob Williamson for helpful discussions and suggestions on earlier drafts. Parts of this work were done during a research stay of both authors at the ANU Canberra.

References

[1] C. Cortes and V. Vapnik. Support vector networks. Machine Learning, 20:273-297, 1995.
[2] T. Graepel and R. Herbrich.
The kernel Gibbs sampler. In Advances in Neural Information Processing Systems 13, 2001.
[3] R. Herbrich and T. Graepel. A PAC-Bayesian margin bound for linear classifiers: Why SVMs work. In Advances in Neural Information Processing Systems 13, 2001.
[4] R. Herbrich, T. Graepel, and C. Campbell. Robust Bayes point machines. In Proceedings of ESANN 2000, pages 49-54, 2000.
[5] D. A. McAllester. Some PAC-Bayesian theorems. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 230-234, Madison, Wisconsin, 1998.
[6] R. M. Neal. Markov chain Monte Carlo methods based on 'slicing' the density function. Technical report TR-9722, Department of Statistics, University of Toronto, 1997.
[7] A. Novikoff. On convergence proofs for perceptrons. In Report at the Symposium on Mathematical Theory of Automata, pages 24-26, Polytechnical Institute Brooklyn, 1962.
[8] M. Opper and O. Winther. Gaussian processes for classification: Mean field algorithms. Neural Computation, 12(11), 2000.
[9] J. Platt. Probabilities for SV machines. In Advances in Large Margin Classifiers, pages 61-74. MIT Press, 2000.
[10] P. Ruján and M. Marchand. Computing the Bayes kernel classifier. In Advances in Large Margin Classifiers, pages 329-348. MIT Press, 2000.
[11] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44(5):1926-1940, 1998.
[12] A. J. Smola. Learning with Kernels. PhD thesis, Technische Universität Berlin, 1998.
[13] V. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
[14] T. Watkin. Optimal learning with a neural network. Europhysics Letters, 21:871-877, 1993.
[15] C. Williams. Prediction with Gaussian processes: From linear regression to linear prediction and beyond. Technical report NCRG/97/012, Neural Computing Research Group, Aston University, 1997.
Efficient Learning of Linear Perceptrons

Shai Ben-David
Department of Computer Science
Technion
Haifa 32000, Israel
shai@cs.technion.ac.il

Hans Ulrich Simon
Fakultät für Mathematik
Ruhr-Universität Bochum
D-44780 Bochum, Germany
simon@lmi.ruhr-uni-bochum.de

Abstract

We consider the existence of efficient algorithms for learning the class of half-spaces in ℝⁿ in the agnostic learning model (i.e., making no prior assumptions on the example-generating distribution). The resulting combinatorial problem, finding the best agreement half-space over an input sample, is NP-hard to approximate to within some constant factor. We suggest a way to circumvent this theoretical bound by introducing a new measure of success for such algorithms. An algorithm is μ-margin successful if the agreement ratio of the half-space it outputs is as good as that of any half-space once training points that are inside the μ-margins of its separating hyper-plane are disregarded. We prove crisp computational complexity results with respect to this success measure: on one hand, for every positive μ, there exist efficient (poly-time) μ-margin successful learning algorithms. On the other hand, we prove that unless P=NP, there is no algorithm that runs in time polynomial in the sample size and in 1/μ that is μ-margin successful for all μ > 0.

1 Introduction

We consider the computational complexity of learning linear perceptrons for arbitrary (i.e. non-separable) data sets. While there are quite a few perceptron learning algorithms that are computationally efficient on separable input samples, it is clear that 'real-life' data sets are usually not linearly separable. The task of finding a linear perceptron (i.e. a half-space) that maximizes the number of correctly classified points for an arbitrary input labeled sample is known to be NP-hard. Furthermore, even the task of finding a half-space whose success rate on the sample is within some constant ratio of an optimal one is NP-hard [1].
A possible way around this problem is offered by the support vector machines paradigm (SVM). In a nutshell, the SVM idea is to replace the search for a linear separator in the feature space of the input sample by first embedding the sample into a Euclidean space of much higher dimension, so that the images of the sample points do become separable, and then applying learning algorithms to the image of the original sample. The SVM paradigm enjoys an impressive practical success; however, it can be shown ([3]) that there are cases in which such embeddings are bound to require high dimension and allow only small margins, which in turn entails the collapse of the known generalization performance guarantees for such learning.

We take a different approach. While sticking with the basic empirical risk minimization principle, we propose to replace the worst-case performance analysis by an alternative measure of success. While the common definition of the approximation ratio of an algorithm requires the profit of the algorithm to remain within some fixed ratio of that of an optimal solution for all inputs, we allow the relative quality of our algorithm to vary between different inputs. For a given input sample, the number of points that the algorithm's output half-space should classify correctly relates not only to the success rate of the best possible half-space, but also to the robustness of this rate to perturbations of the hyper-plane. This new success requirement is intended to provide a formal measure that, while being achievable by efficient algorithms, retains a guaranteed quality of the output 'whenever possible'.

The new success measure depends on a margin parameter μ. An algorithm is called μ-margin successful if, for any input labeled sample, it outputs a hypothesis half-space that classifies correctly as many sample points as any half-space can classify correctly with margin μ (that is, discounting points that are too close to the separating hyper-plane).
Consequently, a μ-margin successful algorithm is required to output a hypothesis with close-to-optimal performance on the input data (optimal in terms of the number of correctly classified sample points) whenever this input sample has an optimal separating hyper-plane that achieves larger-than-μ margins for most of the points it classifies correctly. On the other hand, if for every hyper-plane h that achieves a close-to-maximal number of correctly classified input points, a large percentage of the correctly classified points are close to h's boundary, then an algorithm can settle for a relatively poor success ratio without violating the μ-margin success criterion.

We obtain a crisp analysis of the computational complexity of perceptron learning under the μ-margin success requirement: on one hand, for every μ > 0 we present an efficient μ-margin successful learning algorithm (that is, an algorithm that runs in time polynomial in both the input dimension and the sample size). On the other hand, unless P=NP, no algorithm whose running time is polynomial in the sample size and dimension and in 1/μ can be μ-margin successful for all μ > 0. Note that, by the hardness of approximating linear perceptrons result of [1] cited above, for μ = 0, μ-margin learning is NP-hard (even NP-hard to approximate). We conclude that the new success criterion for learning algorithms provides a rigorous success guarantee that captures the constraints imposed on perceptron learning by computational efficiency requirements.

It is well known by now that margins play an important role in the analysis of generalization performance (or sample complexity). The results of this work demonstrate that a similar notion of margins is a significant component in the determination of the computational complexity of learning as well. Due to lack of space, in this extended abstract we skip all the technical proofs.
2 Definitions and Notation

We shall be interested in the problem of finding a half-space that maximizes the agreement with a given labeled input data set. More formally:

Best Separating Hyper-plane (BSH). Inputs are of the form (n, S), where n ≥ 1 and S = {(x₁, η₁), …, (x_m, η_m)} is a finite labeled sample, that is, each x_i is a point in ℝⁿ and each η_i is a member of {+1, −1}. A hyper-plane h(w, t), where w ∈ ℝⁿ and t ∈ ℝ, correctly classifies (x, η) if sign(⟨w, x⟩ − t) = η, where ⟨w, x⟩ denotes the dot product of the vectors w and x. We define the profit of h = h(w, t) on S as

profit(h|S) = |{(x_i, η_i) : h correctly classifies (x_i, η_i)}| / |S|.

The goal of a Best Separating Hyper-plane algorithm is to find a pair (w, t) so that profit(h(w, t)|S) is as large as possible. In the sequel, we refer to an input instance with parameter n as an n-dimensional input. On top of the Best Separating Hyper-plane problem we shall also refer to the following combinatorial optimization problems:

Best Separating Homogeneous Hyper-plane (BSHH). The same problem as BSH, except that the separating hyper-plane must be homogeneous, that is, t must be set to zero. The restriction of BSHH to input points from Sⁿ⁻¹, the unit sphere in ℝⁿ, is called the Best Separating Hemisphere Problem (BSHem) in the sequel.

Densest Hemisphere (DHem). Inputs are of the form (n, P), where n ≥ 1 and P is a list of (not necessarily different) points from Sⁿ⁻¹, the unit sphere in ℝⁿ. The problem is to find the Densest Hemisphere for P, that is, a weight vector w ∈ ℝⁿ such that H⁺(w, 0) contains as many points from P as possible (accounting for their multiplicity in P).

Densest Open Ball (DOB). Inputs are of the form (n, P), where n ≥ 1 and P is a list of points from ℝⁿ. The problem is to find the Densest Open Ball of radius 1 for P, that is, a center z ∈ ℝⁿ such that B(z, 1) contains as many points from P as possible (accounting for their multiplicity in P).
For the sake of our proofs, we shall also have to address the following well-studied optimization problem:

MAX-E2-SAT Inputs are of the form (n, C), where n ≥ 1 and C is a collection of 2-clauses over n Boolean variables. The problem is to find an assignment a ∈ {0, 1}ⁿ satisfying as many 2-clauses of C as possible.

More generally, a maximization problem defines for each input instance I a set of legal solutions, and for each (instance, legal-solution) pair (I, σ), it defines profit(I, σ) ∈ ℝ₊, the profit of σ on I. For each maximization problem Π and each input instance I for Π, opt_Π(I) denotes the maximum profit that can be realized by a legal solution for I. The subscript Π is omitted when this does not cause confusion. The profit realized by an algorithm A on input instance I is denoted by A(I). The quantity

(opt(I) − A(I)) / opt(I)

is called the relative error of algorithm A on input instance I. A is called a δ-approximation algorithm for Π, where δ ∈ ℝ₊, if its relative error on I is at most δ for all input instances I.

2.1 The new notion of approximate optimization: μ-margin approximation

As mentioned in the introduction, we shall discuss a variant of the above common notion of approximation for the best separating hyper-plane problem (as well as for the other geometric maximization problems listed above). The idea behind this new notion, which we term 'μ-margin approximation', is that the required approximation rate varies with the structure of the input sample. When there exist optimal solutions that are 'stable', in the sense that minor variations to these solutions will not affect their cost, then we require a high approximation ratio. On the other hand, when all optimal solutions are 'unstable' then we settle for lower approximation ratios. The following definitions focus on separation problems, but extend to densest set problems in the obvious way.

Definition 2.1 Given a hypothesis class H
= ∪ₙ Hₙ, where each Hₙ is a collection of subsets of ℝⁿ, and a parameter μ ≥ 0:

• A margin function is a function M : ∪ₙ (Hₙ × ℝⁿ) → ℝ₊. That is, given a hypothesis h ⊆ ℝⁿ and a point x ∈ ℝⁿ, M(h, x) is a non-negative real number, the margin of x w.r.t. h. In this work, in most cases M(h, x) is the Euclidean distance between x and the boundary of h, normalized by ‖x‖₂ and, for linear separators, by the 2-norm of the hyper-plane h as well.

• Given a finite labeled sample S and a hypothesis h ∈ Hₙ, the profit realized by h on S with margin μ is

profit(h|S, μ) = |{(xᵢ, ηᵢ) : h correctly classifies (xᵢ, ηᵢ) and M(h, xᵢ) ≥ μ}| / |S|.

• For a labeled sample S, let opt_μ(S) := max_{h ∈ H} profit(h|S, μ).

• h ∈ Hₙ is a μ-margin approximation for S w.r.t. H if profit(h|S) ≥ opt_μ(S).

• An algorithm A is μ-successful for H if for every finite n-dimensional input S it outputs A(S) ∈ Hₙ which is a μ-margin approximation for S w.r.t. H.

• Given any of the geometric maximization problems listed above, Π, its μ-relaxation is the problem of finding, for each input instance of Π, a μ-margin approximation. For a given parameter μ > 0, we denote the μ-relaxation of a problem Π by Π[μ].

3 Efficient μ-margin successful learning algorithms

Our hyper-plane learning algorithm is based on the following result of Ben-David, Eiron and Simon [2]:

Theorem 3.1 For every (constant) μ > 0, there exists a μ-margin successful polynomial time algorithm A_μ for the Densest Open Ball Problem.

We shall now show that the existence of a μ-successful algorithm for Densest Open Balls implies the existence of μ-successful algorithms for Densest Hemispheres and Best Separating Homogeneous Hyper-planes. Towards this end we need notions of reductions between combinatorial optimization problems. The first definition, of a cost-preserving polynomial reduction, is standard, whereas the second definition is tailored for our notion of μ-margin success.
Once this somewhat technical preliminary stage is over, we shall describe our learning algorithms and prove their performance guarantees.

Definition 3.2 Let Π and Π′ be two maximization problems. A cost preserving polynomial reduction from Π to Π′, written as Π ≤_pol^cp Π′, consists of the following components:

• a polynomial time computable mapping which maps input instances of Π to input instances of Π′, so that whenever I is mapped to I′, opt(I′) ≥ opt(I);

• for each I, a polynomial time computable mapping which maps each legal solution σ′ for I′ to a legal solution σ for I having the same profit as σ′.

The following result is evident:

Lemma 3.3 If Π ≤_pol^cp Π′ and there exists a polynomial time δ-approximation algorithm for Π′, then there exists a polynomial time δ-approximation algorithm for Π.

Claim 3.4 BSH ≤_pol^cp BSHH ≤_pol^cp BSHem ≤_pol^cp DHem.

Proof Sketch: By adding a coordinate one can translate hyper-planes to homogeneous hyper-planes (i.e., hyper-planes that pass through the origin). To get from the homogeneous hyper-plane separating problem to the best separating hemisphere problem, one applies the standard scaling trick. To get from there to the densest hemisphere problem, one applies the standard reflection trick.

We are interested in μ-relaxations of the above problems. We shall therefore introduce a slight modification of the definition of a cost-preserving reduction which makes it applicable to μ-relaxed problems.

Definition 3.5 Let Π and Π′ be two geometric maximization problems, and μ, μ′ > 0. A cost preserving polynomial reduction from Π[μ] to Π′[μ′], written as Π[μ] ≤_pol^cp Π′[μ′], consists of the following components:

• a polynomial time computable mapping which maps input instances of Π to input instances of Π′, so that whenever I is mapped to I′, opt_{μ′}(I′) ≥ opt_μ(I);
• for each I, a polynomial time computable mapping which maps each legal solution σ′ for I′ to a legal solution σ for I having the same profit as σ′.

The following result is evident:

Lemma 3.6 If Π[μ] ≤_pol^cp Π′[μ′] and there exists a polynomial time μ′-margin successful algorithm for Π′, then there exists a polynomial time μ-margin successful algorithm for Π.

To conclude our reduction of the Best Separating Hyper-plane problem to the Densest Open Ball problem we need yet another step.

Lemma 3.8 For μ > 0, let μ′ = 1 − √(1 − μ²) and μ″ = μ²/2. Then

DHem[μ] ≤_pol^cp DOB[μ′] ≤_pol^cp DOB[μ″].

The proof is a bit technical and is deferred to the full version of this paper. Applying Theorem 3.1 and the above reductions, we therefore get:

Theorem 3.9 For each (constant) μ > 0, there exists a μ-successful polynomial time algorithm A_μ for the Best Separating Hyper-plane problem. Clearly, the same result holds for the problems BSHH, DHem and BSHem as well.

Let us conclude by describing the learning algorithm for the BSH (or BSHH) problem that results from this analysis. We construct a family (A_k)_{k∈ℕ} of polynomial time algorithms. Given a labeled input sample S, the algorithm A_k exhaustively searches through all subsets of S of size ≤ k. For each such subset, it computes a hyper-plane that separates the positive from the negative points of the subset with maximum margin (if a separating hyper-plane exists). The algorithm then computes the number of points in S that each of these hyper-planes classifies correctly, and outputs the one that maximizes this number. In [2] we prove that our Densest Open Ball algorithm is μ-successful for μ = 1/√(k − 1) (when applied to all k-size subsamples). Applying Lemma 3.8, we may conclude for the problem BSH that, for every k, A_k is (4/(k − 1))^{1/4}-successful. In other words: in order to be μ-successful, we must apply the algorithm A_k for k = 1 + ⌈4/μ⁴⌉.
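To make the construction concrete, here is a small sketch of A_k for the homogeneous 2-dimensional case (BSHH). The exact maximum-margin subroutine is replaced by a dense grid search over the angle of the normal vector, which is only adequate in the plane; the data and all names are illustrative, not from the paper:

```python
import itertools
import numpy as np

def A_k(X, y, k=2, n_angles=720):
    """Sketch of A_k for BSHH in R^2: for every subset of at most k points,
    take the (approximately) max-margin homogeneous hyper-plane on that
    subset, then return the candidate classifying most of the full sample."""
    thetas = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    normals = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    best_w, best_profit = None, -1.0
    for r in range(1, k + 1):
        for idx in itertools.combinations(range(len(X)), r):
            sub = list(idx)
            margins = (normals @ X[sub].T) * y[sub]   # signed margins per angle
            worst = margins.min(axis=1)               # margin of each candidate
            if worst.max() <= 0:                      # subset not separable
                continue
            w = normals[worst.argmax()]               # max-margin candidate
            p = np.mean(np.sign(X @ w) == y)          # profit on the full sample
            if p > best_profit:
                best_w, best_profit = w, p
    return best_w, best_profit

X = np.array([[1, 0], [2, 1], [1, -1], [-1, 0], [-2, 1], [0.5, 0.2]], dtype=float)
y = np.array([1, 1, 1, -1, -1, -1])   # last point makes the sample non-separable
w, p = A_k(X, y, k=2)
print(p)   # best achievable profit here is 5/6
```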
4 NP-Hardness Results

We conclude this extended abstract by proving some NP-hardness results that complement rather tightly the positive results of the previous section. We shall base our hardness reductions on two known results.

Theorem 4.1 [Håstad, [4]] Assuming P≠NP, for any α < 1/22, there is no polynomial time α-approximation algorithm for MAX-E2-SAT.

Theorem 4.2 [Ben-David, Eiron and Long, [1]] Assuming P≠NP, for any α < 3/418, there is no polynomial time α-approximation algorithm for BSH.

Applying Claim 3.4 we readily get:

Corollary 4.3 Assuming P≠NP, for any α < 3/418, there is no polynomial time α-approximation algorithm for BSHH, BSHem, or DHem.

So far we discussed μ-relaxations only for a value of μ that was fixed regardless of the input dimension. All the above discussion extends naturally to the case of a dimension-dependent margin parameter. Let μ̄ denote a sequence (μ₁, ..., μₙ, ...). For a problem Π, its μ̄-relaxation refers to the problem obtained by considering the margin value μₙ for inputs of dimension n. A main tool for proving hardness is the notion of μ̄-legal input instances. An n-dimensional input sample S is called μ̄-legal if the maximal profit on S can be achieved by a hypothesis h* that satisfies profit(h*|S) = profit(h*|S, μₙ). Note that the μ̄-relaxation of a problem is NP-hard if the problem restricted to μ̄-legal input instances is NP-hard. Using a special type of reduction, which due to space constraints we cannot elaborate here, we can show that Theorem 4.1 implies the following:

Theorem 4.4 1. Assuming P≠NP, there is no polynomial time 1/198-approximation for BSH even when only 1/√(36n)-legal input instances are allowed. 2. Assuming P≠NP, there is no polynomial time 1/198-approximation for BSHH even when only 1/√(45(n + 1))-legal input instances are allowed.
Using the standard cost preserving reduction chain from BSHH via BSHem to DHem, and noting that these reductions are obviously margin-preserving, we get the following:

Corollary 4.5 Let S be one of the problems BSHH, BSHem, or DHem, and let μ̄ be given by μₙ = 1/√(45(n + 1)). Unless P=NP, there exists no polynomial time 1/198-approximation for S[μ̄]. In particular, the μ̄-relaxations of these problems are NP-hard.

Since the 1/√(45(n + 1))-relaxation of the Densest Hemisphere Problem is NP-hard, applying Lemma 3.8 we immediately get:

Corollary 4.6 The 1/(45(n + 1))-relaxation of the Densest Ball Problem is NP-hard.

Finally, note that Theorem 4.4 and Corollaries 4.5 and 4.6 rule out the existence of "strong schemes" (A_μ) with running time of A_μ being also polynomial in 1/μ.

References

[1] Shai Ben-David, Nadav Eiron, and Philip Long. On the difficulty of approximately maximizing agreements. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory (COLT 2000), pages 266-274.

[2] Shai Ben-David, Nadav Eiron, and Hans Ulrich Simon. The computational complexity of densest region detection. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory (COLT 2000), pages 255-265.

[3] Shai Ben-David, Nadav Eiron, and Hans Ulrich Simon. Non-embedability in Euclidean Half-Spaces. Technion TR, 2000.

[4] Johan Håstad. Some optimal inapproximability results. In Proceedings of the 29th Annual Symposium on Theory of Computing, pages 1-10, 1997.
2000
Gaussianization

Scott Shaobing Chen, Renaissance Technologies, East Setauket, NY 11733, schen@rentec.com
Ramesh A. Gopinath, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, rameshg@us.ibm.com

Abstract

High dimensional data modeling is difficult mainly because of the so-called "curse of dimensionality". We propose a technique called "Gaussianization" for high dimensional density estimation, which alleviates the curse of dimensionality by exploiting the independence structures in the data. Gaussianization is motivated by recent developments in the statistics literature: projection pursuit, independent component analysis and Gaussian mixture models with semi-tied covariances. We propose an iterative Gaussianization procedure which converges weakly: at each iteration, the data is first transformed to the least dependent coordinates and then each coordinate is marginally Gaussianized by univariate techniques. Gaussianization offers density estimation sharper than traditional kernel methods and radial basis function methods. Gaussianization can be viewed as an efficient solution of nonlinear independent component analysis and high dimensional projection pursuit.

1 Introduction

Density estimation is a fundamental problem in statistics. In the statistics literature, the univariate problem is well understood and well studied. Techniques such as (variable) kernel methods, radial basis function methods, Gaussian mixture models, etc., can be applied successfully to obtain univariate density estimates. However, the high dimensional problem is very challenging, mainly due to the so-called "curse of dimensionality". In high dimensional space, data samples are often sparsely distributed: it requires very large neighborhoods to achieve sufficient counts, or the number of samples has to grow exponentially with the dimension in order to achieve sufficient coverage of the sampling space.
As a result, direct extension of univariate techniques can be highly biased, because they are neighborhood-based. In this paper, we attempt to overcome the curse of dimensionality by exploiting independence structures in the data. We advocate the notion that independence lifts the curse of dimensionality! Indeed, if the dimensions are independent, then there is no curse of dimensionality, since the high dimensional problem can be reduced to univariate problems along each dimension. For natural data sets which do not have independent dimensions, we would like to construct transforms such that after the transformation, the dimensions become independent. We propose a technique called "Gaussianization" which finds and exploits independence structures in the data for high dimensional density estimation. For a random variable X ∈ ℝᴰ, we define its Gaussianization transform to be an invertible and differentiable transform T(X) such that the transformed variable follows the standard Gaussian distribution: T(X) ~ N(0, I). It is clear that density estimates can be derived from Gaussianization transforms. We propose an iterative procedure which converges weakly in probability: at each iteration, the data is first transformed to the least dependent coordinates and then each coordinate is marginally Gaussianized by univariate techniques which are based on univariate density estimation. At each iteration, the coordinates become less dependent in terms of the mutual information, and the transformed data samples become more Gaussian in terms of the Kullback-Leibler divergence. In fact, at each iteration, as long as the data is linearly transformed to less dependent coordinates, the convergence result still holds. Our convergence proof of Gaussianization is closely related to Huber's convergence proof of projection pursuit [4]. Algorithmically, each Gaussianization iteration amounts to performing linear independent component analysis.
Since the assumption of linear independent component analysis may not be valid, the resulting linear transform does not necessarily make the coordinates independent; however, it does make the coordinates as independent as possible. Therefore the engine of our algorithm is linear independent component analysis. We propose an efficient EM algorithm which jointly estimates the linear transform and the marginal univariate Gaussianization transform at each iteration. Our parametrization is identical to the independent factor analysis proposed by Attias (1999) [1]. However, we apply the variational method in the M-step, as in the semi-tied covariance algorithm proposed for Gaussian mixture models by Gales (1999) [3].

2 Existence of Gaussianization

We first show the existence of Gaussianization transforms. Denote by φ(·) the probability density function of the standard normal N(0, I); by φ(·, μ, Σ) the probability density function of N(μ, Σ); and by Φ(·) the cumulative distribution function (CDF) of the standard normal.

2.1 Univariate Gaussianization

Univariate Gaussianization exists uniquely and can be derived from univariate density estimation. Let X ∈ ℝ¹ be the univariate variable. We assume that the density function of X is strictly positive and differentiable. Let F(·) be the cumulative distribution function of X. Then T(·) is a Gaussianization transform if and only if it satisfies the following differential equation:

p(x) = φ(T(x)) |∂T/∂x|.

It can be easily verified that the above differential equation has only two solutions:

T(x) = ±Φ⁻¹(F(x)) ~ N(0, 1). (1)

In practice, the CDF F(·) is not available; it has to be estimated from the training data. We choose to approximate it by Gaussian mixture models: p(x) = Σ_{i=1}^I πᵢ φ(x, μᵢ, σᵢ²); equivalently, we assume the CDF F(x) = Σ_{i=1}^I πᵢ Φ((x − μᵢ)/σᵢ), where the parameters {πᵢ, μᵢ, σᵢ} can be estimated via maximum likelihood using the standard EM algorithm.
Therefore we can parameterize the Gaussianization transform as

T(x) = Φ⁻¹( Σ_{i=1}^I πᵢ Φ((x − μᵢ)/σᵢ) ). (2)

In practice there is an issue of model selection: we suggest using model selection techniques such as the Bayesian information criterion [6] to determine the number of Gaussians I. Throughout the paper, we shall assume that univariate density estimation and univariate Gaussianization can be solved by univariate Gaussian mixture models.

2.2 High Dimensional Gaussianization

However, the existence of high dimensional Gaussianization is non-trivial. We present here a theoretical construction. For simplicity, we consider the two dimensional case. Let X = (X₁, X₂)ᵀ be the random variable. Gaussianization can be achieved in two steps. We first marginally Gaussianize the first coordinate X₁ and leave the second coordinate X₂ unchanged; the transformed variable will have the density

p(x₁, x₂) = p(x₁) p(x₂|x₁) = φ(x₁) p(x₂|x₁).

We then marginally Gaussianize each conditional density p(·|x₁) for each x₁. Notice that the marginal Gaussianization is different for different x₁:

T_{x₁}(x₂) = Φ⁻¹(F(x₂|x₁)).

Once all the conditional densities are marginally Gaussianized, we achieve joint Gaussianization:

p(x₁, x₂) = p(x₁) p(x₂|x₁) = φ(x₁) φ(x₂).

The existence of high dimensional Gaussianization can be proved by a similar construction. The above construction, however, is not practical, since the marginal Gaussianization of the conditional densities p(X₂ = x₂ | X₁ = x₁) requires estimation of the conditional densities given all x₁, which is impossible with finite samples. In the following sections, we shall develop an iterative Gaussianization algorithm that is practical and also can be proved to converge weakly. High-dimensional Gaussianization is unique up to any invertible transforms which preserve the measure on ℝᴰ induced by the standard Gaussian distribution. Examples of such transforms are orthogonal linear transforms and certain nontrivial nonlinear transforms.
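A numerical sketch of the univariate transform (2). The mixture parameters are fixed by hand rather than fitted by EM, Φ is computed from the error function, and Φ⁻¹ is inverted by bisection; this is our illustrative implementation, not code from the paper:

```python
import math
import numpy as np

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF by bisection (accurate enough here)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Phi(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

def gaussianize(x, pis, mus, sigmas):
    """Eq. (2): T(x) = Phi^{-1}( sum_i pi_i * Phi((x - mu_i) / sigma_i) )."""
    F = sum(p * Phi((x - m) / s) for p, m, s in zip(pis, mus, sigmas))
    return Phi_inv(F)

# Draw from a known two-component mixture and check that T maps it to N(0, 1).
rng = np.random.default_rng(0)
pis, mus, sigmas = [0.3, 0.7], [-2.0, 3.0], [0.5, 1.0]
n = 8000
comp = rng.random(n) < pis[0]
x = np.where(comp, rng.normal(mus[0], sigmas[0], n), rng.normal(mus[1], sigmas[1], n))
t = np.array([gaussianize(v, pis, mus, sigmas) for v in x])
print(round(t.mean(), 2), round(t.std(), 2))   # close to 0.0 and 1.0
```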
3 Gaussianization with Linear ICA Assumption

Let (x₁, ..., x_N) be i.i.d. samples from the random variable X ∈ ℝᴰ. We assume that there exists a linear transform A of size D × D such that the transformed variable Y = (Y₁, ..., Y_D)ᵀ = AX has independent components: p(y₁, ..., y_D) = p(y₁) ··· p(y_D). In this case, Gaussianization is reduced to linear ICA: we can first find the linear transform A by linear independent component analysis, and then Gaussianize each individual dimension of Y by univariate Gaussianization. We parametrize the marginal Gaussianization by univariate Gaussian mixtures (2). This amounts to modeling the coordinates of the transformed variable by univariate Gaussian mixtures: p(y_d) = Σ_{i=1}^{I_d} π_{d,i} φ(y_d, μ_{d,i}, σ²_{d,i}). We would like to jointly optimize both the linear transform A and the marginal Gaussianization parameters (π, μ, σ) via maximum likelihood. In fact, this is the same parametrization as in Attias (1999) [1]. We point out that modeling the coordinates after the linear transform as non-Gaussian distributions, for which we assume univariate Gaussian mixtures are adequate, leads to ICA, while modeling them as single Gaussians leads to PCA. The joint estimation of the parameters can be computed via the EM algorithm. The auxiliary function which has to be maximized in the M-step has the following form:

Q(A, π, μ, σ) = N log|det(A)| + Σ_{n=1}^N Σ_{d=1}^D Σ_{i=1}^{I_d} w_{n,d,i} [ log π_{d,i} − (1/2) log 2πσ²_{d,i} − (y_{n,d} − μ_{d,i})² / (2σ²_{d,i}) ],

where (w_{n,d,i}) are the posterior counts computed in the E-step. It can be easily shown that the priors (π_{d,i}) can be easily updated and the means (μ_{d,i}) can be entirely determined by the linear transform A. However, updating the linear transform A and the variances (σ_{d,i}) does not have a closed form solution and has to be solved iteratively by numerical methods.
Attias (1999) [1] proposed to optimize Q via gradient descent: at each iteration, one fixes the linear transform and computes the Gaussian mixture parameters, then fixes the Gaussian mixture parameters and updates the linear transform via gradient descent using the so-called natural gradient. We propose an iterative algorithm, as in Gales (1999) [3], for the M-step which does not involve gradient descent and the nuisance and instability caused by the step size parameter. At each iteration, we fix the linear transform A and update the variances (σ_{d,i}); we then fix (σ_{d,i}) and update each row of A with all the other rows of A fixed: updating each row amounts to solving a system of linear equations. Our iterative scheme guarantees that the auxiliary function Q is increased at every iteration. Notice that each iteration in our M-step updates the rows of the linear matrix A by solving D systems of linear equations. Although our iterative scheme may be slightly more expensive per iteration than standard numerical optimization techniques such as Attias' algorithm, in practice it converges after very few iterations, as observed in Gales (1999) [3]. In contrast, the numerical optimization scheme may take an order of magnitude more iterations. In fact, in our experiments, our algorithm converges much faster than Attias's algorithm. Furthermore, our algorithm is stable since each iteration is guaranteed to increase the likelihood. The M-step in both Attias' algorithm and our algorithm can be implemented efficiently by storing and accessing the sufficient statistics. Typically in our M-steps, most of the improvement in the likelihood comes in the first few iterations. Therefore we can stop each M-step after, say, one iteration of updating the parameters; even though the auxiliary function is not optimized, it is guaranteed to improve. We therefore obtain the so-called generalized EM algorithm.
Attias (1999) [1] reported faster convergence of the generalized EM algorithm than the standard EM algorithm.

4 Iterative Gaussianization

In this section we develop an iterative algorithm which Gaussianizes arbitrary random variables. At each iteration, the data is first transformed to the least dependent coordinates and then each coordinate is marginally Gaussianized by univariate techniques which are based on univariate density estimation. We shall show that transforming the data into the least dependent coordinates can be achieved by linear independent component analysis. We also prove the weak convergence result. We define the negentropy¹ of a random variable X = (X₁, ..., X_D)ᵀ as the Kullback-Leibler divergence between X and the standard Gaussian distribution. We define the marginal negentropy to be J_M(X) = Σ_{d=1}^D J(X_d). One can show that the negentropy can be decomposed as the sum of the marginal negentropy and the mutual information:

J(X) = J_M(X) + I(X).

Gaussianization is equivalent to finding an invertible transform T(·) such that the negentropy of the transformed variable vanishes: J(T(X)) = 0. For an arbitrary random variable X ∈ ℝᴰ, we propose the following iterative Gaussianization algorithm. Let X⁽⁰⁾ = X. At each iteration,

(A) Linearly transform the data: Y⁽ᵏ⁾ = A X⁽ᵏ⁾.

(B) Nonlinearly transform the data by marginal Gaussianization: X⁽ᵏ⁺¹⁾ = Ψ_{π,μ,σ}(Y⁽ᵏ⁾), where the marginal Gaussianization Ψ_{π,μ,σ}(·), which approximates the ideal marginal Gaussianization Ψ(·), can be derived from univariate Gaussian mixtures (2). The parameters are chosen by minimizing the negentropy of the transformed variable X⁽ᵏ⁺¹⁾:

(Â, π̂, μ̂, σ̂) = argmin_{A,π,μ,σ} J(Ψ_{π,μ,σ}(A X⁽ᵏ⁾)). (3)

¹ We are abusing the terminology slightly: normally the negentropy of a random variable is defined to be the Kullback-Leibler distance between itself and the Gaussian variable with the same mean and covariance.

Thus, after each iteration, the transformed variable becomes as close as possible to the standard Gaussian in the Kullback-Leibler distance. First, the problem of minimizing the negentropy (3) is equivalent to the maximum likelihood problem for Gaussianization with the linear ICA assumption in section 3, and thus can be solved by the same efficient EM algorithm. Second, since the data X⁽ᵏ⁾ might not satisfy the linear ICA assumption, the optimal linear transform might not transform X⁽ᵏ⁾ into independent coordinates. However, it does transform X⁽ᵏ⁾ into the least dependent coordinates, since

J(X⁽ᵏ⁺¹⁾) = J_M(Ψ(AX⁽ᵏ⁾)) + I(Ψ(AX⁽ᵏ⁾)) = I(AX⁽ᵏ⁾).

Furthermore, if the linear transform A is constrained to be orthogonal, then finding the least dependent coordinates is equivalent to finding the marginally most non-Gaussian coordinates, since J(X⁽ᵏ⁾) = J(AX⁽ᵏ⁾) = J_M(AX⁽ᵏ⁾) + I(AX⁽ᵏ⁾) (notice that the negentropy is invariant under orthogonal transforms). Therefore our iterative algorithm can be viewed as follows. At each iteration, the data is linearly transformed to the least dependent coordinates and then each coordinate is marginally Gaussianized. In practice, after the first iteration, the algorithm finds linear transforms which are almost orthogonal. Therefore, practically, one can also view each iteration as linearly transforming the data to the most marginally non-Gaussian coordinates and then marginally Gaussianizing each coordinate. For the sake of simplicity, we assume that we can achieve perfect marginal Gaussianization Ψ(·) by Ψ_{π,μ,σ}(·), which is derived from univariate Gaussian mixtures. In fact, when the number of Gaussians goes to infinity and the number of samples goes to infinity, one can show that lim Ψ_{π,μ,σ} = Ψ. Thus it suffices to analyze the ideal iterative Gaussianization

X⁽ᵏ⁺¹⁾ = Ψ(Â X⁽ᵏ⁾), where Â = argmin_A J(Ψ(AX⁽ᵏ⁾)) = argmin_A I(AX⁽ᵏ⁾).
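A rough, self-contained sketch of the iteration on synthetic data. Two substitutions are ours and not the paper's: the ICA/EM step is replaced by PCA (as noted above, single-Gaussian coordinate models reduce ICA to PCA), and the parametric marginal Gaussianization (2) is replaced by its empirical-rank counterpart Φ⁻¹((rank + 0.5)/N):

```python
import math
import numpy as np

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    for _ in range(80):                       # bisection; fine for a sketch
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Phi(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

def marginal_gaussianize(Y):
    """Map each column of Y to normal scores via its empirical ranks."""
    N = Y.shape[0]
    scores = np.array([Phi_inv((i + 0.5) / N) for i in range(N)])
    out = np.empty_like(Y)
    for d in range(Y.shape[1]):
        out[:, d] = scores[Y[:, d].argsort().argsort()]
    return out

def iterative_gaussianize(X, n_iter=3):
    """Step (A): rotate to decorrelated axes (PCA stand-in for ICA);
    step (B): marginally Gaussianize each coordinate."""
    for _ in range(n_iter):
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        X = marginal_gaussianize(Xc @ Vt.T)
    return X

rng = np.random.default_rng(1)
z = rng.normal(size=(2000, 2))
X0 = np.column_stack([z[:, 0] ** 3, z[:, 0] ** 3 + 0.5 * z[:, 1]])  # dependent, heavy-tailed
X = iterative_gaussianize(X0)
print(np.round(X.mean(axis=0), 2), np.round(X.std(axis=0), 2))  # marginals near N(0, 1)
```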
Following Huber's argument [4], we can show that X⁽ᵏ⁾ → N(0, I) in the sense of weak convergence, i.e. the density function of X⁽ᵏ⁾ converges pointwise to the density function of the standard normal.

[Figure 1: Iterative Gaussianization on a synthetic circular data set. Panels show the original data, the transformed data at iterations 1 through 8, and a comparison of the Gaussianization density estimate with the Gaussian mixture density estimate.]

We point out that our iterative algorithm can be relaxed as follows. At each iteration, the data can be linearly transformed into coordinates which are less dependent, instead of into coordinates which are the least dependent:

I(X⁽ᵏ⁾) − I(A_k X⁽ᵏ⁾) ≥ ε [ I(X⁽ᵏ⁾) − inf_A I(AX⁽ᵏ⁾) ],

where the constant ε > 0. We can show that this relaxed algorithm still converges weakly.

5 Examples

We demonstrate the process of our iterative Gaussianization algorithm on a very difficult two dimensional synthetic data set. The true underlying variable is circularly distributed: in the polar coordinate system, the angle is uniformly distributed; the radius follows a mixture of four non-overlapping Gaussians. We drew 1000 i.i.d. samples from this distribution. We ran 8 iterations to Gaussianize the data set. Figure 1 displays the transformed data set at each iteration. Clearly we see the transformed data gradually become standard Gaussian. Let X⁽⁰⁾ = X; assume that the iterative Gaussianization procedure converges after K iterations, i.e. X⁽ᴷ⁾ ~ N(0, I). Since the transforms at each iteration are invertible, we can then compute the Jacobian and obtain a density estimate for X.
The Jacobian can be computed rapidly via the chain rule. Figure 1 compares the Gaussianization density estimate (8 iterations) and the Gaussian mixture density estimate (40 Gaussians). Clearly we see that the Gaussianization density estimate recovers the four circular structures, whereas the Gaussian mixture estimate lacks resolution.

6 Discussion

Gaussianization is closely connected with the exploratory projection pursuit algorithm proposed by Friedman (1987) [2]. In fact, we argue that our iterative Gaussianization procedure can easily be constrained to yield an efficient parametric solution of high dimensional projection pursuit. Assume that we are interested in l-dimensional projections, where 1 ≤ l ≤ D. If we constrain the linear transform at each iteration to be orthogonal, and only the first l coordinates of the transformed variable are marginally Gaussianized, then the iterative Gaussianization algorithm achieves l-dimensional projection pursuit. The bottleneck of Friedman's high dimensional projection pursuit is to find the jointly most non-Gaussian projection and to jointly Gaussianize that projection. In contrast, our algorithm finds the most marginally non-Gaussian projection and marginally Gaussianizes that projection; it can be computed by an efficient EM algorithm. We argue that Gaussianization density estimation indeed alleviates the problem of the curse of dimensionality. At each iteration, the effect of the curse of dimensionality bears solely on finding a linear transform such that the transformed coordinates are less dependent, which is a relatively much easier problem than the original problem of high dimensional density estimation itself; after the linear transform, the marginal Gaussianization can be derived from univariate density estimation, which has nothing to do with the curse of dimensionality.
Hwang et al. (1994) [5] performed an extensive comparative study of three popular density estimates: one-dimensional projection pursuit density estimates (a special case of our iterative Gaussianization algorithm), adaptive kernel density estimates, and radial basis function density estimates; they concluded that projection pursuit density estimates outperform the others on most data sets. We are currently experimenting with applications of Gaussianization density estimation in automatic speech and speaker recognition.

References

[1] H. Attias, "Independent factor analysis", Neural Computation, vol. 11, pp. 803-851, 1999.

[2] J.H. Friedman, "Exploratory projection pursuit", J. American Statistical Association, vol. 82, pp. 249-266, 1987.

[3] M.J.F. Gales, "Semi-tied covariance matrices for hidden Markov models", IEEE Transactions on Speech and Audio Processing, vol. 7, pp. 272-281, 1999.

[4] P.J. Huber, "Projection pursuit", Annals of Statistics, vol. 13, pp. 435-525, 1985.

[5] J. Hwang, S. Lay and A. Lippman, "Nonparametric multivariate density estimation: a comparative study", IEEE Transactions on Signal Processing, vol. 42, pp. 2795-2810, 1994.

[6] G. Schwarz, "Estimating the dimension of a model", Annals of Statistics, vol. 6, pp. 461-464, 1978.
2000
Error-correcting Codes on a Bethe-like Lattice

Renato Vicente, David Saad, The Neural Computing Research Group, Aston University, Birmingham B4 7ET, United Kingdom, {vicenter,saadd}@aston.ac.uk
Yoshiyuki Kabashima, Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama 2268502, Japan, kaba@dis.titech.ac.jp

Abstract

We analyze Gallager codes by employing a simple mean-field approximation that distorts the model geometry and preserves important interactions between sites. The method naturally recovers the probability propagation decoding algorithm as an extremization of a proper free-energy. We find a thermodynamic phase transition that coincides with information theoretic upper bounds and explain the practical code performance in terms of the free-energy landscape.

1 Introduction

In recent years increasing interest has been devoted to the application of mean-field techniques to inference problems. There are many different ways of building mean-field theories. One can make a perturbative expansion around a tractable model [1, 2], or assume a tractable structure and variationally determine the model parameters [3]. Error-correcting codes (ECCs) are particularly interesting examples of inference problems on loopy intractable graphs [4]. Recently the focus has been directed to the state-of-the-art high performance turbo codes [5] and to Gallager and MN codes [6, 7]. Statistical physics has been applied to the analysis of ECCs as an alternative to information theory methods, yielding some new interesting directions and suggesting new high-performance codes [8]. Sourlas was the first to relate error-correcting codes to spin glass models [9], showing that the Random Energy Model [10] can be thought of as an ideal code capable of saturating Shannon's bound at vanishing code rates. This work was extended recently to the case of finite code rates [11] and has been further developed for analyzing MN codes of various structures [12].
All of the analyses mentioned above, as well as the recent turbo code analysis [13], relied on the replica approach under the assumption of replica symmetry. To date, the only model that can be analyzed exactly is the REM, which corresponds to an impractical coding scheme of vanishing code rate. Here we present a statistical physics treatment of non-structured Gallager codes by employing a mean-field approximation based on the use of a generalized tree structure (a Bethe lattice [14]) known as a Husimi cactus, which is exactly solvable. The model parameters are simply assumed to be those of the model with cycles. In this framework the probability propagation decoding algorithm (PP) emerges naturally, providing an alternative view of the relationship between PP decoding and mean-field approximations already observed in [15]. Moreover, this approach has the advantage of being slightly more controlled and easier to understand than replica calculations. This paper is organized as follows: in the next section we present unstructured Gallager codes and the statistical physics framework used to analyze them; in section 3 we make use of the lattice geometry to solve the model exactly; in section 4 we analyze the typical code performance; we summarize the results in section 5.

2 Gallager codes: statistical physics formulation

We concentrate here on a simple communication model whereby messages are represented by binary vectors and are communicated through a Binary Symmetric Channel (BSC) in which uncorrelated bit flips occur with probability $f$. A Gallager code is defined by a binary matrix $A = [C_1 \mid C_2]$, concatenating two very sparse matrices known to both sender and receiver, with $C_2$ (of dimensionality $(M-N)\times(M-N)$) being invertible; the matrix $C_1$ is of dimensionality $(M-N)\times N$.
Encoding refers to the production of an $M$-dimensional binary code word $t \in \{0,1\}^M$ ($M > N$) from the original message $\xi \in \{0,1\}^N$ by $t = G^T \xi \pmod 2$, where all operations are performed in the field $\{0,1\}$ and are indicated by $(\bmod\ 2)$. The generator matrix is $G = [I \mid C_2^{-1} C_1] \pmod 2$, where $I$ is the $N \times N$ identity matrix, implying that $A G^T = 0 \pmod 2$ and that the first $N$ bits of $t$ are set to the message $\xi$. In regular Gallager codes the number of non-zero elements in each row of $A$ is chosen to be exactly $K$. The number of elements per column is then $C = (1-R)K$, where the code rate is $R = N/M$ (for unbiased messages). The encoded vector $t$ is then corrupted by noise represented by the vector $\zeta \in \{0,1\}^M$ with components independently drawn from $P(\zeta) = (1-f)\,\delta(\zeta) + f\,\delta(\zeta-1)$. The received vector takes the form $r = G^T \xi + \zeta \pmod 2$. Decoding is carried out by multiplying the received message by the matrix $A$ to produce the syndrome vector $z = A r = A \zeta \pmod 2$, from which an estimate $\hat\tau$ for the noise vector can be produced. An estimate for the original message is then obtained as the first $N$ bits of $r + \hat\tau \pmod 2$. The Bayes-optimal estimator (also known as the marginal posterior maximizer, MPM) for the noise is defined as $\hat\tau_j = \operatorname{argmax}_{\tau_j} P(\tau_j \mid z)$, where $\tau_j \in \{0,1\}$. The performance of this estimator can be measured by the probability of bit error $P_b = 1 - \frac{1}{M}\sum_{j=1}^{M} \delta[\hat\tau_j; \zeta_j]$, where $\delta[\,\cdot\,;\,\cdot\,]$ is the Kronecker delta. Knowing the matrices $C_2$ and $C_1$, the syndrome vector $z$ and the noise level $f$, it is possible to apply Bayes' theorem and compute the posterior probability
$$P(\tau \mid z) = \frac{1}{Z}\,\chi\big[z = A\tau\ (\bmod\ 2)\big]\,P(\tau), \qquad (1)$$
where $\chi[X]$ is an indicator function providing 1 if $X$ is true and 0 otherwise. To compute the MPM one has to compute the marginal posterior $P(\tau_j \mid z) = \sum_{\{\tau_i:\,i \neq j\}} P(\tau \mid z)$, which in general requires $O(2^M)$ operations and thus becomes impractical for long messages. To solve this problem one can use the sparseness of $A$ to design algorithms that require $O(M)$ operations to perform the same task.
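The encoding and syndrome relations above can be checked on a toy instance. The matrices below are hypothetical illustrations (they do not come from the paper, and no attempt is made to satisfy the regular row/column weights); $C_2$ is chosen upper triangular so that it is trivially invertible over GF(2), and its inverse is hard-coded:

```python
import numpy as np

# Hypothetical toy instance with N = 3, M = 6 (so M - N = 3).
C1 = np.array([[1, 0, 1],
               [1, 1, 0],
               [0, 1, 1]])
C2 = np.array([[1, 1, 0],
               [0, 1, 1],
               [0, 0, 1]])
C2_inv = np.array([[1, 1, 1],          # inverse of C2 over GF(2), found by hand
                   [0, 1, 1],
                   [0, 0, 1]])
assert np.array_equal(C2 @ C2_inv % 2, np.eye(3, dtype=int))

A = np.hstack([C1, C2])                # parity-check matrix A = [C1 | C2]
B = C2_inv @ C1 % 2                    # B = C2^{-1} C1
G = np.hstack([np.eye(3, dtype=int), B.T])  # generator: G = [I | (C2^{-1} C1)^T]
assert not (A @ G.T % 2).any()         # A G^T = 0 (mod 2)

xi = np.array([1, 0, 1])               # message
t = G.T @ xi % 2                       # codeword; first N bits equal the message
assert np.array_equal(t[:3], xi)

zeta = np.array([0, 0, 0, 1, 0, 0])    # one bit flipped by the BSC
r = (t + zeta) % 2                     # received vector
z = A @ r % 2                          # syndrome z = A r = A zeta (mod 2)
assert np.array_equal(z, A @ zeta % 2) # the syndrome depends on the noise only
```

The last assertion is the key property exploited by the decoder: the syndrome carries information about the noise $\zeta$ alone, not about the transmitted message.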
One of these methods is the probability propagation algorithm (PP), also known as belief propagation or the sum-product algorithm [16]. The connection to statistical physics becomes clear when the field $\{0,1\}$ is replaced by Ising spins $\{\pm 1\}$ and mod-2 sums by products [9]. The syndrome vector acquires the form of a multi-spin coupling $\mathcal{J}_\mu = \prod_{j \in \mathcal{L}(\mu)} \zeta_j$, where $j = 1,\dots,M$ and $\mu = 1,\dots,(M-N)$.

Figure 1: Husimi cactus with $K = 3$ and connectivity $C = 4$.

The $K$ indices of non-zero elements in row $\mu$ of a matrix $A$, which is not necessarily a concatenation of two separate matrices (therefore defining an unstructured Gallager code), are given by $\mathcal{L}(\mu) = \{j_1,\dots,j_K\}$, and in a column $l$ they are given by $\mathcal{M}(l) = \{\mu_1,\dots,\mu_C\}$. The posterior (1) can be written as the Gibbs distribution [12]:
$$P_\beta(\tau \mid \mathcal{J}) = \frac{1}{Z} \lim_{\gamma \to \infty} \exp\left[-\beta\,\mathcal{H}_\gamma(\tau;\mathcal{J})\right], \qquad (2)$$
$$\mathcal{H}_\gamma(\tau;\mathcal{J}) = -\gamma \sum_{\mu=1}^{M-N} \Big(\mathcal{J}_\mu \prod_{j \in \mathcal{L}(\mu)} \tau_j - 1\Big) - F \sum_{j=1}^{M} \tau_j .$$
The external field corresponds to the prior probability over the noise and has the form $F = \operatorname{atanh}(1-2f)$. Note that the Hamiltonian depends on a hyper-parameter $\gamma$ that has to be set as $\gamma \to \infty$ for optimal decoding. The disorder is trivial and can be gauged as $\mathcal{J}_\mu \mapsto 1$ by using $\tau_j \mapsto \tau_j \zeta_j$. The resulting Hamiltonian is a multi-spin ferromagnet with finite connectivity in a random field $h_j = F\zeta_j$. The decoding process corresponds to finding the local magnetizations at temperature $\beta = 1$, $m_j = \langle \tau_j \rangle_{\beta=1}$, and calculating estimates as $\hat\tau_j = \operatorname{sgn}(m_j)$. In the $\{\pm 1\}$ representation the probability of bit error acquires the form
$$P_b = \frac{1}{2} - \frac{1}{2M} \sum_{j=1}^{M} \zeta_j \operatorname{sgn}(m_j), \qquad (3)$$
connecting the code performance with the computation of local magnetizations.

3 Bethe-like Lattice calculation

3.1 Generalized Bethe lattice: the Husimi cactus

A Husimi cactus with connectivity $C$ is generated starting with a polygon of $K$ vertices with one Ising spin in each vertex (generation 0). All spins in a polygon interact through a single coupling $\mathcal{J}_\mu$ and one of them is called the base spin.
In figure 1 we show the first step in the construction of a Husimi cactus: in a generic step, the base spins of the $n-1$ generation polygons, numbering $(C-1)(K-1)$, are attached to the $K-1$ vertices of a generation-$n$ polygon. This process is iterated until a maximum generation $n_{\max}$ is reached; the graph is then completed by attaching $C$ uncorrelated branches of $n_{\max}$ generations at their base spins. In that way each spin inside the graph is connected to exactly $C$ polygons. The local magnetization at the centre, $m_j$, can be obtained by fixing boundary (initial) conditions in the 0-th generation and iterating recursion equations until generation $n_{\max}$ is reached. Carrying out the calculation in the thermodynamic limit corresponds to having $n_{\max} \sim \ln M$ generations and $M \to \infty$. The Hamiltonian of the model has the form (2), where $\mathcal{L}(\mu)$ denotes the polygon $\mu$ of the lattice. Due to the tree-like structure, local quantities far from the boundary can be calculated recursively by specifying boundary conditions. The typical decoding performance can therefore be computed exactly without resorting to replica calculations [17].

3.2 Recursion relations: probability propagation

We adopt the approach presented in [18], where the recursion relation for the probability distribution $P_{\mu k}(\tau_k)$ of the base spin of the polygon $\mu$ is connected to the $(C-1)(K-1)$ distributions $P_{\nu j}(\tau_j)$, with $\nu \in \mathcal{M}(j)\setminus\mu$ (all polygons linked to $j$ but $\mu$), of polygons in the previous generation:
$$P_{\mu k}(\tau_k) = \frac{1}{\mathcal{N}}\, \operatorname{Tr}_{\{\tau_j\}} \exp\left[\beta\Big(\mathcal{J}_\mu \tau_k \prod_{j \in \mathcal{L}(\mu)\setminus k} \tau_j - 1\Big) + F\tau_k\right] \prod_{j \in \mathcal{L}(\mu)\setminus k}\ \prod_{\nu \in \mathcal{M}(j)\setminus\mu} P_{\nu j}(\tau_j), \qquad (4)$$
where the trace is over the spins $\tau_j$ such that $j \in \mathcal{L}(\mu)\setminus k$. The effective field $x_{\nu j}$ on a base spin $j$ due to neighbors in polygon $\nu$ can be written as
$$\exp(-2x_{\nu j}) = e^{2F}\,\frac{P_{\nu j}(-)}{P_{\nu j}(+)}. \qquad (5)$$
Combining (4) and (5) one finds the recursion relation
$$\exp(-2x_{\mu k}) = \frac{\operatorname{Tr}_{\{\tau\}} \exp\Big[-\beta \mathcal{J}_\mu \prod_{j\in\mathcal{L}(\mu)\setminus k}\tau_j + \sum_{j\in\mathcal{L}(\mu)\setminus k}\big(F + \sum_{\nu\in\mathcal{M}(j)\setminus\mu} x_{\nu j}\big)\tau_j\Big]}{\operatorname{Tr}_{\{\tau\}} \exp\Big[+\beta \mathcal{J}_\mu \prod_{j\in\mathcal{L}(\mu)\setminus k}\tau_j + \sum_{j\in\mathcal{L}(\mu)\setminus k}\big(F + \sum_{\nu\in\mathcal{M}(j)\setminus\mu} x_{\nu j}\big)\tau_j\Big]}. \qquad (6)$$
By computing the traces and taking $\beta \to \infty$ one obtains
$$x_{\mu k} = \operatorname{atanh}\Big[\mathcal{J}_\mu \prod_{j\in\mathcal{L}(\mu)\setminus k} \tanh\Big(F + \sum_{\nu\in\mathcal{M}(j)\setminus\mu} x_{\nu j}\Big)\Big]. \qquad (7)$$
The effective local magnetization due to interactions with the nearest neighbors in one branch is given by $\hat m_{\mu j} = \tanh(x_{\mu j})$. The effective local field on a base spin $j$ of a polygon $\mu$ due to the $C-1$ branches in the previous generation and to the external field is $\hat x_{\mu j} = F + \sum_{\nu\in\mathcal{M}(j)\setminus\mu} x_{\nu j}$; the corresponding effective local magnetization is $m_{\mu j} = \tanh(\hat x_{\mu j})$. Equation (7) can then be rewritten in terms of $m_{\mu j}$ and $\hat m_{\mu j}$, and the PP equations [7,15,16] can be recovered:
$$m_{\mu k} = \tanh\Big(F + \sum_{\nu\in\mathcal{M}(k)\setminus\mu} \operatorname{atanh}(\hat m_{\nu k})\Big), \qquad \hat m_{\mu k} = \mathcal{J}_\mu \prod_{j\in\mathcal{L}(\mu)\setminus k} m_{\mu j}. \qquad (8)$$
Once the magnetizations on the boundary (0-th generation) are assigned, the local magnetization $m_j$ at the central site is determined by iterating (8) and computing
$$m_j = \tanh\Big(F + \sum_{\nu\in\mathcal{M}(j)} \operatorname{atanh}(\hat m_{\nu j})\Big). \qquad (9)$$

3.3 Probability propagation as extremization of a free-energy

The equations (8) describing PP decoding represent extrema of the following free-energy:
$$\mathcal{F}(\{m_{\mu k}, \hat m_{\mu k}\}) = \sum_{\mu=1}^{M-N} \sum_{i\in\mathcal{L}(\mu)} \ln(1 + m_{\mu i}\hat m_{\mu i}) - \sum_{\mu=1}^{M-N} \ln\Big(1 + \mathcal{J}_\mu \prod_{i\in\mathcal{L}(\mu)} m_{\mu i}\Big)$$
$$\qquad\qquad - \sum_{j=1}^{M} \ln\Big[e^{F} \prod_{\mu\in\mathcal{M}(j)} (1 + \hat m_{\mu j}) + e^{-F} \prod_{\mu\in\mathcal{M}(j)} (1 - \hat m_{\mu j})\Big]. \qquad (10)$$

Figure 2: (a) Mean normalized overlap between the actual noise vector $\zeta$ and the decoded noise $\hat\tau$ for $K = 4$ and $C = 3$ (therefore $R = 1/4$). Theoretical values (squares), experimental averages over 20 runs for code word lengths $M = 5000$ (filled circles) and $M = 100$ (full line). (b) Transitions for $K = 6$: Shannon's bound (dashed line), the information-theoretic upper bound (full line) and the thermodynamic transition obtained numerically (circles). Theoretical and experimental (+, $M = 5000$, averaged over 20 runs) PP decoding transitions are also shown. In both figures, symbols are chosen larger than the error bars.

The iteration of the maps (8) is just one out of many different methods of finding extrema (not necessarily stable ones) of this free-energy. This observation opens an alternative way of analyzing the performance of a decoding algorithm: studying the landscape (10).

4 Typical performance

4.1 Macroscopic description

The typical macroscopic states of the system during decoding can be described by histograms of the variables $m_{\mu k}$ and $\hat m_{\mu k}$ averaged over all possible realizations of the noise vector $\zeta$. By applying the gauge transformation $\mathcal{J}_\mu \mapsto 1$ and $\tau_j \mapsto \tau_j \zeta_j$, assigning the probability distribution $P_0(x)$ to the boundary fields and averaging over the random local fields $F\zeta$, one obtains from (7) a recursion relation in the space of probability distributions $P(x)$:
$$P_{n+1}(x) = \left\langle \int \prod_{j=1}^{K-1} \prod_{\nu=1}^{C-1} dx_{\nu j}\, P_n(x_{\nu j})\; \delta\!\left[x - \operatorname{atanh}\Big(\prod_{j=1}^{K-1} \tanh\big(F\zeta_j + \sum_{\nu=1}^{C-1} x_{\nu j}\big)\Big)\right] \right\rangle_{\zeta}, \qquad (11)$$
where $P_n(x)$ is the distribution of effective fields at the $n$-th generation due to the previous generations and external fields; in the thermodynamic limit the distribution far from the boundary will be $P_\infty(x)$ (generation $n \to \infty$). The local field distribution at the central site is computed by replacing $C-1$ by $C$ in (11), taking into account the $C$ polygons in the generation just before the central site, and inserting the distribution $P_\infty(x)$. Equation (11) is identical to the one obtained from the replica symmetric theory as in [12]. By setting initial (boundary) conditions $P_0(x)$ and numerically iterating (11), for $C \geq 3$ one can find, up to some noise level $f_s$, a single stable fixed point at infinite fields, corresponding to a totally aligned state (successful decoding). At $f_s$ a bifurcation occurs and two other fixed points appear, one stable and one unstable, the former corresponding to a misaligned state (decoding failure).
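The numerical iteration of the field-distribution recursion (11) can be sketched by representing $P(x)$ with a large population of sampled fields (a population-dynamics scheme). The code below is an illustrative sketch, not the authors' implementation: the couplings are gauged to 1, the code parameters $K=4$, $C=3$, and the population size, iteration count, and clipping constant are arbitrary choices; it only checks the qualitative picture of an aligned fixed point at low noise and its absence near $f = 0.5$.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(f, K=4, C=3, pop=20000, iters=50):
    """Monte Carlo iteration of the recursion on field distributions:
    each new field is built from (K-1)(C-1) parent fields and K-1
    random noise signs, with prior field strength F = atanh(1 - 2f)."""
    F = np.arctanh(1 - 2 * f)
    x = np.zeros(pop)                       # boundary condition: all fields zero
    for _ in range(iters):
        idx = rng.integers(pop, size=(pop, K - 1, C - 1))   # resample parents
        zeta = np.where(rng.random((pop, K - 1)) < f, -1.0, 1.0)
        h = F * zeta + x[idx].sum(axis=2)   # gauged local fields F*zeta + sum x
        prod = np.prod(np.tanh(h), axis=1)
        x = np.arctanh(np.clip(prod, -1 + 1e-12, 1 - 1e-12))
    # local magnetization at a central site, fed by C branches
    idx = rng.integers(pop, size=(pop, C))
    zeta = np.where(rng.random(pop) < f, -1.0, 1.0)
    return np.tanh(F * zeta + x[idx].sum(axis=1)).mean()

# well below the transition the fields diverge (successful decoding, m -> 1);
# near f = 0.5 they stay small (failure)
assert evolve(0.05) > 0.95
assert evolve(0.45) < 0.5
```

The mean magnetization returned here plays the role of the overlap plotted in Figure 2a; scanning `f` and locating where the aligned fixed point disappears gives a numerical estimate of the spinodal noise level.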
This situation is identical to that observed in [12]. In terms of the free-energy (10), below $f_s$ the landscape is dominated by the aligned state, which is the global minimum. Above $f_s$ a sub-optimal state, corresponding to an exponentially large number of spurious local minima of the Hamiltonian (2), appears, and convergence to the totally aligned state becomes a difficult task. At some critical noise level the totally aligned state loses the status of global minimum and the thermodynamic transition occurs. Practical PP decoding is performed by setting the initial conditions to $m_{\mu j} = 1 - 2f$, corresponding to the prior probabilities, and iterating (8) until stationarity or a maximum number of iterations is attained. The estimate for the noise vector is then produced by computing $\hat\tau_j = \operatorname{sgn}(m_j)$. At each decoding step the system can be described by histograms of the variables (8); this is equivalent to iterating (11) (a similar idea was presented in [7]). Below $f_s$ the process always converges to the successful decoding state; above $f_s$ it converges to successful decoding only if the initial conditions are finely tuned, and in general it converges to the failure state. In Fig. 2a we show the theoretical mean overlap between the actual noise $\zeta$ and the estimate $\hat\tau$ as a function of the noise level $f$, as well as results obtained with PP decoding. Information theory provides an upper bound for the maximum attainable code rate by equating the maximal information content of the syndrome vector $z$ with that of the noise estimate $\hat\tau$ [7]. The thermodynamic phase transition, obtained by finding the stable fixed points of (11) and their free-energies, interestingly coincides with this upper bound within the precision of the numerical calculation. Note that the performance predicted by thermodynamics is not practical, as it requires $O(2^M)$ operations for an exhaustive search for the global minimum of the free-energy.
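The practical decoding loop just described, iterating equations (8) from $m_{\mu j} = 1 - 2f$ and reading off $\hat\tau_j = \operatorname{sgn}(m_j)$ via (9), can be sketched directly. This is an illustrative from-scratch implementation: the parity-check matrix below is a small random sparse matrix (not a carefully constructed regular Gallager ensemble member), and the flooding schedule and clipping constants are implementation choices.

```python
import numpy as np

rng = np.random.default_rng(7)

def pp_decode(A, z, f, iters=30):
    """Probability propagation on the Tanner graph of A: couplings
    J_mu = 1 - 2 z_mu, prior field F = atanh(1 - 2f), messages per edge."""
    F = np.arctanh(1 - 2 * f)
    J = 1 - 2 * z.astype(float)
    L = [np.nonzero(row)[0] for row in A]                    # bits in check mu
    Mc = [np.nonzero(A[:, j])[0] for j in range(A.shape[1])] # checks on bit j
    clip = lambda v: np.clip(v, -0.999999, 0.999999)
    m = {(mu, k): np.tanh(F) for mu in range(len(L)) for k in L[mu]}
    mh = dict.fromkeys(m, 0.0)
    for _ in range(iters):
        for mu, ks in enumerate(L):          # mhat_{mu k} = J_mu prod m_{mu j}
            for k in ks:
                mh[(mu, k)] = J[mu] * np.prod([m[(mu, j)] for j in ks if j != k])
        for mu, ks in enumerate(L):          # m_{mu k} = tanh(F + sum atanh mhat)
            for k in ks:
                s = sum(np.arctanh(clip(mh[(nu, k)])) for nu in Mc[k] if nu != mu)
                m[(mu, k)] = np.tanh(F + s)
    mag = [np.tanh(F + sum(np.arctanh(clip(mh[(nu, j)])) for nu in Mc[j]))
           for j in range(A.shape[1])]       # posterior magnetizations, eq. (9)
    return (np.array(mag) < 0).astype(int)   # tau_j = (1 - sgn m_j) / 2

# a small random sparse parity-check matrix, row weight 4, for illustration
M_bits, M_checks, K = 60, 45, 4
A = np.zeros((M_checks, M_bits), dtype=int)
for mu in range(M_checks):
    A[mu, rng.choice(M_bits, size=K, replace=False)] = 1

zeta = np.zeros(M_bits, dtype=int)
zeta[[5, 23]] = 1                            # two channel flips
z = A @ zeta % 2                             # syndrome
assert (pp_decode(A, z, f=0.1) == zeta).mean() > 0.9   # most noise bits recovered
assert not pp_decode(A, np.zeros(M_checks, dtype=int), f=0.1).any()
```

The second assertion checks the easy case analytically guaranteed by the update rules: with a zero syndrome all couplings are $+1$, every message stays positive, and the estimated noise is identically zero.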
In Fig. 2b we show the thermodynamic transition for $K = 6$ and compare it with the upper bound, Shannon's bound and the theoretical $f_s$ values.

4.2 Tree-like approximation and the thermodynamic limit

The geometrical structure of a Gallager code defined by the matrix $A$ can be represented by a bipartite graph (Tanner graph) [16] with bit and check nodes. Each column $j$ of $A$ represents a bit node and each row $\mu$ represents a check node; $A_{\mu j} = 1$ means that there is an edge linking bit $j$ to check $\mu$. It is possible to show that for a random ensemble of regular codes, the probability of completing a cycle after walking $l$ edges starting from an arbitrary node is upper bounded by $P[l; K, C, M] \leq l^2 K^l / M$. This implies that for very large $M$ only cycles of at least order $\ln M$ survive. In the thermodynamic limit $M \to \infty$ the probability $P[l; K, C, M] \to 0$ for any finite $l$, and the bulk of the system is effectively tree-like. By mapping each check node to a polygon with $K$ bit nodes as vertices, one can map a Tanner graph into a Husimi lattice that is effectively a tree for any number of generations of order less than $\ln M$. It is experimentally observed that the number of iterations of (8) required for convergence does not scale with the system size; it is therefore expected that the interior of a tree-like lattice approximates a Gallager code with increasing accuracy as the system size increases. Fig. 2a shows that the approximation is fairly good even for sizes as small as $M = 100$.

5 Conclusions

To summarize, we solved exactly, without resorting to the replica method, a system representing a Gallager code on a Husimi cactus. The results obtained are in agreement with the replica symmetric calculation and with numerical experiments carried out in systems of moderate size. The framework can be easily extended to MN and similar codes. New insights on the decoding process are obtained by looking at a proper free-energy landscape.
We believe that methods of statistical physics are complementary to those used in the statistical inference community and can enhance our understanding of general graphical models.

Acknowledgments

We acknowledge support from EPSRC (GR/N00562), The Royal Society (RV, DS) and from the JSPS RFTF program (YK).

References

[1] Plefka, T. (1982) Convergence condition of the TAP equation for the infinite-ranged Ising spin glass model. Journal of Physics A 15, 1971-1978.
[2] Tanaka, T. Information geometry of mean-field approximation. To appear in Neural Computation.
[3] Saul, L.K. & M.I. Jordan (1996) Exploiting tractable substructures in intractable networks. In Touretzky, D.S., M.C. Mozer and M.E. Hasselmo (eds.), Advances in Neural Information Processing Systems 8, pp. 486-492. Cambridge, MA: MIT Press.
[4] Frey, B.J. & D.J.C. MacKay (1998) A revolution: belief propagation in graphs with cycles. In Jordan, M.I., M.J. Kearns and S.A. Solla (eds.), Advances in Neural Information Processing Systems 10, pp. 479-485. Cambridge, MA: MIT Press.
[5] Berrou, C. & A. Glavieux (1996) Near optimum error correcting coding and decoding: turbo-codes. IEEE Transactions on Communications 44, 1261-1271.
[6] Gallager, R.G. (1963) Low-density Parity-check Codes. Cambridge, MA: MIT Press.
[7] MacKay, D.J.C. (1999) Good error-correcting codes based on very sparse matrices. IEEE Transactions on Information Theory 45, 399-431.
[8] Kanter, I. & D. Saad (2000) Finite-size effects and error-free communication in Gaussian channels. Journal of Physics A 33, 1675-1681.
[9] Sourlas, N. (1989) Spin-glass models as error-correcting codes. Nature 339, 693-695.
[10] Derrida, B. (1981) Random-energy model: an exactly solvable model of disordered systems. Physical Review B 24(5), 2613-2626.
[11] Vicente, R., D. Saad & Y. Kabashima (1999) Finite-connectivity systems as error-correcting codes. Physical Review E 60(5), 5352-5366.
[12] Kabashima, Y., T. Murayama & D. Saad (2000) Typical performance of Gallager-type error-correcting codes. Physical Review Letters 84(6), 1355-1358.
[13] Montanari, A. & N. Sourlas (2000) The statistical mechanics of turbo codes. European Physical Journal B 18, 107-119.
[14] Sherrington, D. & K.Y.M. Wong (1987) Graph bipartitioning and the Bethe spin glass. Journal of Physics A 20, L785-L791.
[15] Kabashima, Y. & D. Saad (1998) Belief propagation vs. TAP for decoding corrupted messages. Europhysics Letters 44(5), 668-674.
[16] Kschischang, F.R. & B.J. Frey (1998) Iterative decoding of compound codes by probability propagation in graphical models. IEEE Journal on Selected Areas in Communications 16(2), 153-159.
[17] Gujrati, P.D. (1995) Bethe or Bethe-like lattice calculations are more reliable than conventional mean-field calculations. Physical Review Letters 74(5), 809-812.
[18] Rieger, H. & T.R. Kirkpatrick (1992) Disordered p-spin interaction models on Husimi trees. Physical Review B 45(17), 9772-9777.
2000
35
1,834
Homeostasis in a Silicon Integrate and Fire Neuron

Shih-Chii Liu
Institute for Neuroinformatics, ETHZ/UNIZ
Winterthurstrasse 190, CH-8057 Zurich, Switzerland
shih@ini.phys.ethz.ch

Bradley A. Minch
School of Electrical and Computer Engineering, Cornell University
Ithaca, NY 14853-5401, U.S.A.
minch@ee.cornell.edu

Abstract

In this work, we explore homeostasis in a silicon integrate-and-fire neuron. The neuron adapts its firing rate over long time periods, on the order of seconds or minutes, so that it returns to its spontaneous firing rate after a lasting perturbation. Homeostasis is implemented via two schemes. One scheme looks at the presynaptic activity and adapts the synaptic weight depending on the presynaptic spiking rate. The second scheme adapts the synaptic "threshold" depending on the neuron's activity: the threshold is lowered if the neuron's activity decreases over a long time and is increased for a prolonged increase in postsynaptic activity. Both mechanisms for adaptation use floating-gate technology. The results shown here are measured from a chip fabricated in a 2-μm CMOS process.

1 Introduction

We explored long time constant adaptation mechanisms in a simple integrate-and-fire silicon neuron. Many researchers have postulated adaptation mechanisms which, for example, preserve the firing rate of the neuron over long time intervals (Liu et al. 1998), or use the presynaptic spiking statistics to adapt the spiking rate of the neuron so that the distribution of this spiking rate is uniform (Stemmler and Koch 1999). Homeostasis is observed in in-vitro recordings (Desai et al. 1999): if the K or Na conductances are perturbed by adding antagonists, the cell returns to its original spiking rate in a couple of days.
This work differs from previous work that explores the adaptation of the firing threshold and the gain of the neuron through the regulation of Hodgkin-Huxley-like conductances (Shin and Koch 1999) and the regulation of the neuron against perturbations in the conductances (Simoni and DeWeerth 1999). Our neuron circuit is a simple integrate-and-fire neuron, and our adaptation mechanisms have time constants of seconds to minutes. We also describe adaptation of the synaptic weight to presynaptic spiking rates. This presynaptic adaptation models the contrast gain control curves of cortical simple cells (Ohzawa et al. 1985).

Figure 1: Schematic of neuron circuit with long time constant mechanisms for presynaptic adaptation.

We fabricated two different circuits in a 2-μm CMOS process. One circuit implements presynaptic adaptation and the other implements postsynaptic adaptation. The long time constant adaptation mechanisms use tunnelling and injection mechanisms to remove charge from and add charge onto a floating gate (Diorio et al. 1999). We added these mechanisms to a simple integrate-and-fire neuron circuit (Mead 1989). This circuit (shown in Figure 1) takes an input current, $I_{epsc}$, which charges up the membrane voltage, $V_m$. When the membrane exceeds a threshold, the output of the neuron, $V_o$, spikes. The spiking rate of the neuron, $f_o$, is determined by the input current: $f_o = m\, I_{epsc}$, where $m = \frac{1}{(C_1 + C_2)\,V_{dd}}$ is a constant.

2 Adaptation mechanisms in silicon neuron circuit

In order to permit continuous operation with only positive-polarity bias voltages, we use two distinct mechanisms to modify the floating-gate charges in our neuron circuits. We use Fowler-Nordheim tunneling through high-quality gate oxide to remove electrons from the floating gates (Lenzlinger and Snow 1969).
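The linear rate relation $f_o = m\,I_{epsc}$ follows from a leak-free integrator that resets on every threshold crossing, and it is easy to check numerically. The sketch below uses illustrative component values (they are not the chip's actual capacitances or voltages):

```python
def iaf_rate(I_epsc, C=1e-12, V_th=1.0, dt=1e-6, T=0.205):
    """Euler simulation of a leak-free integrate-and-fire unit: the membrane
    charges at I/C volts per second and subtracts V_th on each spike."""
    v, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v += I_epsc / C * dt
        if v >= V_th:
            v -= V_th
            spikes += 1
    return spikes / T

m = 1.0 / (1e-12 * 1.0)                 # m = 1/(C * V_th), cf. f_o = m * I_epsc
for I in (20e-12, 40e-12, 80e-12):      # 20, 40, 80 pA -> 20, 40, 80 Hz
    assert abs(iaf_rate(I) - m * I) / (m * I) < 0.05
```

Doubling the input current doubles the firing rate, which is the linearity that both adaptation schemes below exploit: regulating the effective input current is equivalent to regulating the output rate.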
Here, we apply a large voltage across the oxide, which reduces the width of the Si-SiO2 energy barrier to such an extent that electrons are likely to tunnel through the barrier. The tunneling current is given approximately by
$$I_{tun} = I_{ot}\, e^{-V_o/V_{ox}},$$
where $V_{ox} = V_{tun} - V_{fg}$ is the voltage across the tunneling oxide, and $I_{ot}$ and $V_o$ are measurable device parameters. For the 400-Å oxides that are typical of a 2-μm CMOS process, a typical value of $V_o$ is 1000 V, and an oxide voltage of about 30 V is required to obtain an appreciable tunneling current. We use subthreshold channel hot-electron injection in an nMOS transistor (Diorio, Minch, and Hasler 1999) to add electrons to the floating gates. In this process, electrons in the channel of the nMOS transistor accelerate in the high electric field that exists in the depletion region near the drain, gaining enough energy to surmount the Si-SiO2 energy barrier (about 3.2 eV). To facilitate the hot-electron injection process, we locally increase the substrate doping density of the nMOS transistor using the p-base layer that is normally used to form the base of a vertical npn bipolar transistor. The p-base substrate implant simultaneously increases the electric field at the drain end of the channel and increases the nMOS transistor's threshold voltage from 0.8 V to about 6 V, permitting subthreshold operation at gate voltages that allow the collection of the injected electrons by the floating gate. The hot-electron injection current is given approximately by
$$I_{inj} = \eta\, I_s\, e^{\Phi_{dc}/V_{inj}},$$
where $I_s$ is the source current, $\Phi_{dc}$ is the drain-to-channel voltage, and $\eta$ and $V_{inj}$ are measurable device parameters. The value of $V_{inj}$ is a bias-dependent injection parameter and typically ranges from 60 mV to 0.1 V.
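The exponential form of the tunneling current explains why the mechanism acts as a slow, nearly constant leak: with the representative $V_o \approx 1000\,\mathrm{V}$ quoted above, a few volts of change in the oxide voltage moves the current by orders of magnitude. A quick numeric check (with $I_{ot}$ normalized to 1, so only the attenuation factor is computed):

```python
import math

# Fowler-Nordheim tunneling attenuation, I_tun / I_ot = exp(-V_o / V_ox)
V_o = 1000.0
attenuation_30V = math.exp(-V_o / 30.0)   # oxide voltage ~30 V: tunneling "on"
attenuation_25V = math.exp(-V_o / 25.0)   # 5 V less across the oxide

# the current itself is minuscule, and a 5 V drop cuts it by over 100x
assert attenuation_30V < 1e-14
assert attenuation_25V / attenuation_30V < 1e-2
```

This steepness is what lets a single bias voltage ($V_{tun}$) set adaptation time constants ranging from seconds to minutes.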
The synaptic current is generated by a series of two transistors; one is driven by the presynaptic input and the other by the floating-gate voltage. The floating-gate voltage stores the synaptic efficacy of the synapse. A discrete amount of charge is integrated on a diode capacitor every time there is a presynaptic spike. The charge that is dumped onto the capacitor depends on the input frequency and the synaptic weight. The excitatory postsynaptic current to the membrane of the neuron also depends on the gain of the current mirror. The tunneling mechanism, which is controlled by $V_{tun}$, is continuously on, so the synaptic efficacy slowly decreases over time. The injection mechanism is turned on only when there is a presynaptic spike. This presynaptic adaptation can model the contrast gain control curves of cortical simple cells.

3.1 Steady-state analysis

In steady state, the tunneling current $I_{tun}$ is equal to the average injection current $I_{inj}$; they are as follows:
$$I_{tun} = I_{ot}\, e^{-V_o/(V_{tun} - V_{fg0})} \qquad (1)$$
$$I_{inj} = \Big(e^{I_{opb}\, e^{\kappa V_{fg0}/U_T}\, T_\delta / Q_T} - 1\Big)\, A\, Q_T\, f_i \qquad (2)$$
where $A$ is the gain of the current-mirror integrator, $Q_T = C_d U_T/\kappa$, $V_{fg0}$ is the steady-state floating-gate voltage, $f_i$ is the presynaptic rate and $T_\delta$ is the pulse width of the presynaptic pulse. From Equations 1 and 2 we can solve for $V_{fg0}$ and thus determine the synaptic current $I_{syn}$:
$$I_{syn} = I_{opb}\, e^{\kappa V_{fg0}/U_T} = I_m/(f_i T_\delta)^{\beta},$$
where $I_m$ is a constant prefactor and $\beta$ is approximately 1. The steady-state input current is given by $I_{epsc} = I_{syn}\, T_\delta\, A\, f_i \approx I_m A$; it is thus independent of the presynaptic input frequency.

3.2 Transient analysis

With a transient change $df_i$ in the presynaptic frequency $f_i$, the initial postsynaptic frequency is given by
$$f_o + df_o = m\, I_m\, A\, \frac{f_i + df_i}{f_i}. \qquad (3)$$

Figure 2: Adaptation curves of synaptic efficacy to presynaptic frequencies using long time constant adaptation mechanisms (postsynaptic frequency (Hz) versus presynaptic frequency (Hz); the steep segments show the transient gain).

As derived from Equation 3, the transient change in the neuron's spiking rate depends on the contrast of the input spiking rate:
$$df_o = m\, I_m\, A\, \frac{df_i}{f_i} = f_o\, \frac{df_i}{f_i} \;\Rightarrow\; \frac{df_o}{df_i} = \frac{f_o}{f_i}. \qquad (4)$$
Hence, the transient gain of the neuron is equal to the ratio of the postsynaptic spiking rate to the presynaptic input rate, and it decreases with the input rate.

3.3 Experimental results

We measured the transient and steady-state spiking rates of the neuron around four different steady-state presynaptic rates of 100 Hz, 150 Hz, 200 Hz, and 250 Hz. In these measurements, the drain of the pbase injection transistor was set at 4 V and the tunnelling voltage was set at 35.3 V. For each steady-state presynaptic rate, we presented step increases and decreases in the presynaptic rate of 15 Hz, 30 Hz, 45 Hz, and 60 Hz. The instantaneous postsynaptic rate is plotted along one of the four steep curves in Figure 2. After every change in the presynaptic rate, we returned the presynaptic rate to its steady-state value before presenting the next change. The transient gain of the curves decreases for higher input spiking rates, as predicted by Equation 4. We also recorded the dynamics of the adaptation mechanisms by measuring the spiking rate of the neuron when the presynaptic frequency was decreased at time $t=0$ from 350 Hz to 300 Hz, as shown in Figure 3. The system adapts over a time constant of minutes back to the initial output frequency. These data show that the synaptic efficacy adapted to a higher weight value over time. The time constant of adaptation can be increased by either increasing the tunnelling voltage or the pbase injector's drain voltage, $V_d$.

Figure 3: Temporal adaptation of the spiking rate of the neuron to a decrease in the presynaptic frequency from 350 Hz to 300 Hz (output frequency (Hz) versus time (sec)). The smooth line is an exponential fit to the data curve.

4 Postsynaptic adaptation

In the second mechanism, the neuron's spiking rate determines the synaptic "threshold". The schematic of this adaptation circuitry is shown in Figure 4. The floating-gate pbase transistor provides a quiescent input to the neuron so that the neuron fires at a quiescent rate. The tunneling mechanism is always turned on, so the neuron's spiking rate increases in time if the neuron does not spike. The injection mechanism, however, turns on when the neuron spikes. The time constants of these mechanisms are on the order of seconds to minutes. An increase in the floating-gate voltage is equivalent to a decrease in the synaptic threshold. If the neuron's activity is high, the injection mechanism turns on, decreasing the floating-gate voltage and the input current to the neuron. These two opposing mechanisms ensure that the cell remains at a constant activity under steady-state conditions. In other words, the threshold of the neuron is modulated by its output spiking rate: the threshold continuously decreases, and each output spike increases it.

Figure 4: Schematic of neuron circuit with long time constant mechanisms for postsynaptic adaptation.

4.1 Steady-state analysis

Equations similar to those in Section 3.1 can be used to solve for $V_{fg0}$, leading to the following expression for the steady-state input current:
$$I_{in0} = I_{opb}\, e^{\kappa V_{fg0}/U_T} = I_m/(f_o T_\delta)^{\gamma},$$
where $I_m$ is a constant prefactor and $\gamma$ is close to 1.

4.2 Transient analysis

When a positive step voltage is applied to $V_{in}$, the step change $\Delta V$ is coupled into the floating gate. The initial transient current is $I_{in0}\, e^{\kappa \Delta V/U_T}$, and the initial increase in the postsynaptic firing rate is
$$f_o + df_o = f_o\, e^{\kappa \Delta V/U_T}.$$
If we assume that the step input is $V_{in} = \log(f_i)$ (where $f_i$ is the firing rate of the presynaptic neuron), then the change in the floating-gate voltage is described by $\Delta V = df_i/f_i$. We then solve for $df_o$:
$$\frac{df_o}{f_o} = e^{\kappa \Delta V/U_T} - 1 \approx \frac{\kappa}{U_T}\, \frac{df_i}{f_i}. \qquad (5)$$
Equation 5 shows that the transient change in the neuron's spiking rate is proportional to the input contrast in the firing rate. With time, the floating-gate voltage adapts back to the steady-state condition, so the spiking rate returns to $f_o$.

4.3 Experimental results

In these experiments, we set the tunneling voltage $V_{tun}$ to 28 V and the injection voltage to 6.6 V. We coupled a step decrease of 0.2 V into the floating-gate voltage and then measured the output frequency of the neuron over a period of 10 minutes. The output of this experiment is shown in Figure 5. The frequency dropped from about 19 Hz to 13 Hz, but the circuit adapted after this initial perturbation and the spiking rate of the neuron returned to about 19 Hz over 26 min. A similar experiment was performed, but this time a step increase of 0.2 V was coupled into the floating-gate node (also shown in Figure 5). Initially, the neuron's rate increased from 20 Hz to 28 Hz, but over a long period of minutes the firing rate returned to 20 Hz.

5 Conclusion

In this work, we show how long time constant adaptation mechanisms can be added to a silicon integrate-and-fire neuron in a normal CMOS process. These homeostatic mechanisms can be combined with short time constant depressing synapses on the same neuron to provide a range of adapting mechanisms. The presynaptic adaptation mechanism can also account for the contrast gain curves of cortical simple cells.
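The postsynaptic scheme can be abstracted as a rate-level negative-feedback loop: tunneling raises the floating-gate drive at a roughly constant rate, while each output spike injects and lowers it. The sketch below is a loose behavioral abstraction of that loop, not a circuit simulation; all constants are illustrative, and the spike-driven injection is averaged into a term proportional to the firing rate.

```python
# Minimal rate-level sketch of the homeostatic loop: the drive w sets the
# firing rate (f_o = m * w), tunneling adds to w at rate k_tun, and
# injection removes k_inj per spike, i.e. dw/dt = k_tun - k_inj * f_o.
m = 1.0                    # Hz per unit of input drive
k_tun, k_inj = 20.0, 1.0   # tunneling rate, injection per output spike
dt, T = 0.01, 60.0         # Euler step (s) and simulated duration (s)

def settle(w0):
    """Relax the drive from a perturbed value w0 and return the final rate."""
    w = w0
    for _ in range(int(T / dt)):
        f_o = m * w
        w += (k_tun - k_inj * f_o) * dt
    return m * w

f_star = k_tun / k_inj                 # equilibrium rate set by the two biases
assert abs(settle(0.7 * f_star / m) - f_star) < 0.5   # recovers from a step down
assert abs(settle(1.4 * f_star / m) - f_star) < 0.5   # ...and from a step up
```

The loop relaxes exponentially with time constant $1/(k_{inj}\,m)$, mirroring the measured behavior in Figure 5: a perturbation in either direction decays back to the quiescent rate, which is fixed by the ratio of the tunneling and injection strengths rather than by the perturbation.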
Figure 5: Response of the silicon neuron to an increase and a decrease of a 0.2 V step input (output frequency (Hz) versus time (sec)). The curve shows that the adaptation time constant is on the order of about 10 min.

Acknowledgments

We thank Rodney Douglas for supporting this work, the MOSIS foundation for fabricating this circuit, and Tobias Delbrück for proofreading this document. This work was supported in part by the Swiss National Foundation Research SPP grant and the U.S. Office of Naval Research.

References

Desai, N., L. Rutherford, and G. Turrigiano (1999, Jun). Plasticity in the intrinsic excitability of cortical pyramidal neurons. Nature Neuroscience 2(6), 515-520.
Diorio, C., B. A. Minch, and P. Hasler (1999). Floating-gate MOS learning systems. Proceedings of the International Symposium on the Future of Intellectual Integrated Electronics (ISF/IE), 515-524.
Lenzlinger, M. and E. H. Snow (1969). Fowler-Nordheim tunneling into thermally grown SiO2. Journal of Applied Physics 40, 278-283.
Liu, Z., J. Golowasch, E. Marder, and L. Abbott (1998). A model neuron with activity-dependent conductances regulated by multiple calcium sensors. Journal of Neuroscience 18(7), 2309-2320.
Mead, C. (1989). Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley.
Ohzawa, I., G. Sclar, and R. Freeman (1985). Contrast gain control in the cat's visual system. Journal of Neurophysiology 54, 651-667.
Shin, J. and C. Koch (1999). Dynamic range and sensitivity adaptation in a silicon spiking neuron. IEEE Transactions on Neural Networks 10(5), 1232-1238.
Simoni, M. and S. DeWeerth (1999). Adaptation in an aVLSI model of a neuron. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 46(7), 967-970.
Stemmler, M. and C. Koch (1999). How voltage-dependent conductances can adapt to maximize the information encoded by neuronal firing rate. Nature Neuroscience 2(6), 521-527.
2000
Incorporating Second-Order Functional Knowledge for Better Option Pricing

Charles Dugas, Yoshua Bengio, François Bélisle, Claude Nadeau*, René Garcia
CIRANO, Montreal, Qc, Canada H3A 2A5
{dugas,bengioy,belislfr,nadeauc}@iro.umontreal.ca, garciar@cirano.qc.ca

Abstract

Incorporating prior knowledge of a particular task into the architecture of a learning algorithm can greatly improve generalization performance. We study here a case where we know that the function to be learned is non-decreasing in two of its arguments and convex in one of them. For this purpose we propose a class of functions similar to multi-layer neural networks but (1) that has those properties, and (2) that is a universal approximator of continuous functions with these and other properties. We apply this new class of functions to the task of modeling the price of call options. Experiments show improvements on regressing the price of call options when using the new types of function classes that incorporate the a priori constraints.

1 Introduction

Incorporating a priori knowledge of a particular task into a learning algorithm helps reduce the necessary complexity of the learner and generally improves performance, if the incorporated knowledge is relevant to the task and really corresponds to the generating process of the data. In this paper we consider prior knowledge on the positivity of some first and second derivatives of the function to be learned. In particular, such constraints have applications to modeling the price of European stock options. Based on the Black-Scholes formula, the price of a call stock option is monotonically increasing in both the "moneyness" and the time to maturity of the option, and it is convex in the "moneyness". Section 3 explains these terms and stock options in more detail.
For a function f(x_1, x_2) of two real-valued arguments, this corresponds to the following properties:

∂f/∂x_1 ≥ 0,   ∂f/∂x_2 ≥ 0,   ∂²f/∂x_1² ≥ 0.   (1)

The mathematical results of this paper (section 2) are the following: first, we introduce a class of one-argument functions (similar to neural networks) that is positive, non-decreasing and convex in its argument, and we show that this class of functions is a universal approximator for positive functions with positive first and second derivatives. Second, in the main theorem, we extend this result to functions of two or more arguments, with some having the convexity property and all having positive first derivatives. This result rests on additional properties of the cross-derivatives, which we illustrate below for the case of two arguments:

∂²f/∂x_1∂x_2 ≥ 0,   ∂³f/∂x_1²∂x_2 ≥ 0.   (2)

(*C.N. is now with Health Canada, at Claude.Nadeau@hc-sc.gc.ca.)

Comparative experiments on these new classes of functions were performed on stock option prices, showing some improvements when using these new classes rather than ordinary feedforward neural networks. The improvements appear to be non-stationary, but the new class of functions shows the most stable behavior in predicting future prices. The detailed results are presented in section 5.

2 Theory

Definition. A class of functions F̂ from R^n to R is a universal approximator for a class of functions F from R^n to R if for any f ∈ F, any compact domain D ⊂ R^n, and any positive ε, one can find an f̂ ∈ F̂ with sup_{x∈D} |f(x) − f̂(x)| ≤ ε.

It has already been shown that the class of artificial neural networks with one hidden layer,

N = { f(x) = b_0 + Σ_{i=1}^{H} w_i h(b_i + Σ_j v_ij x_j) },   (3)

e.g. with a sigmoid activation function h(s) = 1/(1 + e^{−s}), are universal approximators of continuous functions [1, 2, 5].
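For concreteness, the one-hidden-layer class of equation 3 can be sketched in a few lines of Python (an illustrative toy, not the authors' code):

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def hn(x, b0, w, b, v):
    """Eq. 3: f(x) = b0 + sum_i w_i * h(b_i + sum_j v_ij * x_j)."""
    return b0 + sum(
        wi * sigmoid(bi + sum(vij * xj for vij, xj in zip(vi, x)))
        for wi, bi, vi in zip(w, b, v)
    )

# One hidden unit, zero biases, unit input weight: f(0) = 2 * h(0) = 1.
value = hn([0.0], b0=0.0, w=[2.0], b=[0.0], v=[[1.0]])
```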
The number of hidden units H of the neural network is a hyper-parameter that controls the accuracy of the approximation; it should be chosen to balance the trade-off between accuracy (bias of the class of functions) and variance (due to the finite sample used to estimate the parameters of the model); see also [6]. Since h is monotonically increasing, it is easy to force the first derivatives with respect to x to be positive by forcing the weights to be positive, for example with the exponential function:

N_+ = { f(x) = b_0 + Σ_{i=1}^{H} e^{w_i} h(b_i + Σ_j e^{v_ij} x_j) },   (4)

because h′(s) = h(s)(1 − h(s)) > 0. Since the sigmoid h has a positive first derivative, its primitive, which we call softplus, is convex:

ζ(s) = log(1 + e^s),   (5)

i.e., dζ(s)/ds = h(s) = 1/(1 + e^{−s}). The basic idea of the proposed class of functions cN_++ is to replace the sigmoid of a sum by a product of softplus or sigmoid functions over each of the dimensions (using the softplus over the convex dimensions and the sigmoid over the others):

cN_++ = { f(x) = e^{b_0} + Σ_{i=1}^{H} e^{w_i} ( Π_{j=1}^{c} ζ(b_ij + e^{v_ij} x_j) ) ( Π_{j=c+1}^{n} h(b_ij + e^{v_ij} x_j) ) }.   (6)

One can readily check that the first derivatives w.r.t. x_j are positive, and that the second derivatives w.r.t. x_j for j ≤ c are positive. However, this class of functions has other properties. Let (j_1, ..., j_m) be a set of indices with 1 ≤ j_i ≤ c (convex dimensions), and let (j′_1, ..., j′_p) be a set of indices with c + 1 ≤ j′_i ≤ n (the other dimensions); then

∂^{m+p} f / (∂x_{j_1} ... ∂x_{j_m} ∂x_{j′_1} ... ∂x_{j′_p}) ≥ 0,   ∂^{2m+p} f / (∂x²_{j_1} ... ∂x²_{j_m} ∂x_{j′_1} ... ∂x_{j′_p}) ≥ 0.   (7)

Note that m or p can be 0, so as special cases we find that f is positive, that it is monotonically increasing w.r.t. all its inputs, and convex w.r.t. the first c inputs.

2.1 Universality of cN_++ over R^n

Theorem. Within the set F_++ of continuous functions from R^n to R whose first and second derivatives are non-negative (as specified by equation 7), the class cN_++ is a universal approximator.
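A direct, illustrative implementation of the class of equation 6 makes the stated derivative properties easy to spot-check numerically; the parameter values below are arbitrary, not fitted:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def softplus(s):
    return math.log1p(math.exp(s))

def cnpp(x, b0, w, b, v, c):
    """Eq. 6: f(x) = e^{b0} + sum_i e^{w_i} * prod_{j<=c} softplus(.) * prod_{j>c} sigmoid(.)."""
    total = math.exp(b0)
    for wi, bi, vi in zip(w, b, v):
        term = math.exp(wi)
        for j, xj in enumerate(x):
            a = bi[j] + math.exp(vi[j]) * xj
            term *= softplus(a) if j < c else sigmoid(a)
        total += term
    return total

# One hidden unit, n = 2 inputs, c = 1 convex dimension, arbitrary parameters.
f = lambda x1, x2: cnpp([x1, x2], 0.0, [0.0], [[0.0, 0.0]], [[0.0, 0.0]], 1)
h = 1e-4
d1 = (f(0.5 + h, 0.3) - f(0.5 - h, 0.3)) / (2 * h)                  # df/dx1
d2 = (f(0.5, 0.3 + h) - f(0.5, 0.3 - h)) / (2 * h)                  # df/dx2
d11 = (f(0.5 + h, 0.3) - 2 * f(0.5, 0.3) + f(0.5 - h, 0.3)) / h**2  # d2f/dx1^2
```

The finite differences confirm, for this instance, positivity of the function, positive first derivatives in both inputs, and convexity along the first (softplus) dimension.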
Proof. For lack of space we only show here a sketch of the proof, and only for the case n = 2 and c = 1 (one convex dimension and one other dimension), but the same principle allows one to prove the more general case. Let f(x) ∈ F_++ be the function to approximate with a function g ∈ cN_++. To perform our approximation we will restrict g to the subset of cN_++ where the sigmoid becomes a step function θ(x) = 1_{x>0} and the softplus becomes the positive-part function x_+ = max(0, x). Let D be the compact domain of interest and ε the desired approximation precision. We focus our attention on an axis-aligned rectangle T with lower-left corner (a_1, b_1) and upper-right corner (a_2, b_2), such that it is the smallest such rectangle enclosing D and it can be partitioned into squares of side L forming a grid such that the value of f at neighboring grid points does not differ by more than ε. The number of grid squares on the x_1 axis is N_1 and the number on the x_2 axis is N_2. The number of hidden units is H = (N_1 + 1)(N_2 + 1). Let x_ij = (x_i, x_j) = (a_1 + iL, b_1 + jL) be the grid points, with i = 0, 1, ..., N_1 and j = 0, 1, ..., N_2; also, x = (x_1, x_2). With k = i(N_2 + 1) + j, we recursively build a series of functions g_k(x) for k = 1 to H, each adding an increment to the previous one, with initial approximation g_0 = f(a_1, b_1). The final approximation is g(x) = g_H(x). It is exact at every single point on the grid and within ε of the true function value anywhere within D. To prove this, we need to show that at every step of the recursive procedure, the necessary increment is non-negative (since it must be equated with e^{w_k}). First note that the value of g_H(x_ij) is affected only by the set of increments Δ_st for which s ≤ i and t ≤ j, so that

f(x_ij) = g_H(x_ij) = Σ_{s=0}^{i} Σ_{t=0}^{j} Δ_st (i − s + 1)L.

Isolating Δ_ij and doing some algebra, we get Δ_ij = Δ³_{x_1,x_1,x_2} g_H(x_ij) L², where Δ³_{x_i,x_j,x_k} is the third-degree finite difference with respect to arguments x_i, x_j, x_k, i.e.
Δ³_{x_1,x_1,x_2} f(x_1, x_2) = (Δ²_{x_1,x_2} f(x_1, x_2) − Δ²_{x_1,x_2} f(x_1 − L, x_2))/L, where similarly Δ²_{x_1,x_2} f(x_1, x_2) = (Δ_{x_1} f(x_1, x_2) − Δ_{x_1} f(x_1, x_2 − L))/L, and Δ_{x_1} f(x_1, x_2) = (f(x_1, x_2) − f(x_1 − L, x_2))/L. By the mean value theorem, the third-degree finite difference is non-negative if the corresponding third derivative is non-negative everywhere over the finite interval, which is guaranteed by constraint 7. Finally, the third-degree finite difference being non-negative, the corresponding increment is also non-negative, and this completes the proof.

Corollary. Within the set of positive continuous functions from R to R whose first and second derivatives are non-negative, the class N_++ (the one-argument version of cN_++) is a universal approximator.

3 Estimating Call Option Prices

An option is a contract between two parties that entitles the buyer to a claim at a future date T that depends on the future price S_T of an underlying asset whose price at time t is S_t. In this paper we consider the very common European call options, in which the value of the claim at maturity (time T) is max(0, S_T − K); i.e., if the price is above the strike price K, then the seller of the option owes S_T − K dollars to the buyer. In the no-arbitrage framework, the call function is believed to be a function of the actual market price of the security (S_t), the strike price (K), the remaining time to maturity (τ = T − t), the risk-free interest rate (r), and the volatility of the return (σ). The challenge is to evaluate the value of the option prior to the expiration date, before entering a transaction. The risk-free interest rate (r) needs to be somehow extracted from the term structure, and the volatility (σ) needs to be forecast, this latter task being a field of research in itself. We have previously tried [3] to feed neural networks with estimates of the volatility using historical averages, but so far the gains have remained insignificant. We therefore drop these two features and rely on the ones that can be observed: S_t, K, τ.
One more important result is that, under mild conditions, the call option function is homogeneous of degree one with respect to the strike price, so that our final approximation depends on two variables: the moneyness (M = S_t/K) and the time to maturity (τ):

c_t/K = f(M, τ).   (8)

The economic theory yielding the Black-Scholes formula suggests that f has the properties of (1), so we will evaluate the advantages brought by the function classes of the previous section. However, it is not clear whether the constraints on the cross-derivatives that are incorporated in cN_++ should be present in the true price function or not. It is known that the Black-Scholes formula does not adequately represent the market pricing of options, but it might still be a useful guide in designing a learning algorithm for option prices.

4 Experimental Setup

As a reference model, we use a simple multi-layered perceptron with one hidden layer (eq. 3). We also compare our results with a recently proposed model [4] that closely resembles the Black-Scholes formula for option pricing (i.e., another way to incorporate possibly useful prior knowledge):

y = α + M Σ_{i=1}^{n_h} β_{1,i} h(γ_{i,0} + γ_{i,1} M + γ_{i,2} τ) + e^{−rτ} Σ_{i=1}^{n_h} β_{2,i} h(γ_{i,3} + γ_{i,4} M + γ_{i,5} τ).   (9)

We evaluate two new architectures incorporating some or all of the constraints defined in equation 7. We used European call option data from 1988 to 1993: a total of 43,518 transaction prices on European call options on the S&P 500 index. In section 5, we report results on the 1988 data. In each case, we used the first two quarters of 1988 as a training set (3434 examples), the third quarter as a validation set (1642 examples) for model selection, and 4 to 20 quarters as test sets (each with around 1500 examples) for final generalization error estimation.
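The homogeneity behind equation 8 can be illustrated with the standard Black-Scholes call formula, used here only as a sanity check and not as the learned model: scaling S_t and K together scales the price by the same factor, so c_t/K depends only on M = S_t/K and τ.

```python
import math

def norm_cdf(d):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))

def bs_call(S, K, tau, r, sigma):
    """Standard Black-Scholes European call price."""
    sq = sigma * math.sqrt(tau)
    d1 = (math.log(S / K) + (r + 0.5 * sigma * sigma) * tau) / sq
    d2 = d1 - sq
    return S * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)

c1 = bs_call(100.0, 95.0, 0.5, 0.05, 0.2)
c2 = bs_call(200.0, 190.0, 0.5, 0.05, 0.2)   # same moneyness, doubled scale
```

Doubling both S and K doubles the price, and c/K is identical in the two cases, which is exactly the degree-one homogeneity that reduces the input space to (M, τ).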
In tables 1 and 2, we present results for networks with unconstrained weights on the left-hand side, and weights constrained to positive and monotone functions through exponentiation of parameters on the right-hand side. For each model, the number of hidden units varies from one to nine. The mean squared error results reported were obtained as follows: first, we randomly sampled the parameter space 1000 times. We picked the best (lowest training error) model and trained it up to 1000 more times. Repeating this procedure 10 times, we selected and averaged the performance of the best of these 10 models (those with training error no more than 10% worse than the best out of 10). In figure 1, we present tests of the same models on each quarter up to and including 1993 (20 additional test sets) in order to assess the persistence (conversely, the degradation through time) of the trained models.

5 Forecasting Results

Simple Multi-Layered Perceptrons
Mean Squared Error Results on Call Option Pricing (x 10^-4)

        Unconstrained weights          Constrained weights
Units   Train  Valid  Test1  Test2     Train  Valid  Test1  Test2
1       2.38   1.92   2.73   6.06      2.67   2.32   3.02   3.60
2       1.68   1.76   1.51   5.70      2.63   2.14   3.08   3.81
3       1.40   1.39   1.27   27.31     2.63   2.15   3.07   3.79
4       1.42   1.44   1.25   27.32     2.65   2.24   3.05   3.70
5       1.40   1.38   1.27   30.56     2.67   2.29   3.03   3.64
6       1.41   1.43   1.24   33.12     2.63   2.14   3.08   3.81
7       1.41   1.41   1.26   33.49     2.65   2.23   3.05   3.71
8       1.41   1.43   1.24   39.72     2.63   2.14   3.07   3.80
9       1.40   1.41   1.24   38.07     2.66   2.27   3.04   3.67

Black-Scholes Similar Networks
Mean Squared Error Results on Call Option Pricing (x 10^-4)

        Unconstrained weights          Constrained weights
Units   Train  Valid  Test1  Test2     Train  Valid  Test1  Test2
1       1.54   1.58   1.40   4.70      2.49   2.17   2.78   3.61
2       1.42   1.42   1.27   24.53     1.90   1.71   2.05   3.19
3       1.40   1.41   1.24   30.83     1.88   1.73   2.00   3.72
4       1.40   1.39   1.27   31.43     1.85   1.70   1.96   3.15
5       1.40   1.40   1.25   30.82     1.87   1.70   2.01   3.51
6       1.41   1.42   1.25   35.77     1.89   1.70   2.04   3.19
7       1.40   1.40   1.25   35.97     1.87   1.72   1.98   3.12
8       1.40   1.40   1.25   34.68     1.86   1.69   1.98   3.25
9       1.42   1.43   1.26   32.65     1.92   1.73   2.08   3.17

Table 1: Left: the parameters are free to take on negative values. Right: parameters are constrained through exponentiation so that the resulting function is positive and monotonically increasing everywhere w.r.t. both inputs. Top: regular feedforward artificial neural networks. Bottom: neural networks with an architecture resembling the Black-Scholes formula, as defined in equation 9. The number of hidden units varies from 1 to 9 for each architecture. The first two quarters of 1988 were used for training, the third quarter of 1988 for validation, and the fourth quarter of 1988 for testing (Test1). The first quarter of 1989 was used as a second test set (Test2) to assess the persistence of the models through time (figure 1). In bold: test results for models with best validation results.

As can be seen in tables 1 and 2, the positivity constraints obtained through exponentiation of the weights allow the networks to avoid overfitting. The training errors are generally slightly lower for the networks with unconstrained weights and the validation errors are similar, but the final test errors are disastrous for unconstrained networks compared to the constrained ones. This "liftoff" pattern in the training, validation and testing errors has drawn our attention towards the analysis of the evolution of the test error through time.
The unconstrained networks obtain better training, validation and testing (Test1) results but fail on the extra test set (Test2). Constrained architectures seem more robust to changes in the underlying econometric conditions. The constrained Black-Scholes-similar model performs slightly better than the other models on the second test set but then fails on later quarters (figure 1).

Products of SoftPlus and Sigmoid Functions
Mean Squared Error Results on Call Option Pricing (x 10^-4)

        Unconstrained weights          Constrained weights
Units   Train  Valid  Test1  Test2     Train  Valid  Test1  Test2
1       2.27   2.15   2.35   3.27      2.28   2.14   2.37   3.51
2       1.61   1.58   1.58   14.24     2.28   2.13   2.37   3.48
3       1.51   1.53   1.38   18.16     2.28   2.13   2.36   3.48
4       1.46   1.51   1.29   20.14     1.84   1.54   1.97   4.19
5       1.57   1.57   1.46   10.03     1.83   1.56   1.95   4.18
6       1.51   1.53   1.35   22.47     1.85   1.57   1.97   4.09
7       1.62   1.67   1.46   7.78      1.86   1.55   2.00   4.10
8       1.55   1.54   1.44   11.58     1.84   1.55   1.96   4.25
9       1.46   1.47   1.31   26.13     1.87   1.60   1.97   4.12

Sums of SoftPlus and Sigmoid Functions
Mean Squared Error Results on Call Option Pricing (x 10^-4)

        Unconstrained weights          Constrained weights
Units   Train  Valid  Test1  Test2     Train  Valid  Test1  Test2
1       1.83   1.59   1.93   4.10      2.30   2.19   2.36   3.43
2       1.42   1.45   1.26   25.00     2.29   2.19   2.34   3.39
3       1.45   1.46   1.32   35.00     1.84   1.58   1.95   4.11
4       1.56   1.69   1.33   21.80     1.85   1.56   1.99   4.09
5       1.60   1.69   1.42   10.11     1.85   1.52   2.00   4.21
6       1.57   1.66   1.39   14.99     1.86   1.54   2.00   4.12
7       1.61   1.67   1.48   8.00      1.86   1.60   1.98   3.94
8       1.64   1.72   1.48   7.89      1.85   1.54   1.98   4.25
9       1.65   1.70   1.52   6.16      1.84   1.54   1.97   4.25

Table 2: Results analogous to those of table 1, for the two new architectures. Top: products of softplus along the convex axis with sigmoid along the monotone axis. Bottom: the softplus and sigmoid functions are summed instead of multiplied. Top right: the fully constrained proposed architecture.
All in all, at the expense of slightly higher initial errors, our proposed architecture allows us to forecast with increased stability much farther into the future. This is a very welcome property, as new derivative products have a tendency to lock in values for much longer durations (up to 10 years) than traditional ones.

6 Conclusions

Motivated by prior knowledge on the derivatives of the function that gives the price of European options, we have introduced new classes of functions, similar to multi-layer neural networks, that have those properties. We have shown one of these classes to be a universal approximator for functions having those properties, and we have shown that using this a priori knowledge can help improve generalization performance. In particular, we have found that the models that incorporate this a priori knowledge generalize in a more stable way over time.

Figure 1: Out-of-sample results from the third quarter of 1988 to the fourth quarter of 1993 (incl.) for models with best validation results. Left: unconstrained models; results for the Black-Scholes-similar network. Other unconstrained models exhibit similar swinging result patterns and levels of errors. Right: constrained models; the fully constrained proposed architecture (solid), the model with sums over dimensions (similar results), the regular neural network (dotted), and the constrained Black-Scholes model, which obtains very poor results (dashed).

References

[1] G. Cybenko. Continuous valued neural networks with two hidden layers are sufficient. Technical report, Department of Computer Science, Tufts University, Medford, MA, 1988.

[2] G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2:303-314, 1989.

[3] C. Dugas, O. Bardou, and Y. Bengio. Analyses empiriques sur des transactions d'options. Technical Report 1176, Département d'informatique et de recherche opérationnelle, Université de Montréal, Montréal, Québec, Canada, 2000.

[4] R. Garcia and R. Gençay. Pricing and Hedging Derivative Securities with Neural Networks and a Homogeneity Hint. Technical Report 98s-35, CIRANO, Montréal, Québec, Canada, 1998.

[5] K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural Networks, 2:359-366, 1989.

[6] J. Moody. Prediction risk and architecture selection for neural networks. In From Statistics to Neural Networks: Theory and Pattern Recognition Applications. Springer, 1994.
2000
Spike-Timing-Dependent Learning for Oscillatory Networks

Silvia Scarpetta
Dept. of Physics "E.R. Caianiello", Salerno University, 84081 (SA), Italy, and INFM, Sezione di Salerno, Italy
scarpetta@na.infn.it

Zhaoping Li
Gatsby Computational Neuroscience Unit, University College London, WC1N 3AR, United Kingdom
zhaoping@gatsby.ucl.ac.uk

John Hertz
Nordita, 2100 Copenhagen Ø, Denmark
hertz@nordita.dk

Abstract

We apply to oscillatory networks a class of learning rules in which synaptic weights change in proportion to pre- and post-synaptic activity, with a kernel A(τ) measuring the effect for a postsynaptic spike a time τ after the presynaptic one. The resulting synaptic matrices have an outer-product form in which the oscillating patterns are represented as complex vectors. In a simple model, the even part of A(τ) enhances the resonant response to a learned stimulus by reducing the effective damping, while the odd part determines the frequency of oscillation. We relate our model to the olfactory cortex and hippocampus and their presumed roles in forming associative memories and input representations.

1 Introduction

Recent studies of synapses between neocortical and hippocampal pyramidal neurons [1, 2, 3, 4] have revealed that changes in synaptic efficacy can depend on the relative timing of pre- and postsynaptic spikes. Typically, a presynaptic spike followed by a postsynaptic one leads to an increase in efficacy (LTP), while the reverse temporal order leads to a decrease (LTD). The dependence of the change in synaptic efficacy on the difference τ between the two spike times may be characterized by a kernel which we denote A(τ) [4]. For hippocampal pyramidal neurons, the half-width of this kernel is around 20 ms. Many important neural structures, notably the hippocampus and the olfactory cortex, exhibit oscillatory activity in the 20-50 Hz range. Here the temporal variation of the neuronal firing can clearly affect the synaptic dynamics, and vice versa.
In this paper we study a simple model for learning oscillatory patterns, based on the structure of the kernel A(τ) and other known physiology of these areas. We will assume that these synaptic changes in long-range lateral connections are driven by oscillatory, patterned input to a network that initially has only local synaptic connections. The result is an imprinting of the oscillatory patterns in the synapses, such that subsequent input of a similar pattern will evoke a strong resonant response. It can be viewed as a generalization, to oscillatory networks with spike-timing-dependent learning, of the standard scenario whereby stationary patterns are stored in Hopfield networks using the conventional Hebb rule.

2 Model

The computational neurons of the model represent local populations of biological neurons that share common input. They follow the equations of motion [5]

u̇_i = −α u_i − β_i^0 g_v(v_i) + Σ_j J_ij^0 g_u(u_j) + I_i,   (1)

v̇_i = −α v_i + γ_i^0 g_u(u_i) + Σ_{j≠i} W_ij^0 g_u(u_j).   (2)

Here u_i and v_i are membrane potentials for excitatory and inhibitory (formal) neuron i, α^{-1} is their membrane time constant, and the sigmoidal functions g_u( ) and g_v( ) model the dependence of their outputs (interpreted as instantaneous firing rates) on their membrane potentials. The couplings β_i^0 and γ_i^0 are the inhibitory-to-excitatory (resp. excitatory-to-inhibitory) connection strengths within local excitatory-inhibitory pairs, and for simplicity we take the external drive I_i(t) to act only on the excitatory units. We include nonlocal excitatory couplings J_ij^0 between excitatory units and W_ij^0 from excitatory units to inhibitory ones. In this minimal model, we ignore long-range inhibitory couplings, appealing to the fact that real anatomical inhibitory connections are predominantly short-ranged. (In what follows, we will sometimes use bold and sans-serif notation (e.g., u, J) for vectors and matrices, respectively.) The structure of the couplings is shown in Fig. 1A.
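A minimal numerical sketch of Eqs. 1-2, with made-up parameter values and tanh standing in for the sigmoidal rate functions g_u, g_v, shows a single uncoupled excitatory-inhibitory pair relaxing to a stable fixed point under a constant drive:

```python
import math

# Illustrative parameters only; tanh stands in for the sigmoidal g_u, g_v.
alpha, beta0, gamma0, I = 1.0, 2.0, 2.0, 1.0
g = math.tanh

def simulate(T=200.0, dt=1e-2):
    """Euler-integrate one local E-I pair (Eqs. 1-2 with J0 = W0 = 0)."""
    u = v = 0.0
    for _ in range(int(T / dt)):
        du = -alpha * u - beta0 * g(v) + I
        dv = -alpha * v + gamma0 * g(u)
        u, v = u + dt * du, v + dt * dv
    return u, v

u_fp, v_fp = simulate()
```

Because the linearization around the fixed point has eigenvalues with real part −α < 0, the pair executes damped oscillations toward (ū, v̄); both time derivatives vanish at the returned state.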
The model is nonlinear, but here we will limit our treatment to an analysis of small oscillations around a stable fixed point (ū, v̄) determined by the DC part of the input. Performing the linearization and eliminating the inhibitory units [6, 5], we obtain

ü + [2α − J] u̇ + [α² + β(γ + W) − αJ] u = (∂_t + α) δI.   (3)

Here u is now measured from the fixed point ū, δI is the time-varying part of the input, and the elements of J and W are related to those of J^0 and W^0 by W_ij = g_u′(ū_j) W_ij^0 and J_ij = g_u′(ū_j) J_ij^0. For simplicity, we have assumed that the effective local couplings β_i = g_v′(v̄_i) β_i^0 and γ_i = g_u′(ū_i) γ_i^0 are independent of i: β_i = β, γ_i = γ. With oscillatory inputs δI = ξ e^{−iωt} + c.c., the oscillatory pattern elements ξ_i = |ξ_i| e^{−iφ_i} are complex, reflecting possible phase differences across the units. We likewise separate the response u = u^+ + u^− (after the initial transients) into positive- and negative-frequency components u^± (with u^− = u^{+*} and u^± ∝ e^{∓iωt}). Since u̇^± = ∓iω u^±, Eqn. (3) can be written

[2α ± (i/ω)(α² + βγ − ω²)] u^± = M^± u^± + (1 ± iα/ω) δI^±,   (4)

a form that shows how the matrix

M^±(ω) ≡ J ∓ (i/ω)(βW − αJ)   (5)

describes the effective coupling between local oscillators; 2α is the intrinsic damping and √(α² + βγ) the frequency of the individual oscillators.

Figure 1: A. The model: in addition to the local excitatory-inhibitory connections (vertical solid lines), there are nonlocal long-range connections (dashed lines) between excitatory units (J_ij) and from excitatory to inhibitory units (W_ij). External inputs are fed to the excitatory units. B: Activation functions used in the simulations for excitatory units (B.1) and inhibitory units (B.2). Crosses mark the equilibrium point (ū, v̄) of the system.
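Before any learning (J = W = 0), Eq. 3 makes each unit a damped oscillator: the steady-state amplitude of the response to a unit drive at frequency ω is |ω + iα| / |2αω + i(α² + βγ − ω²)|. A short numeric scan with illustrative parameter values confirms that the amplitude peaks at the natural frequency √(α² + βγ):

```python
import math

alpha, betagamma = 0.5, 1600.0          # illustrative values, not from the paper
omega_n = math.sqrt(alpha**2 + betagamma)

def gain(omega):
    """Steady-state amplitude of Eq. 3 with J = W = 0, unit drive at frequency omega."""
    num = abs(complex(omega, alpha))
    den = abs(complex(2 * alpha * omega, alpha**2 + betagamma - omega**2))
    return num / den

# Scan around the natural frequency and locate the peak.
omegas = [omega_n * (0.5 + 0.01 * k) for k in range(101)]
peak = max(omegas, key=gain)
```

With small damping (α much less than ω_n), the resonance sits essentially at ω_n, which is the "frequency of the individual oscillators" referred to above.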
2.1 Learning phase

We employ a generalized Hebb rule of the form

δC_ij(t) = η ∫_0^T dt ∫_{−∞}^{∞} dτ y_i(t + τ) A(τ) x_j(t)   (6)

for the synaptic weight C_ij, where x_j and y_i are the pre- and postsynaptic activities, measured relative to the stationary levels at which no changes in synaptic strength occur. We consider a general kernel A(τ), although experimentally A(τ) > 0 (< 0) for τ > 0 (< 0). We will apply the rule to both J and W in our linearized network, where the firing rates g_u(u_i) and g_v(v_i) vary linearly with u_i and v_i, so we will use Eqn. (6) with x_j = u_j and y_i = u_i or v_i (measured from the fixed point v̄_i), respectively. We assume oscillatory input δI = ξ^0 e^{−iω_0 t} + c.c. during learning. In the brain structures we are modeling, cholinergic modulation makes the long-range connections ineffective during learning [7]. Thus we set J = W = 0 in Eqn. (3) and find

u_i^+ = (ω_0 + iα) ξ_i^0 e^{−iω_0 t} / [2αω_0 + i(α² + βγ − ω_0²)] ≡ U_0 ξ_i^0 e^{−iω_0 t},   (7)

and, from (∂_t + α) v_i = γ u_i,

v_i^+ = γ u_i^+ / (α − iω_0).   (8)

Using these in the learning rule (6) leads to

J_ij = J_0 Re[Ã(ω_0) ξ_i^0 ξ_j^{0*}],   W_ij = (η_W/η_J) J_0 Re[γ Ã(ω_0) ξ_i^0 ξ_j^{0*} / (α − iω_0)],   (9)

where Ã(ω) = ∫_{−∞}^{∞} dτ A(τ) e^{−iωτ} is the Fourier transform of A(τ), J_0 ≡ 2πη_J |U_0|²/ω_0, and η_J, η_W are the respective learning rates. When the rates are tuned such that η_J = η_W γβ/(α² + ω_0²) and when ω = ω_0, we have M_ij^+ = J_0 Ã(ω_0) ξ_i^0 ξ_j^{0*}, a generalization of the outer-product learning rule to the complex patterns ξ^μ from the Hopfield-Hebb form for real-valued patterns. For learning multiple patterns ξ^μ, μ = 1, 2, ..., the learned weights are simply sums of contributions from individual patterns like Eqns. (9), with ξ^0 replaced by ξ^μ.

2.2 Recall phase

We return to the single-pattern problem and study the simple case when η_J = η_W γβ/(α² + ω_0²). Consider first an input pattern δI = ξ e^{−iωt} + c.c. that matches the stored pattern exactly (ξ = ξ^0), but possibly oscillating at a different frequency. We then find, using Eqns. (9) in Eqn.
(3), the (positive-frequency) response

u^+ = (ω + iα) ξ^0 e^{−iωt} / { 2αω − ½J_0(ω + ω_0)Ã′(ω_0) + i[α² + βγ − ½J_0(ω + ω_0)Ã″(ω_0) − ω²] },   (10)

where Ã′(ω_0) ≡ Re Ã(ω_0) and Ã″(ω_0) ≡ Im Ã(ω_0). For a strong response at ω = ω_0, we require

ω_0² = α² + βγ − J_0 ω_0 Ã″(ω_0),   2α − J_0 Ã′(ω_0) ≈ 0.   (11)

This means that (1) the resonance frequency ω_0 is determined by Ã″, (2) the effective damping 2α − J_0Ã′ should be small, and (3) deviation of ω from ω_0 reduces the response. It is instructive to consider the case where the width of the time window for synaptic change is small compared with the oscillation period. Then we can expand Ã(ω_0) in ω_0:

Ã(ω_0) ≈ a_0 − i a_1 ω_0,  with a_0 = ∫ A(τ) dτ and a_1 = ∫ τ A(τ) dτ.   (12)

In particular, A(τ) = δ(τ) gives a_0 = 1 and a_1 = 0 and the conventional Hebbian learning [5]. Experimentally, a_1 > 0, implying a resonant frequency greater than the intrinsic local frequency √(α² + βγ) obtained in the absence of long-range coupling. If the drive ξ does not match the stored pattern (in phase and amplitude), the response will consist of two terms. The first has the form of Eqn. (10) but reduced in amplitude by an overlap factor ξ^{0*} · ξ. (For convenience we use normalized pattern vectors.) The second term is proportional to the part of ξ orthogonal to the stored pattern. The J and W matrices do not act in this subspace, so the frequency dependence of this term is just that of the uncoupled oscillators, i.e., Eqn. (10) with J_0 set equal to zero. This response is always highly damped and therefore small. It is straightforward to extend this analysis to multiple imprinted patterns. The response consists of a sum of terms, one for each stored pattern. The term for each stored pattern is just like that described in the single-stored-pattern case: it has one part for the input component parallel to the stored pattern and another part for the component orthogonal to the stored pattern.
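Plugging the small-window expansion (12) into the response amplitude of Eq. 10 gives a concrete picture of the tuning curve. The sketch below, with all parameter values invented purely for illustration, checks that the learned coupling sharply amplifies the response at ω = ω_0 relative to the uncoupled case, and that detuning suppresses it:

```python
import math

# Invented illustrative parameters: with a1 = 0 the resonance condition (Eq. 11)
# places omega_0 at sqrt(alpha^2 + beta*gamma); J0 leaves a small effective
# damping 2*alpha - J0*a0 = 0.2.
alpha, betagamma = 1.0, 1599.0
omega0 = math.sqrt(alpha**2 + betagamma)
a0, a1 = 1.0, 0.0
J0 = 1.8

def response(omega, J0):
    """Amplitude of Eq. 10, with A'(w0) = a0 and A''(w0) = -a1*w0 from Eq. 12."""
    Ap, App = a0, -a1 * omega0
    re = 2 * alpha * omega - 0.5 * J0 * (omega + omega0) * Ap
    im = alpha**2 + betagamma - 0.5 * J0 * (omega + omega0) * App - omega**2
    return abs(complex(omega, alpha)) / abs(complex(re, im))

on_res = response(omega0, J0)          # learned coupling, matched frequency
uncoupled = response(omega0, 0.0)      # J0 = 0, i.e. no imprinting
detuned = response(0.9 * omega0, J0)   # learned coupling, mismatched frequency
```

On resonance the learned term cancels most of the damping, so the matched response dominates both the uncoupled response and the detuned one, which is the frequency selectivity described above.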
We note that, in this linear analysis, an input which overlaps several stored patterns will (if the imprinting and input frequencies match) evoke a resonant response which is a linear combination of the stored patterns. Thus, a network tuned to operate in a nearly linear regime is able to interpolate in forming its representation of the input. For categorical associative memory, on the other hand, a network has to work in the extreme nonlinear limit, responding with only the strongest stored pattern in an input mixture. As our network operates near the threshold for spontaneous oscillations, we expect that it should exhibit properties intermediate between these limits. We find that this is indeed the case in the simulations reported in the next section.

Figure 2: Circles show nonlinear simulation results, stars show linear simulation results, and the dotted line shows the analytical prediction for the linearized model. A. Importance of frequency match: amplitude of the response of the output units as a function of the frequency of the current input; the frequency of the imprinted pattern is 41 Hz. B. Importance of amplitude and phase mismatch: amplitude of the response as a function of the overlap between the current input and the imprinted pattern (i.e., |ξ^{0*} · ξ|), for different presented input patterns ξ. C. Input-output relationship when two orthogonal patterns ξ^1 and ξ^2 have been imprinted at the same frequency ω = 41 Hz: the angle of the input pattern with respect to ξ^1 is shown as a function of the angle of the output pattern with respect to ξ^1, for many different input patterns.
From our analysis it turns out that the network behaves like a Hopfield memory (separate basins, without interpolation capability) for patterns with different imprinting frequencies, but at the same time it is able to interpolate among patterns which share a common frequency.

3 Simulations

To check the validity of the linear approximation made in our analysis, we performed numerical simulations of both the nonlinear equations (1, 2) and the linearized ones (3). We simulated the recall phase of a network consisting of 10 excitatory and 10 inhibitory cells. The connections J_ij and W_ij were calculated from Eqns. (9), where we used the approximation (12) for the kernel shape A(τ). Parameters were set in such a way that the selective resonance was in the 40-Hz range. In the nonlinear simulations we used different piecewise-linear activation functions for g_u() and g_v(), as shown in Fig. 1B. We chose the parameters of the functions g_u() and g_v() so that the network equilibrium points (ū_i, v̄_i) were close to, but below, the high-gain region, i.e. at the points marked with crosses in Fig. 1B. The results confirm that when the input pattern matches the imprinted one in frequency, amplitude and phase, the network responds with strong resonant oscillations. However, it does not resonate if the frequencies do not match, as shown in the frequency tuning curve in Fig. 2A. The behavior when the two frequencies are close to each other differs in the linear and nonlinear cases; in both cases, however, a sharp selectivity in frequency is observed. The dependence on the overlap between the input and the stored pattern is shown in Fig. 2B. The nonlinear case, indicated by circles, should be compared with the linear case, where the amplitude is always linear in the overlap. In the nonlinear case, the linearity holds roughly only for overlaps lower than about 0.4; for larger overlaps the amplification is as high as for the perfect-match case.
This means that input patterns with an overlap with the imprinted one greater than 0.4 lie within the basin of attraction of the imprinted pattern.

Figure 3: Frequency selectivity: response evoked on 3 of the 10 neurons. Oscillatory patterns ξ¹e^(−iω₁t) + c.c. and ξ²e^(−iω₂t) + c.c. have been imprinted, with ξ¹ ⊥ ξ² and ω₁ = 41 Hz, ω₂ = 63 Hz. During the learning phases the parameter a₁ of the kernel was tuned appropriately, i.e. a₁ = 0.1 when imprinting ξ¹ and a₁ = 1.1 when imprinting ξ².

The response elicited when two orthogonal patterns have been imprinted with the same frequency is shown in Fig. 2C. Let ξ¹e^(−iω₀t) + c.c. and ξ²e^(−iω₀t) + c.c. denote the imprinted patterns, and ξe^(−iω₀t) + c.c. be the input to the trained network. In both linear and non-linear simulations the network responds vigorously (with high-amplitude oscillations) to the drive if ξ is in the subspace spanned by the imprinted patterns, and fails to respond appreciably if ξ is orthogonal to that plane. When the input pattern ξ is in the plane spanned by the stored patterns, the resonant response u also lies in this plane. However, while in the linear case the output is proportional to the input, in agreement with the analytical analysis, in the nonlinear case there are preferred directions in the stored pattern plane. The figure shows that, in the case simulated here, there are three stable attractors: ξ¹, ξ², and the symmetric linear combination (ξ¹ + ξ²)/√2. Finally, we performed linear simulations storing two orthogonal patterns ξ¹e^(−iω₁t) + c.c. and ξ²e^(−iω₂t) + c.c. with two different imprinting frequencies. Fig. 3 shows good performance of the network in separating the basins of attraction in this case. The response to a linear combination of the two patterns, (aξ¹ + bξ²)e^(−iω₂t) + c.c.,
is proportional to the part of the input whose imprinting frequency matches the current driving frequency. Linear combinations of the two imprinted patterns are not attractors if the two patterns do not share the same imprinting frequency. 4 Summary and Discussion We have presented a model of learning for memory or input representations in neural networks with input-driven oscillatory activity. The model structure is an abstraction of the hippocampus or the olfactory cortex. We propose a simple generalized Hebbian rule, using temporal-activity-dependent LTP and LTD, to encode both magnitudes and phases of oscillatory patterns into the synapses in the network. After learning, the model responds resonantly to inputs which have been learned (or, for networks which operate essentially linearly, to linear combinations of learned inputs), but negligibly to other input patterns. Encoding both amplitude and phase enhances computational capacity, for which the price is having to learn both the excitatory-to-excitatory and the excitatory-to-inhibitory connections. Our model puts constraints on the form of the learning kernel A(τ) that should be experimentally observable, e.g., for small oscillation frequencies, it requires that the overall LTP dominates the overall LTD, but this requirement should be modified if the stored oscillations are of high frequencies. Plasticity in the excitatory-to-inhibitory connections (for which experimental evidence and investigation is still scarce) is required by our model for storing phase-locked but non-synchronous oscillation patterns. As for the Hopfield model, we distinguish two functional phases: (1) the learning phase, in which the system is clamped dynamically to the external inputs, and (2) the recall phase, in which the system dynamics is determined by both the external inputs and the internal interactions.
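The linear-regime behavior summarized above is plain superposition: any linear recall operator maps a combination of stored patterns to the same combination of their responses. A minimal sketch (M is an arbitrary stand-in for the network's linearized input-output map, not derived from the model equations):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
# Hypothetical linear response operator of a trained network; any fixed
# matrix serves to illustrate the superposition/interpolation property.
M = rng.normal(size=(n, n))

xi1, xi2 = rng.normal(size=n), rng.normal(size=n)   # two "stored" patterns
a, b = 0.3, 0.7

resp = lambda x: M @ x                 # linear regime: response is linear in input
combined = resp(a * xi1 + b * xi2)     # response to an interpolated input
superposed = a * resp(xi1) + b * resp(xi2)
print(np.allclose(combined, superposed))
```

Any input in the span of the stored patterns therefore evokes a correspondingly interpolated response, which is exactly the property discussed next.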
A special property of our model in the linear regime is the following interpolation capability: under a given oscillation frequency, once the system has learned a set of representation states, all other states in the subspace spanned by the learned states can also evoke vigorous responses. Hippocampal place cells could employ such a representation. Each cell has a localised "place field", and the superposition of activity of several cells with nearby place fields can represent continuously-varying position. The locality of the place fields also means that this representation is conservative (and thus robust), in the sense that interpolation does not extend beyond the spatial range of the experienced locations or to locations in between two learned but distant and disjoint spatial regions. Of course, this interpolation property is not always desirable. For instance, in categorical memory, one does not want inputs which are linear combinations of stored patterns to elicit responses which are also similar linear combinations. Suitable nonlinearity can (as we saw in the last section) enable the system to perform categorization: one way involves storing different patterns (or, by implication, different classes of patterns) at different frequencies. For instance, in a multimodal area, "place fields" might be stored at one oscillation frequency, and (say) odor memories at another. It seems likely to us that the brain may employ different kinds and degrees of nonlinearity in different areas or at different times to enhance the versatility of its computations.
2000
On Reversing Jensen's Inequality

Tony Jebara, MIT Media Lab, Cambridge, MA 02139, jebara@media.mit.edu
Alex Pentland, MIT Media Lab, Cambridge, MA 02139, sandy@media.mit.edu

Abstract
Jensen's inequality is a powerful mathematical tool and one of the workhorses in statistical learning. Its applications therein include the EM algorithm, Bayesian estimation and Bayesian inference. Jensen computes simple lower bounds on otherwise intractable quantities such as products of sums and latent log-likelihoods. This simplification then permits operations like integration and maximization. Quite often (i.e. in discriminative learning) upper bounds are needed as well. We derive and prove an efficient analytic inequality that provides such variational upper bounds. This inequality holds for latent variable mixtures of exponential family distributions and thus spans a wide range of contemporary statistical models. We also discuss applications of the upper bounds including maximum conditional likelihood, large margin discriminative models and conditional Bayesian inference. Convergence, efficiency and prediction results are shown.¹

1 Introduction
Statistical model estimation and inference often require the maximization, evaluation, and integration of complicated mathematical expressions. One approach for simplifying the computations is to find and manipulate variational upper and lower bounds instead of the expressions themselves. A prominent tool for computing such bounds is Jensen's inequality, which subsumes many information-theoretic bounds (cf. Cover and Thomas 1996). In maximum likelihood (ML) estimation under incomplete data, Jensen is used to derive an iterative EM algorithm [2]. For graphical models, intractable inference and estimation is performed via variational bounds [7]. Bayesian integration also uses Jensen and EM-like bounds to compute integrals that are otherwise intractable [9].
Recently, however, the learning community has seen the proliferation of conditional or discriminative criteria. These include support vector machines, maximum entropy discrimination distributions [4], and discriminative HMMs [3]. These criteria allocate resources with the given task (classification or regression) in mind, yielding improved performance. In contrast, under canonical ML each density is trained separately to describe observations rather than optimize classification or regression. Therefore performance is compromised.

¹This is the short version of the paper. Please download the long version with tighter bounds, detailed proofs, more results, important extensions and sample Matlab code from: http://www.media.mit.edu/~jebara/bounds

Computationally, what differentiates these criteria from ML is that they not only require Jensen-type lower bounds but may also utilize the corresponding upper bounds. The Jensen bounds only partially simplify their expressions and some intractabilities remain. For instance, latent distributions need to be bounded above and below in a discriminative setting [4] [3]. Metaphorically, discriminative learning requires lower bounds to cluster positive examples and upper bounds to repel away from negative ones. We derive these complementary upper bounds², which are useful for discriminative classification and regression. These bounds are structurally similar to Jensen bounds, allowing easy migration of ML techniques to discriminative settings. This paper is organized as follows: We introduce the probabilistic models we will use: mixtures of the exponential family. We then describe some estimation criteria on these models which are intractable. One simplification is to lower bound via Jensen's inequality or EM. The reverse upper bound is then derived. We show implementation and results of the bounds in applications (i.e. conditional maximum likelihood (CML)). Finally, a strict algebraic proof is given to validate the reverse-bound.
2 The Exponential Family
We restrict the reverse-Jensen bounds to mixtures of the exponential family (e-family). In practice this class of densities covers a very large portion of contemporary statistical models. Mixtures of the e-family include Gaussian Mixture Models, Multinomials, Poisson, Hidden Markov Models, Sigmoidal Belief Networks, Discrete Bayesian Networks, etc. [1] The e-family has the following form:

p(X|Θ) = exp(A(X) + XᵀΘ − K(Θ))

Distribution   A(X)
Gaussian       −½ XᵀX − (D/2) log(2π)
Multinomial    0

Here, K(Θ) is convex in Θ, a multi-dimensional parameter vector. Typically the data vector X is constrained to live in the gradient space of K, i.e. X ∈ {∂K(Θ)/∂Θ}. The e-family has special properties (i.e. conjugates, convexity, linearity, etc.) [1]. The reverse-Jensen bound also exploits these intrinsic properties. The table above lists example A functions for Gaussian and multinomial distributions. More generally, though, we will deal with mixtures of the e-family (where m represents the incomplete data³), i.e.:

p(X|Θ) = Σ_m p(m, X|Θ) = Σ_m α_m exp(A_m(X_m) + X_mᵀΘ_m − K_m(Θ_m))

These latent probability distributions need to be maximized, integrated, marginalized, conditioned, etc. to solve various inference, prediction, and parameter estimation tasks. However, such manipulations can be difficult or intractable.

3 Conditional and Discriminative Criteria
The combination of ML with EM and Jensen has indeed produced straightforward and monotonically convergent estimation procedures for mixtures of the e-family [2] [1] [7]. However, ML criteria are non-discriminative modeling techniques for estimating generative models. Consequently, they suffer when model assumptions are inaccurate.

²A weaker bound for Gaussian mixture regression appears in [6]. Other reverse-bounds are in [8]. ³Note we use Θ to denote an aggregate model encompassing all individual Θ_m ∀m.
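For the Gaussian row of the table (unit variance, D = 1, so K(Θ) = ½ΘᵀΘ), the e-family form can be checked numerically against the familiar density; the function names here are illustrative:

```python
import numpy as np

def efam_gauss(x, theta):
    # p(x|theta) = exp(A(x) + x*theta - K(theta)) for a unit-variance Gaussian
    A = -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
    K = 0.5 * theta**2
    return np.exp(A + x * theta - K)

def gauss_pdf(x, mean):
    # Standard N(mean, 1) density for comparison
    return np.exp(-0.5 * (x - mean) ** 2) / np.sqrt(2 * np.pi)

x, theta = 1.3, -0.4
print(np.isclose(efam_gauss(x, theta), gauss_pdf(x, theta)))
```

The gradient-space constraint is also visible here: K'(θ) = θ is the mean, so the data X ranges over exactly the values the gradient of K can take.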
Figure 1: ML vs. CML (thick Gaussians represent circles, thin ones represent x's). ML Classifier: l = −8.0, l_c = −1.7; CML Classifier: l = −54.7, l_c = 0.4.

For visualization, observe the binary classification⁴ problem above. Here, our model incorrectly has 2 Gaussians (identity covariances) per class but the true data is generated from 8 Gaussians. Two solutions are shown, ML and CML. Note the values of joint log-likelihood l and conditional log-likelihood l_c. The ML solution performs as well as random chance guessing while CML classifies the data very well. Thus CML, in estimating a conditional density, propagates the classification task into the estimation criterion. In such examples, we are given training examples X_i and corresponding binary labels c_i to classify with a latent variable e-family model (mixture of Gaussians). We use m to represent the latent missing variables. The corresponding objective functions, log-likelihood l and conditional log-likelihood l_c, are:

l = Σ_i log Σ_m p(m, c_i, X_i|Θ)
l_c = Σ_i log p(c_i|X_i, Θ) = Σ_i log Σ_m p(m, c_i, X_i|Θ) − log Σ_m Σ_c p(m, c, X_i|Θ)

The classification and regression task can be even more powerfully exploited in the case of discriminative (or large-margin) estimation [4] [5]. Here, hard constraints are posed on a discriminant function L(X|Θ), the log-ratio of each class' latent likelihood. Prediction of class labels is done via the sign of the function, c = sign L(X|Θ).

L(X|Θ) = log [p(X|Θ₊)/p(X|Θ₋)] = log Σ_m p(m, X|Θ₊) − log Σ_m p(m, X|Θ₋)   (1)

In the above log-likelihoods and discriminant functions we note logarithms of sums (latent likelihood is basically a product of sums) which cause intractabilities. For instance, it is difficult to maximize or integrate the above log-sum quantities.
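Evaluating (as opposed to maximizing) l and l_c only needs a stable log-sum-exp. A toy sketch with two classes, each a two-component unit-variance Gaussian mixture; all parameter values are hypothetical:

```python
import numpy as np

def lse(v):
    # numerically stable log-sum-exp
    m = np.max(v)
    return m + np.log(np.sum(np.exp(v - m)))

# Hypothetical model: mus[c][m] is the mean of component m of class c;
# class priors and component priors are uniform.
mus = {0: [-2.0, -1.0], 1: [1.0, 2.0]}
log_prior_c, log_prior_m = np.log(0.5), np.log(0.5)

def log_joint(x, c):
    # log sum_m p(m, c, x | Theta)
    comps = [log_prior_c + log_prior_m
             - 0.5 * (x - mu) ** 2 - 0.5 * np.log(2 * np.pi)
             for mu in mus[c]]
    return lse(np.array(comps))

X = [-1.5, 0.5, 1.8]
C = [0, 0, 1]

l = sum(log_joint(x, c) for x, c in zip(X, C))                       # joint
lc = sum(log_joint(x, c) - lse(np.array([log_joint(x, k) for k in mus]))
         for x, c in zip(X, C))                                       # conditional
print(l < lc < 0)
```

The conditional objective subtracts the class-marginal log-sum, so it is never below the joint one here and is always non-positive, which matches the signs of l and l_c reported in Figure 1.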
Thus, we need to invoke simplifying bounds.

4 Jensen and EM Bounds
Recall the definition of Jensen's inequality: f(E{X}) ≥ E{f(X)} for concave f. The log-summations in l, l_c, and L(X|Θ) all involve a concave f = log around an expectation, i.e. a log-sum or probabilistic mixture over latent variables. We apply Jensen as follows:

log Σ_m p(m, X|Θ) = log Σ_m α_m exp(A_m(X_m) + X_mᵀΘ_m − K_m(Θ_m))
  ≥ Σ_m [p(m, X|Θ̃) / Σ_n p(n, X|Θ̃)] (log p(m, X|Θ) + log [Σ_n p(n, X|Θ̃) / p(m, X|Θ̃)])
  = Σ_m h_m (X_mᵀΘ_m − K_m(Θ_m)) + C

Above, we have also expanded the bound in the e-family notation. This forms a variational lower bound on the log-sum which makes tangential contact with it at Θ̃ and is much easier to manipulate.⁴ Basically, the log-sum becomes a sum of log-exponential family members. There is an additive constant term C, and the positive scalar h_m terms (the responsibilities) are given by the terms in the square brackets (here, brackets are for grouping terms and are not operators). These quantities are relatively straightforward to compute. We only require local evaluations of log-sum values at the current Θ̃ to compute a global lower bound. If we bound all log-sums in the log-likelihood, we have a lower bound on the objective l which we can maximize easily. Iterating maximization and lower bound computation at the new Θ̃ produces a local maximum of log-likelihood, as in EM. However, applying Jensen on log-sums in l_c and L(X|Θ) is not as straightforward. Some terms in these expressions involve negative log-sums, so Jensen is actually solving for an upper bound on those terms. If we want overall lower and upper bounds on l_c and L(X|Θ), we need to compute reverse-Jensen bounds.

⁴These derivations extend to multi-class classification and regression as well.

5 Reverse-Jensen Bounds
It seems strange that we can reverse Jensen (i.e. f(E{X}) ≤ E{f(X)}), but it is possible. We need to exploit the convexity of the K functions in the e-family instead of exploiting the concavity of f = log.
However, not only does the reverse bound have to upper-bound the log-sum, it should also have the same form as the Jensen bound above, i.e. a sum of log-exponential family terms. That way, upper and lower bounds can be combined homogeneously and ML tools can be quickly adapted to the new bounds. We thus need:

log Σ_m α_m exp(A_m(X_m) + X_mᵀΘ_m − K_m(Θ_m)) ≤ Σ_m w_m (Y_mᵀΘ_m − K_m(Θ_m)) + k   (2)

Here, we give the parameters of the bound directly; refer to the proof at the end of the paper for their algebraic derivation. This bound again makes tangential contact at Θ̃ yet is an upper bound on the log-sum:⁵

k = log p(X|Θ̃) − Σ_m w_m (Y_mᵀΘ̃_m − K_m(Θ̃_m))
Y_m = (h_m/w_m) (∂K(Θ̃_m)/∂Θ_m − X_m) + ∂K(Θ̃_m)/∂Θ_m
w_m = min w'_m such that (h_m/w'_m) (∂K(Θ̃_m)/∂Θ_m − X_m) + ∂K(Θ̃_m)/∂Θ_m ∈ {∂K(Θ_m)/∂Θ_m}

This bound effectively reweights (w_m) and translates (Y_m) incomplete data to obtain complete data. Tighter bounds are possible (i.e. smaller w_m) which also depend on the h_m terms (see web page). The first condition requires that the w'_m generate a valid Y_m that lives in the gradient space of the K functions (a typical e-family constraint). Thus, from local computations of the log-sum's values, gradients and Hessians at the current Θ̃, we can compute global upper bounds.

6 Applications and Results
In Fig. 2 we plot the bounds for a two-component unidimensional Gaussian mixture model and a two-component binomial (unidimensional multinomial) mixture model. The Jensen-type bounds as well as the reverse-Jensen bounds are shown at various configurations of Θ and X. Jensen bounds are usually tighter, but this is inevitable due to the intrinsic shape of the log-sum. In addition to viewing many such 2D visualizations, we computed higher dimensional bounds and sampled them extensively, empirically verifying that the reverse-Jensen bound remained above the log-sum. Below we describe practical uses of this new reverse-bound.
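The tangential-contact construction is easy to verify numerically for the Jensen/EM lower bound of Section 4 (the reverse bound's w_m requires the case-by-case minimization above and is not reproduced here). A sketch for a two-component unit-variance Gaussian mixture with hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
x, alphas = 0.7, np.array([0.4, 0.6])
A = -0.5 * x**2 - 0.5 * np.log(2 * np.pi)    # unit-variance Gaussian A(x)
K = lambda th: 0.5 * th**2                   # cumulant function K(theta)

def logsum(thetas):
    # log sum_m alpha_m exp(A + x*theta_m - K(theta_m))
    v = np.log(alphas) + A + x * thetas - K(thetas)
    m = v.max()
    return m + np.log(np.exp(v - m).sum())

tht = np.array([-1.0, 2.0])                  # current estimate (Theta tilde)
v = np.log(alphas) + A + x * tht - K(tht)
h = np.exp(v - v.max()); h /= h.sum()        # responsibilities h_m
C = logsum(tht) - h @ (x * tht - K(tht))     # additive constant of the bound
bound = lambda th: h @ (x * th - K(th)) + C

assert np.isclose(bound(tht), logsum(tht))   # tangential contact at Theta tilde
for th in rng.normal(size=(200, 2)):
    assert bound(th) <= logsum(th) + 1e-9    # global lower bound
print("Jensen/EM bound verified")
```

The upper bound of Eq. 2 has the mirror-image structure: the same tangency condition fixes k once Y_m and w_m are chosen as above.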
⁵We can also find multinomial bounds on the α priors jointly with the Θ parameters.

Figure 2: Jensen (black) and reverse-Jensen (white) bounds on the log-sum (gray). (a) Gaussian case; (b) multinomial case.

6.1 Conditional Maximum Likelihood
The inequalities above were used to fully lower bound l_c, and the bound was maximized iteratively. This is like the CEM algorithm [6], except the new bounds handle the whole e-family (i.e. a generalized CEM). The synthetic Gaussian mixture model problem portrayed in Fig. 1 was implemented. Both ML and CML estimators (with reverse-bounds) were initialized in the same random configuration and maximized. The Gaussians converged as in Fig. 1. CML classification accuracy was 93% while ML obtained 59%. Panel (A) depicts the convergence of l_c per iteration under CML (top line) and ML (bottom line). Similarly, we computed multinomial models for 3-class data of 60 base-pair protein chains in panel (B). Computationally, utilizing both Jensen and reverse-Jensen bounds for optimizing CML needs double the processing of ML using EM. For example, we estimated 2 classes of mixtures of multinomials (5-way mixture) from 40 10-dimensional data points. In non-optimized Matlab code, ML took 0.57 seconds per epoch while CML took 1.27 seconds due to extra bound computations. Thus, efficiency is close to EM for practical problems. Complexity per epoch scales roughly linearly with sample size, dimensions and number of latent variables.

6.2 Conditional Variational Bayesian Inference
In [9], Bayesian integration methods were demonstrated on latent-variable models by invoking Jensen-type lower bounds on the integrals of interest. A similar technique can be used to approximate conditional Bayesian integration.
Traditionally, we compute the joint Bayesian integral from (X, Y) data as p(X, Y) = ∫ p(X, Y|Θ) p(Θ|X, Y) dΘ and condition it to obtain p(Y|X)ʲ (the superscript indicates we initially estimated a joint density). Alternatively, we can compute the conditional Bayesian integral directly. The corresponding dependency graphs (Fig. 3 (b) and (c)) depict the differences between joint and conditional estimation. The conditional Bayesian integral exploits the graph's factorization to solve for p(Y|X)ᶜ:

p(Y|X)ᶜ = ∫ p(Y|X, Θᶜ) p(Θᶜ|X, Y) dΘᶜ

Jensen and reverse-Jensen bound the terms to permit analytic integration. Iterating this process efficiently converges to an approximation of the true integral. We also exhaustively solved both Bayesian integrals exactly for a 2-Gaussian mixture model on 4 data points. Fig. 3 shows the data and densities. In Fig. 3(d) joint and conditional estimates are inconsistent under Bayesian integration (i.e. p(Y|X)ᶜ ≠ p(Y|X)ʲ).

Figure 3: Conditioned joint and conditional Bayesian estimates. (a) Data; (b) conditioned joint; (c) direct conditional; (d) inconsistency.

6.3 Maximum Entropy Discrimination
Recently, Maximum Entropy Discrimination (MED) was proposed as an alternative criterion for estimating discriminative exponential densities [4] [5] and was shown to subsume SVMs. The technique integrates over discriminant functions like Eq. 1, but this is intractable under latent variable situations. However, if Jensen and reverse-Jensen bounds are used, the required computations can be done. This permits iterative MED solutions to obtain large margin mixture models and mixtures of SVMs (see web page).

7 Discussion
We derived and proved an upper bound on the log-sum of e-family distributions that acts as the reverse of the Jensen lower bound. This tool has applications in conditional and discriminative learning for latent variable models.
For further results, extensions, etc. see: http://www.media.mit.edu/~jebara/bounds.

8 Proof
Starting from Eq. 2, we directly compute k and Y_m by ensuring the variational bound makes tangential contact with the log-sum at Θ̃ (i.e. making their values and gradients equal). Substituting k and Y_m into Eq. 2, we get constraints on w_m via Bregman distances. Define

F_m(Θ_m) = K(Θ_m) − K(Θ̃_m) − (Θ_m − Θ̃_m)ᵀ K'(Θ̃_m).

The F functions are convex and have a minimum (which is zero) at Θ̃_m. Replace the K functions with F; here, D_m are constants and Z_m = X_m − K'(Θ̃_m). Next, define a mapping from these bowl-shaped functions to quadratics:

F_m(Θ_m) = G_m(Φ_m) = ½ (Φ_m − Θ̃_m)ᵀ (Φ_m − Θ̃_m).

This permits us to rewrite Eq. 2 in terms of Φ:

Σ_m w_m G(Φ_m) ≥ log [Σ_m exp{D_m + Θ_m(Φ_m)ᵀ Z_m − G(Φ_m)} / Σ_m exp{D_m + Θ̃_mᵀ Z_m − G(Θ̃_m)}] − Σ_m h_m (Θ_m(Φ_m) − Θ̃_m)ᵀ Z_m   (3)

Let us find properties of the mapping F → G. Taking second derivatives over Φ_m:

K''(Θ_m) (∂Θ_m/∂Φ_m)(∂Θ_m/∂Φ_m)ᵀ + (K'(Θ_m) − K'(Θ̃_m)) ∂²Θ_m/∂Φ_m² = I.

Setting Θ_m = Θ̃_m above, we get the following for a family of such mappings: ∂Θ_m/∂Φ_m |_{Θ̃_m} = [K''(Θ̃_m)]^(−1/2). In an e-family, we can always find a Θ*_m such that X_m = K'(Θ*_m). By convexity of F we create a linear lower bound at Θ*_m:

F(Θ*_m) + (Θ_m − Θ*_m)ᵀ ∂F(Θ*_m)/∂Θ_m ≤ F(Θ_m) = G(Φ_m).

Taking second derivatives over Φ_m gives F'(Θ*_m) ∂²Θ_m/∂Φ_m² ≤ I, which is rewritten as Z_m ∂²Θ_m/∂Φ_m² ≤ I. In Eq. 3, D_m + Θ_m(Φ_m)ᵀ Z_m − G(Φ_m) is always concave since its Hessian, Z_m ∂²Θ_m/∂Φ_m² − I, is negative. So, we upper bound these terms by a variational linear bound at Θ̃_m:

Σ_m w_m G(Φ_m) ≥ log [Σ_m exp{D_m + Φ_mᵀ [K''(Θ̃_m)]^(−1/2) Z_m} / Σ_m exp{D_m + Θ̃_mᵀ Z_m − G(Θ̃_m)}] − Σ_m h_m (Θ_m(Φ_m) − Θ̃_m)ᵀ Z_m.

Taking second derivatives of both sides with respect to each Φ_m, we obtain (after simplifications):

w_m I ≥ Z_m K''(Θ̃_m)^(−1) Z_mᵀ − h_m Z_m ∂²Θ_m/∂Φ_m².

If we invoke the constraint on w'_m, we can replace −h_m Z_m ∂²Θ_m/∂Φ_m² ≤ w'_m I. Manipulating, we get the constraint on w_m (as a Loewner ordering here), guaranteeing a global upper bound. □

9 Acknowledgments
The authors thank T. Minka, T. Jaakkola and K.
Popat for valuable discussions.

References
[1] Buntine, W. (1994). Operations for learning with graphical models. JAIR 2, 1994.
[2] Dempster, A.P., Laird, N.M. and Rubin, D.B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B39.
[3] Gopalakrishnan, P.S., Kanevsky, D., Nadas, A. and Nahamoo, D. (1991). An inequality for rational functions with applications to some statistical estimation problems. IEEE Trans. Information Theory, pp. 107-113, Jan. 1991.
[4] Jaakkola, T., Meila, M. and Jebara, T. (1999). Maximum entropy discrimination. NIPS 12.
[5] Jebara, T. and Jaakkola, T. (2000). Feature selection and dualities in maximum entropy discrimination. UAI 2000.
[6] Jebara, T. and Pentland, A. (1998). Maximum conditional likelihood via bound maximization and the CEM algorithm. NIPS 11.
[7] Jordan, M., Ghahramani, Z., Jaakkola, T. and Saul, L. (1997). An introduction to variational methods for graphical models. Learning in Graphical Models, Kluwer Academic.
[8] Pecaric, J.E., Proschan, F. and Tong, Y.L. (1992). Convex Functions, Partial Orderings, and Statistical Applications. Academic Press.
[9] Ghahramani, Z. and Beal, M. (1999). Variational Inference for Bayesian Mixtures of Factor Analysers. NIPS 12.
2000
Overfitting in Neural Nets: Backpropagation, Conjugate Gradient, and Early Stopping

Rich Caruana, CALD, CMU, 5000 Forbes Ave., Pittsburgh, PA 15213, caruana@cs.cmu.edu
Steve Lawrence, NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, lawrence@research.nj.nec.com
Lee Giles, Information Sciences, Penn State University, University Park, PA 16801, giles@ist.psu.edu

Abstract
The conventional wisdom is that backprop nets with excess hidden units generalize poorly. We show that nets with excess capacity generalize well when trained with backprop and early stopping. Experiments suggest two reasons for this: 1) Overfitting can vary significantly in different regions of the model. Excess capacity allows better fit to regions of high non-linearity, and backprop often avoids overfitting the regions of low non-linearity. 2) Regardless of size, nets learn task subcomponents in similar sequence. Big nets pass through stages similar to those learned by smaller nets. Early stopping can stop training the large net when it generalizes comparably to a smaller net. We also show that conjugate gradient can yield worse generalization because it overfits regions of low non-linearity when learning to fit regions of high non-linearity.

1 Introduction
It is commonly believed that large multi-layer perceptrons (MLPs) generalize poorly: nets with too much capacity overfit the training data. Restricting net capacity prevents overfitting because the net has insufficient capacity to learn models that are too complex. This belief is consistent with a VC-dimension analysis of net capacity vs. generalization: the more free parameters in the net, the larger the VC-dimension of the hypothesis space, and the less likely the training sample is large enough to select a (nearly) correct hypothesis [2]. Once it became feasible to train large nets on real problems, a number of MLP users noted that the overfitting they expected from nets with excess capacity did not occur.
Large nets appeared to generalize as well as smaller nets, sometimes better. The earliest report of this that we are aware of is Martin and Pittman in 1991: "We find only marginal and inconsistent indications that constraining net capacity improves generalization" [7]. We present empirical results showing that MLPs with excess capacity often do not overfit. On the contrary, we observe that large nets often generalize better than small nets of sufficient capacity. Backprop appears to use excess capacity to better fit regions of high non-linearity, while still fitting regions of low non-linearity smoothly. (This desirable behavior can disappear if a fast training algorithm such as conjugate gradient is used instead of backprop.) Nets with excess capacity trained with backprop appear first to learn models similar to models learned by smaller nets. If early stopping is used, training of the large net can be halted when the large net's model is similar to models learned by smaller nets.

Figure 1: Top: Polynomial fit to data from y = sin(x/3) + ν (orders 10 and 20 shown). Order 20 overfits. Bottom: Small and large MLPs (10 and 50 hidden nodes) fit to the same data. The large MLP does not overfit significantly more than the small MLP.

2 Overfitting
Much has been written about overfitting and the bias/variance tradeoff in neural nets and other machine learning models [2, 12, 4, 8, 5, 13, 6]. The top of Figure 1 illustrates polynomial overfitting. We created a training dataset by evaluating y = sin(x/3) + ν at x = 0, 1, 2, ..., 20, where ν is a uniformly distributed random variable between −0.25 and 0.25. We fit polynomial models with orders 2-20 to the data.
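A sketch of this polynomial experiment (the seed is arbitrary, and we fit in a Chebyshev basis purely for numerical conditioning; the paper's exact noise draw is unknown):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(21.0)                                  # x = 0, 1, ..., 20
y = np.sin(x / 3) + rng.uniform(-0.25, 0.25, x.size)  # noisy targets

def train_mse(order):
    # Least-squares polynomial fit of the given order (Chebyshev basis)
    fit = np.polynomial.Chebyshev.fit(x, y, deg=order)
    return np.mean((fit(x) - y) ** 2)

errs = {d: train_mse(d) for d in (2, 10, 20)}
print(errs[2] >= errs[10] >= errs[20])
```

Training error can only fall as the order grows; at order 20 the polynomial interpolates all 21 points, which is exactly where generalization suffers, as the next paragraph discusses.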
Underfitting occurs with order 2. The fit is good with order 10. As the order (and number of parameters) increases, however, significant overfitting (poor generalization) occurs. At order 20, the polynomial fits the training data well, but interpolates poorly. The bottom of Figure 1 shows MLPs fit to the data. We used a single hidden layer MLP, backpropagation (BP), and 100,000 stochastic updates. The learning rate was reduced linearly to zero from an initial rate of 0.5 (reducing the learning rate improves convergence, and linear reduction performs similarly to other schedules [3]). This schedule and number of updates trains the MLPs to completion. (We examine early stopping in Section 4.) As with polynomials, the smallest net with one hidden unit (4 weights) underfits the data. The fit is good with two hidden units (HU) (7 weights). Unlike polynomials, however, networks with 10 HU (31 weights) and 50 HU (151 weights) also yield good models. MLPs with seven times as many parameters as data points trained with BP do not significantly overfit this data. The experiments in Section 4 confirm that this bias of BP-trained MLPs towards smooth models is not limited to the simple 2-D problem used here.

3 Local Overfitting
Regularization methods such as weight decay typically assume that overfitting is a global phenomenon. But overfitting can vary significantly in different regions of a model. Figure 2 shows polynomial fits for data generated from the following equation:

    y = −cos(x) + ν          for 0 ≤ x < π
    y = cos(3(x − π)) + ν    for π ≤ x ≤ 2π        (Equation 1)

Five equally spaced points were generated in the first region, and 15 in the second region, so that the two regions have different data densities and different underlying functions. Overfitting is different in the two regions. In Figure 2 the order 6 model fits the left region well, but larger models overfit it. The order 6 model underfits the region on the right, and the order 10 model fits it better. No model performs well on both regions.

Figure 2: Polynomial approximation of data from Equation 1 as the order of the model is increased from 2 to 16 (orders 2, 6, 10, and 16 shown). The overfitting behavior differs in the left and right hand regions.

Figure 3 shows MLPs trained on the same data (20,000 batch updates, learning rate linearly reduced to zero starting at 0.5). Small nets underfit. Larger nets, however, fit the entire function well without significant overfitting in the left region. The ability of MLPs to fit both regions of low and high non-linearity well (without overfitting) depends on the training algorithm. Conjugate gradient (CG) is the most popular second order method. CG results in lower training error for this problem, but overfits significantly. Figure 4 shows results for 10 trials for BP and CG. Large BP nets generalize better on this problem -- even the optimal size CG net is prone to overfitting. The degree of overfitting varies in different regions. When the net is large enough to fit the region of high non-linearity, overfitting is often seen in the region of low non-linearity.

4 Generalization, Network Capacity, and Early Stopping
The results in Sections 2 and 3 suggest that BP nets are less prone to overfitting than expected. But MLPs can and do overfit. This section examines overfitting vs. net size on seven problems: NETtalk [10], 7 and 12 bit parity, an inverse kinematic model for a robot arm (thanks to Sebastian Thrun for the simulator), Base 1 and Base 2 (two sonar modeling problems using data collected from a robot wandering hallways at CMU), and vision data used to learn to steer an autonomous car [9].
These problems exhibit a variety of characteristics. Some are Boolean, others continuous. Some have noise, others are noise-free. Some have many inputs or outputs, others few.

4.1 Results

For each problem we used small training sets (100-1000 points, depending on the problem) so that overfitting was possible. We trained fully connected feedforward MLPs with one hidden layer whose size varied from 2 to 800 HU (about 500-100,000 parameters). All the nets were trained with BP using stochastic updates, learning rate 0.1, and momentum 0.9. We used early stopping for regularization because it doesn't interfere with backprop's ability to control capacity locally. Early stopping combined with backprop is so effective that very large nets can be trained without significant overfitting. Section 4.2 explains why.

Figure 3: MLP approximation using backpropagation (BP) training of data from Equation 1 as the number of hidden units is increased (panels: 1, 4, 10, and 100 hidden units). No significant overfitting can be seen.

Figure 4: Test Normalized Mean Squared Error vs. number of hidden nodes for MLPs trained with BP (left) and CG (right). Results are shown with both box-whiskers plots and the mean plus and minus one standard deviation.

Figure 5 shows generalization curves for four of the problems. Examining the results for all seven problems, we observe that only on three (Base 1, Base 2, and ALVINN) do nets that are too large yield worse generalization than smaller networks, and even there the loss is surprisingly small. Many trials were required before statistical tests confirmed that the differences between the optimal size net and the largest net were significant.
Moreover, the results suggest that generalization is hurt more by using a net that is a little too small than by using one that is far too large, i.e., it is better to make nets too large than too small. For most tasks and net sizes, we trained well beyond the point where generalization performance peaked. Because we had complete generalization curves, we noticed something unexpected. On some tasks, small nets overtrained considerably. The NETtalk graph in Figure 5 is a good example. Regularization (e.g., early stopping) is critical for nets of all sizes, not just ones that are too big. Nets with restricted capacity can overtrain.

4.2 Why Excess Capacity Does Not Hurt

BP nets initialized with small weights can develop large weights only after the number of updates is large. Thus BP nets consider hypotheses with small weights before hypotheses with large weights. Nets with large weights have more representational power, so simple hypotheses are explored before complex hypotheses.

[Figure 5 plots: test error vs. pattern presentations on the NETtalk, Inverse Kinematics, Base 1, and Base 2 tasks for nets of 2 to 512 hidden units; the Base 1 and Base 2 curves are averages of 10 runs.]
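The small-weights-first dynamic is easy to observe directly. The sketch below (a toy setup of our own, not one of the paper's seven problems) trains a one-hidden-layer net by plain batch backprop from small initial weights and tracks the total weight magnitude, which grows as training proceeds:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy regression data and a one-hidden-layer tanh MLP trained by backprop.
x = np.linspace(-1, 1, 20).reshape(-1, 1)
y = np.sin(3 * x)

h = 10
W1 = rng.normal(0, 0.1, (1, h))   # small initial weights
b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, (h, 1))
b2 = np.zeros(1)

def weight_norm():
    # Total magnitude of all weights and biases.
    return sum(np.abs(w).sum() for w in (W1, b1, W2, b2))

norms = [weight_norm()]
lr = 0.1
for step in range(5000):
    # Forward pass.
    a = np.tanh(x @ W1 + b1)
    out = a @ W2 + b2
    err = out - y
    # Backward pass: gradients of the mean squared error.
    gW2 = a.T @ err / len(x)
    gb2 = err.mean(0)
    da = (err @ W2.T) * (1 - a ** 2)
    gW1 = x.T @ da / len(x)
    gb1 = da.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
    if step % 500 == 0:
        norms.append(weight_norm())

print(f"total |w|: {norms[0]:.2f} -> {norms[-1]:.2f}")
```

Early in training the net's weights, and hence its effective capacity, stay small; the large-weight (complex) hypotheses only become reachable late in training.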
Figure 5: Generalization performance vs. net size for four of the seven test problems.

We analyzed what nets of different size learn while they are trained. We compared the input/output behavior of nets at different stages of learning on large samples of test patterns. We compare the input/output behavior of two nets by computing the squared error between the predictions made by the two nets. If two nets make the same predictions for all test cases, they have learned the same model (even though each model is represented differently), and the squared error between the two models is zero. If two nets make different predictions for test cases, they have learned different models, and the squared error between them is large. This is not the error the models make predicting the true labels, but the difference between predictions made by two different models. Two models can have poor generalization (large error on true labels), yet have near-zero error compared to each other if they are similar models. But two models with good generalization (low error on true labels) must have low error compared to each other. The first graph in Figure 5 shows learning curves for nets with 10, 25, 50, 100, 200, and 400 HU trained on NETtalk. For each size, we saved the net from the epoch that generalized best on a large test set. This gives us the best model of each size found by backprop. We then trained a BP net with 800 HU, and after each epoch compared this net's model with the best models saved for nets of 10-400 HU. This lets us compare the sequence of models learned by the 800 HU net to the best models learned by smaller nets. Figure 6 shows this comparison. The horizontal axis is the number of backprop passes applied to the 800 HU net.
The vertical axis is the error between the 800 HU net model and the best model for each smaller net. The 800 HU net starts off distant from the good smaller models, then becomes similar to the good models, and then diverges from them. This is expected. What is interesting is that the 800 HU net first becomes closest to the best 10 HU net, then closest to the 25 HU net, then closest to the 50 HU net, etc. As it is trained, the 800 HU net learns a sequence of models similar to the models learned by smaller nets. If early stopping is used, training of the 800 HU net can be stopped when it behaves similarly to the best model that could be learned with nets of 10, 25, 50, ... HU.

Figure 6: I/O similarity during training between an 800 hidden unit net and smaller nets (10, 25, 50, 100, 200, and 400 hidden units) trained on NETtalk.

Large BP nets learn models similar to those learned by smaller nets. If a BP net with too much capacity would overfit, early stopping could stop training when the model was similar to a model that would have been learned by a smaller net of optimal size. The error between models is about 200-400, yet the generalization error is about 1600. The models are much closer to each other than any of them are to the true model. With early stopping, what counts is the closest approach of each model to the target function, not where models end up late in training.
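The comparison metric above, the mean squared difference between two models' predictions rather than against the true labels, can be sketched as follows (polynomial models stand in for the MLPs, purely for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data from a smooth target; two "models" of different capacity.
x_train = np.sort(rng.uniform(0, 2 * np.pi, 30))
y_train = np.sin(x_train) + rng.normal(0, 0.1, x_train.size)

model_a = np.polyfit(x_train, y_train, 3)    # smaller model
model_b = np.polyfit(x_train, y_train, 5)    # larger model

def io_distance(c1, c2, x_test):
    """Squared error between the *predictions* of two models.

    This compares input/output behaviour directly, ignoring the true
    labels: zero means the two models implement the same function."""
    return np.mean((np.polyval(c1, x_test) - np.polyval(c2, x_test)) ** 2)

x_test = np.linspace(0, 2 * np.pi, 1000)     # large test sample
d_ab = io_distance(model_a, model_b, x_test)
d_aa = io_distance(model_a, model_a, x_test)
print(f"distance(a, b) = {d_ab:.4f}, distance(a, a) = {d_aa:.4f}")
```

Tracking this distance between a large model's snapshots and a family of saved smaller models reproduces the style of comparison behind Figure 6.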
With early stopping there is little disadvantage to using models that are too large because their learning trajectories are similar to those followed by smaller nets of more optimal size.

5 Related Work

Our results show that models learned by backprop are biased towards "smooth" solutions. As nets with excess capacity are trained, they first explore smoother models similar to the models smaller nets would have learned. Weigend [11] performed an experiment showing that BP nets learn a problem's eigenvectors in sequence, learning the 1st eigenvector first, then the 2nd, etc. His result complements our analysis of what nets of different sizes learn: if large nets learn an eigenvector sequence similar to smaller nets, then the models learned by the large net will pass through intermediate stages similar to what is learned by small nets (but only if nets of different sizes learn the eigenvectors equally well, which is an assumption we do not need to make). Theoretical work by Bartlett [1] supports our results. Bartlett notes: "the VC-bounds seem loose; neural nets often perform successfully with training sets that are considerably smaller than the number of weights." Bartlett shows (for classification) that the number of training samples only needs to grow according to A^{2l} (ignoring log factors) to avoid overfitting, where A is a bound on the total weight magnitudes and l is the number of layers in the network. This result suggests that a net with smaller weights will generalize better than a similar net with large weights. Examining the weights from BP and CG nets shows that BP training typically results in smaller weights.

6 Summary

Nets of all sizes overfit some problems. But generalization is surprisingly insensitive to excess capacity if the net is trained with backprop.
Because BP nets with excess capacity learn a sequence of models functionally similar to what smaller nets learn, early stopping can often be used to stop training large nets when they have learned models similar to those learned by smaller nets of optimal size. This means there is little loss in generalization performance for nets with excess capacity if early stopping can be used. Overfitting is not a global phenomenon, although methods for controlling it often assume that it is. Overfitting can vary significantly in different regions of the model. MLPs trained with BP use excess parameters to improve fit in regions of high non-linearity, while not significantly overfitting other regions. Nets trained with conjugate gradient, however, are more sensitive to net size. BP nets appear to be better than CG nets at avoiding overfitting in regions with different degrees of non-linearity, perhaps because CG is more effective at learning more complex functions that overfit training data, while BP is biased toward learning smoother functions.

References

[1] Peter L. Bartlett. For valid generalization the size of the weights is more important than the size of the network. In Advances in Neural Information Processing Systems, volume 9, page 134. The MIT Press, 1997.
[2] E.B. Baum and D. Haussler. What size net gives valid generalization? Neural Computation, 1(1):151-160, 1989.
[3] C. Darken and J.E. Moody. Note on learning rate schedules for stochastic optimization. In Advances in Neural Information Processing Systems, volume 3, pages 832-838. Morgan Kaufmann, 1991.
[4] S. Geman et al. Neural networks and the bias/variance dilemma. Neural Computation, 4(1):1-58, 1992.
[5] A. Krogh and J.A. Hertz. A simple weight decay can improve generalization. In Advances in Neural Information Processing Systems, volume 4, pages 950-957. Morgan Kaufmann, 1992.
[6] Y. Le Cun, J.S. Denker, and S.A. Solla. Optimal Brain Damage. In D.S.
Touretzky, editor, Advances in Neural Information Processing Systems, volume 2, pages 598-605, San Mateo, 1990 (Denver 1989). Morgan Kaufmann.
[7] G.L. Martin and J.A. Pittman. Recognizing hand-printed letters and digits using backpropagation learning. Neural Computation, 3:258-267, 1991.
[8] J.E. Moody. The effective number of parameters: An analysis of generalization and regularization in nonlinear learning systems. In Advances in Neural Information Processing Systems, volume 4, pages 847-854. Morgan Kaufmann, 1992.
[9] D.A. Pomerleau. ALVINN: An autonomous land vehicle in a neural network. In D.S. Touretzky, editor, Advances in Neural Information Processing Systems, volume 1, pages 305-313, San Mateo, 1989 (Denver 1988). Morgan Kaufmann.
[10] T. Sejnowski and C. Rosenberg. Parallel networks that learn to pronounce English text. Complex Systems, 1:145-168, 1987.
[11] A. Weigend. On overfitting and the effective number of hidden units. In Proceedings of the 1993 Connectionist Models Summer School, pages 335-342. Lawrence Erlbaum Associates, 1993.
[12] A.S. Weigend, D.E. Rumelhart, and B.A. Huberman. Generalization by weight-elimination with application to forecasting. In Advances in Neural Information Processing Systems, volume 3, pages 875-882. Morgan Kaufmann, 1991.
[13] D. Wolpert. On bias plus variance. Neural Computation, 9(6):1211-1243, 1997.
From Mixtures of Mixtures to Adaptive Transform Coding

Cynthia Archer and Todd K. Leen
Department of Computer Science and Engineering
Oregon Graduate Institute of Science & Technology
20000 N.W. Walker Rd, Beaverton, OR 97006-1000
E-mail: archer, tleen@cse.ogi.edu

Abstract

We establish a principled framework for adaptive transform coding. Transform coders are often constructed by concatenating an ad hoc choice of transform with suboptimal bit allocation and quantizer design. Instead, we start from a probabilistic latent variable model in the form of a mixture of constrained Gaussian mixtures. From this model we derive a transform coding algorithm, which is a constrained version of the generalized Lloyd algorithm for vector quantizer design. A byproduct of our derivation is the introduction of a new transform basis, which unlike other transforms (PCA, DCT, etc.) is explicitly optimized for coding. Image compression experiments show adaptive transform coders designed with our algorithm improve compressed image signal-to-noise ratio up to 3 dB compared to global transform coding and 0.5 to 2 dB compared to other adaptive transform coders.

1 Introduction

Compression algorithms for image and video signals often use transform coding as a low-complexity alternative to vector quantization (VQ). Transform coders compress multi-dimensional data by transforming the signal vectors to new coordinates and coding the transform coefficients independently of one another with scalar quantizers. The coordinate transform may be fixed a priori as in the discrete cosine transform (DCT). It can also be adapted to the signal statistics using, for example, principal component analysis (PCA), where the goal is to concentrate signal energy in a few signal components. Noting that signals such as images and speech are nonstationary, several researchers have developed non-linear [1, 2] and local linear or adaptive [3, 4] PCA transforms for dimension reduction.¹
None of these transforms are designed to minimize compression distortion, nor are they designed in concert with quantizer development.

¹ In dimension reduction the original d-dimensional signal is projected onto a subspace or submanifold of lower dimension. The retained coordinates are not quantized.

Several researchers have extended the idea of local linear transforms to transform coding [5, 6, 7]. In these adaptive transform coders, the signal space is partitioned into disjoint regions and a transform and set of scalar quantizers are designed for each region. In our own previous work [7], we use k-means partitioning to define the regions. Dony and Haykin [5] partition the space to minimize dimension-reduction error. Tipping and Bishop [6] use soft partitioning according to a probabilistic rule that reduces, in the appropriate limit, to partitioning by dimension-reduction error. These systems neither design transforms nor partition the signal space with the goal of minimizing compression distortion. This ad hoc construction contrasts sharply with the solid grounding of vector quantization. Nowlan [8] develops a probabilistic framework for VQ by demonstrating the correspondence between a VQ and a mixture of spherically symmetric Gaussians. In the limit that the mixture component variance goes to zero, the Expectation-Maximization (EM) procedure for fitting the mixture model to data becomes identical to the Linde-Buzo-Gray (LBG) algorithm [9] for vector quantizer design. This paper develops a similar grounding for both global and adaptive (local) transform coding. We define a constrained mixture of Gaussians model that provides a framework for transform coder design. Our new design algorithm is simply a constrained version of the LBG algorithm. It iteratively optimizes the signal space partition, the local transforms, the allocation of coding bits, and the scalar quantizer reproduction values until it reaches a local distortion minimum.
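For reference, the unconstrained LBG loop that our design algorithm constrains has the following outline (a plain NumPy sketch; the 2-D Gaussian data and codebook size of 8 are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def lbg(x, k, iters=50):
    """Generalized Lloyd (LBG) codebook design: alternate nearest-codeword
    assignment and centroid update until a local distortion minimum."""
    codebook = x[rng.choice(len(x), k, replace=False)].copy()
    distortions = []
    for _ in range(iters):
        # Partition step: assign each vector to its nearest codeword.
        d2 = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)
        distortions.append(d2[np.arange(len(x)), assign].mean())
        # Centroid step: move each codeword to the mean of its cell.
        for j in range(k):
            cell = x[assign == j]
            if len(cell):
                codebook[j] = cell.mean(axis=0)
    return codebook, distortions

x = rng.normal(size=(500, 2))
codebook, distortions = lbg(x, k=8)
print(f"distortion: {distortions[0]:.3f} -> {distortions[-1]:.3f}")
```

Both steps are individually non-increasing in distortion, which is why the loop converges to a local minimum; the constrained version keeps this structure but restricts the codewords to transformed grid vertices.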
This approach leads to two new results, an orthogonal transform and a method of partitioning the signal space, both designed to minimize coding error.

2 Global Transform Coder Model

In this section, we develop a constrained mixture of Gaussians model that provides a probabilistic framework for global transform coding.

2.1 Latent Variable Model

A transform coder converts a signal to new coordinates and then codes the coordinate values independently of one another with scalar quantizers. To replicate this structure, we envision the data as drawn from a d-dimensional latent data space, S, in which the density p(s) = p(s_1, s_2, ..., s_d) is a product of the marginal densities, p_J(s_J), J = 1...d.

Figure 1: Structure of latent variable space, S, and mapping to observed space, X. The latent data density consists of a mixture of spherical Gaussians with component means q_α constrained to lie at the vertices of a rectangular grid. The latent data is mapped to the observed space by an orthogonal transform, W.

We model the density in the latent space with a constrained mixture of Gaussian densities

p(s) = Σ_{α=1}^{K} π_α p(s|α)    (1)

where π_α are the mixing coefficients and p(s|α) = N(q_α, Σ_α) is Gaussian with mean q_α and variance Σ_α. The mixture component means, q_α, lie at the vertices of a rectangular grid as illustrated in figure (1). The coordinates of q_α are [r_{1i_1}, r_{2i_2}, ..., r_{di_d}]^T, where r_{Ji_J} is the i_J-th grid mark on the s_J axis. There are K_J grid mark values on the s_J axis, so the total number of grid vertices is K = Π_J K_J. We constrain the mixture component variances, Σ_α, to be spherically symmetric with the same variance, σ²I, with I the identity matrix. We do not fit σ² to the data, but treat it as a "knob", which we will turn to zero to reveal a transform coder. These mean and variance constraints yield marginal densities p_J(s_J|i_J) = N(r_{Ji_J}, σ²). We write the density of s conditioned on α as

p(s|α) = p(s_1, ..., s_d | α(i_1, ..., i_d)) = Π_{J=1}^{d} p_J(s_J|i_J)    (2)

and constrain each π_α to be a product of prior probabilities, π_α(i_1, ..., i_d) = Π_J p_{Ji_J}. Incorporating these constraints into (1) and noting that the sum over the mixture components α is equivalent to sums over all grid mark values, the latent density becomes

p(s) = Σ_{i_1=1}^{K_1} Σ_{i_2=1}^{K_2} ... Σ_{i_d=1}^{K_d} Π_J p_{Ji_J} p_J(s_J|i_J) = Π_{J=1}^{d} Σ_{i_J=1}^{K_J} p_{Ji_J} p_J(s_J|i_J)    (3)

where the second equality comes by regrouping terms. The latent data is mapped to the observation space by an orthogonal transformation, W (figure 1). Using p(x|s) = δ(x − Ws − μ) and (1), the density on observed data x conditioned on component α is p(x|α) = N(Wq_α + μ, σ²I). The total density on x is

p(x) = Σ_{α=1}^{K} π_α p(x|α)    (4)

The data log-likelihood for N data vectors, {x_n, n = 1...N}, averaged over the posterior probabilities p(α|x_n), is

⟨L⟩ = Σ_{n=1}^{N} Σ_α p(α|x_n) [ln p(x_n|α) + ln π_α]    (5)

2.2 Model Fitting and Transform Coder Design

The model (4) can be fit to data using the EM algorithm. In the limit that the variance of the mixture components goes to zero, the EM procedure for fitting the mixture model to data corresponds to a constrained LBG (CLBG) algorithm for optimal transform coder design. In the limit σ² → 0 the entropy term, ln π_α, becomes insignificant and the component posteriors collapse to

p(α|x) = 1 if |x − Wq_α − μ|² ≤ |x − Wq_γ − μ|² for all γ, and 0 otherwise.    (6)

Each data vector is assigned to the component whose mean has the smallest Euclidean distance to it. These assignments minimize mean squared error. In the limit that σ² → 0, maximizing the likelihood (5) is equivalent to minimizing the compression distortion

D = Σ_α π_α (1/N_α) Σ_{x ∈ R_α} |x − Wq_α − μ|²    (7)

where R_α = {x | p(α|x) = 1}, N_α is the number of x ∈ R_α, and π_α = N_α/N. To optimize the transform, we find the orientation of the current quantizer grid which minimizes (7). The transform, W, is constrained to be orthogonal, that is W^T W = I. We first define the matrix of outer products

Q = Σ_α π_α q_α ( (1/N_α) Σ_{x ∈ R_α} (x − μ)^T )    (8)

Minimizing the distortion (7) with respect to some element of W and using Lagrange multipliers to enforce the orthogonality of W yields the condition

QW = W^T Q^T    (9)

i.e., QW is symmetric. This symmetry condition and the orthogonality condition, W^T W = I, uniquely determine the coding optimal transform (COT) W. The COT reduces to the PCA transform when the data is Gaussian. However, in general the COT differs from PCA. For instance, in global transform coding trials on a variety of grayscale images, the COT improves signal-to-noise ratio (SNR) relative to PCA by 0.2 to 0.35 dB for fixed-rate coding at 1.0 bits per pixel (bpp). For variable-rate coding, SNR improvement due to using the COT is substantial, 0.3 to 1.2 dB for entropies of 0.25 to 1.25 bpp. We next minimize (7) with respect to the grid mark values, r_{Ji_J}, for J = 1...d and i_J = 1...K_J, and the number of grid values K_J for each coordinate. It is advantageous to rewrite the compression distortion as the sum of distortions D = Σ_J D_J due to quantizing the transform coefficients s_J = w_J^T x, where w_J is the J-th column vector of W. The r_{Ji_J} grid mark values that minimize each D_J are the reproduction values of a scalar Lloyd quantizer [10] designed for the transform coefficients, s_J. K_J is the number of reproduction values in the quantizer for transform coordinate J. Allocating the log2(K) coding bits among the transform coordinates so that we minimize distortion [11] determines the optimal K_J's.

3 Local Transform Coder Model

In this section, we develop a mixture of constrained Gaussian mixtures model that provides a probabilistic framework for adaptive transform coding.

3.1 Latent Variable Model

A local or adaptive transform coder identifies regions in data space that require different quantizer grids and orthogonal transforms. A separate transform coder is designed for each of these regions. To replicate this structure, we envision the observed data as drawn from one of M grids in the latent space.
The latent variables, s, are modeled with a mixture of Gaussian densities, where the mixture components are constrained to lie at the grid vertices. Each grid has the same number of mixture components, K; however, the number and spacing of grid marks on each axis can differ. This is illustrated schematically (in the hard-clustering limit) in figure 2.

Figure 2: Nonstationary data model: Structure of latent variable space, S, and mapping (in the hard clustering limit) to observed space, X. The density in the latent space consists of mixtures of spherically symmetric Gaussians. The mixture component means, q_α^{(m)}, lie at the vertices of the m-th grid. Latent data is mapped to the observation space by W^{(m)}.

The density on s conditioned on a single mixture component, α in grid m,² is p(s|α, m) = N(q_α^{(m)}, σ²I). The latent density is a mixture of constrained Gaussian mixture densities

p(s) = Σ_{m=1}^{M} π_m Σ_{α=1}^{K} p(α|m) p(s|α, m)    (10)

The latent data is mapped to the observation space by orthonormal transforms W^{(m)}. The density on x conditioned on α in grid m is p(x|α, m) = N(W^{(m)} q_α^{(m)} + μ^{(m)}, σ²I). The observed data density is

p(x) = Σ_{m=1}^{M} π_m Σ_{α=1}^{K} p(α|m) p(x|α, m)    (11)

3.2 Optimal Adaptive Transform Coder Design

In the limit that σ² → 0, the EM procedure for fitting this model corresponds to a constrained LBG algorithm for adaptive transform coder design. As before, a single mixture component becomes responsible for x_n:

p(α, m|x) = 1 if |x − W^{(m)} q_α^{(m)} − μ^{(m)}|² ≤ |x − W^{(γ)} q_β^{(γ)} − μ^{(γ)}|² for all β, γ, and 0 otherwise.    (12)

The coding optimal partition assigns each data vector to the region, m, whose transform coder compresses it with the least distortion. This differs from prior methods that use other partitioning criteria such as k-means clustering or local PCA partitioning. In k-means clustering, a data vector is assigned to the coder whose mean has the smallest Euclidean distance to it.
Local PCA partitions the data space to minimize dimension reduction error [3], not the coding error. Local PCA requires a priori selection of a target dimension, instead of allowing the dimension to be optimized for the desired level of compression. To minimize distortion with respect to the transform coders, we can optimize the parameters of each region separately. A region's parameters are estimated from just the data vectors assigned to it. We find each region's transform and the number and placement of grid mark values as we did for the global transform coder.

² Each grid has its own mixture component index, α_m. We drop the m subscript from α to simplify notation.

4 Adaptive Transform Coding Results

We find the adaptive transform coder for a set of images by applying our algorithm to a training image. The data vectors are 8 x 8 image pixel blocks. Then we compress a test image using the resulting transform coder. We measure compressed test image quality with signal-to-noise ratio, SNR = 10 log10(pixel variance / MSE), where MSE is the per-pixel mean-squared coding error. Our implementation modifies codebook optimization to reduce computational requirements. First, instead of using optimal bit allocation, we use a greedy algorithm [12], which allocates bits one at a time to the coordinate with the largest distortion. In global transform coding trials (0.375 to 0.75 bpp), this substitution reduced SNR by < 0.1 dB. Second, instead of using the coding optimal transform (9), we use the PCA transform. In global transform coding trials (0.25 to 0.75 bpp), this substitution reduced SNR by 0.05 to 0.27 dB. We report on compression experiments using two types of images, Magnetic Resonance Images (MRI) and gray-scale natural images of traffic moving through street intersections. These MRI images were used by Dony and Haykin in [5] and we duplicate their image pre-processing.
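The greedy allocation rule just described admits a very short sketch. Note the high-rate distortion model σ²·2^(−2b) used below is an assumption for illustration; the actual design loop works with measured quantizer distortions instead:

```python
import numpy as np

def greedy_bit_allocation(variances, total_bits):
    """Greedy allocation: give bits one at a time to the coordinate whose
    current quantizer distortion is largest.  Per-coordinate distortion is
    approximated here by the high-rate model sigma^2 * 2^(-2b)."""
    variances = np.asarray(variances, dtype=float)
    bits = np.zeros(len(variances), dtype=int)
    for _ in range(total_bits):
        distortion = variances * 2.0 ** (-2 * bits)
        j = int(distortion.argmax())   # ties resolved to the first coordinate
        bits[j] += 1
    return bits

# Transform coefficients with strongly unequal variances get unequal bits.
alloc = greedy_bit_allocation([16.0, 4.0, 1.0, 0.25], total_bits=8)
print("bits per coordinate:", alloc)
```

Each allocated bit roughly quarters the distortion of the coordinate that receives it, so high-variance coordinates soak up most of the budget.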
One MRI image is decomposed into overlapping 8 x 8 blocks to form 15,625 training vectors; a second image is used for testing. The traffic images are frames from two video sequences. We use frames from the first half of both sequences for training and frames from the last halves for testing.

Figure 3: Compressed test image SNR vs. compressed bit-rate (bpp). (a) MRI test image SNR; all adaptive coders have 16 regions. (b) Traffic test image SNR; all adaptive coders have 32 regions. The x is our coding optimal partition, o local PCA partition with dimension eight, • k-means clustering, and + is global PCA. The dotted line values are local PCA results from [5]. Errorbars indicate the standard deviation of 8 trials.

Figure 3 shows compressed test image SNR for four compressed bit-rates and four compression methods. The quoted bit-rates include the bits necessary to specify region assignments. The x results are for our transform coder, which uses coding optimal partitioning. Our system increases SNR compared to global PCA (+) by 2.3 to 3.0 dB, k-means clustering (•) by 1.1 to 1.8 dB, and local PCA partitioning with target dimension eight (o) by 0.5 to 2.0 dB. In addition, our system yields image SNRs 1.6 to 3.0 dB higher than Dony and Haykin's local PCA transform coder (dimension eight) [5]. Their local PCA coder does not use optimal bit allocation or Lloyd quantizers, which further reduces compressed image SNR.

5 Summary

In this paper, we cast the design of both conventional and adaptive transform coders as a constrained optimization procedure. We derive our algorithm from the EM procedure for fitting a mixture of mixtures model to data.
In contrast to standard transform coder design, all operations: partitioning the signal space (for the adaptive case), transform design, allocation of coding bits, and quantizer design, are coupled together to minimize compression distortion. This approach leads to a new transform basis that is optimized for coding. The coding optimal transform is in general different from PCA. This approach also leads to a method of data space partitioning that is optimized for coding. This method assigns each signal vector to the coder that compresses it with the least distortion. Our empirical results show marked SNR improvement (0.5 to 2 dB) relative to other partitioning methods.

Acknowledgements

The authors wish to thank Robert Dony and Simon Haykin for the use of their MRI image data and the Institut für Algorithmen und Kognitive Systeme, Universität Karlsruhe, for making their traffic images available. This work was funded by NSF under grants ECS-9704094 and ECS-9976452.

References

[1] Mark A. Kramer. Nonlinear principal component analysis using autoassociative neural networks. AIChE Journal, 37(2):233-243, February 1991.
[2] David DeMers and Garrison Cottrell. Non-linear dimensionality reduction. In Giles, Hanson, and Cowan, editors, Advances in Neural Information Processing Systems 5, San Mateo, CA, 1993. Morgan Kaufmann.
[3] Nanda Kambhatla and Todd K. Leen. Fast non-linear dimension reduction. In Cowan, Tesauro, and Alspector, editors, Advances in Neural Information Processing Systems 6, pages 152-159. Morgan Kaufmann, Feb 1994.
[4] G. Hinton, M. Revow, and P. Dayan. Recognizing handwritten digits using mixtures of linear models. In Tesauro, Touretzky, and Leen, editors, Advances in Neural Information Processing Systems 7, pages 1015-1022. MIT Press, 1995.
[5] Robert D. Dony and Simon Haykin. Optimally adaptive transform coding. IEEE Transactions on Image Processing, 4(10):1358-1370, 1995.
[6] M. Tipping and C. Bishop.
Mixture of probabilistic principal component analyzers. Neural Computation, 11(2):443-483, 1999.
[7] C. Archer and T.K. Leen. Optimal dimension reduction and transform coding with mixture principal components. In Proceedings of the International Joint Conference on Neural Networks, July 1999.
[8] Steve Nowlan. Soft Competitive Adaptation: Neural Network Learning Algorithms Based on Fitting Statistical Mixtures. PhD thesis, School of Computer Science, Carnegie Mellon University, 1991.
[9] Y. Linde, A. Buzo, and R.M. Gray. An algorithm for vector quantizer design. IEEE Transactions on Communications, 28(1):84-95, January 1980.
[10] S. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-137, 1982.
[11] Eve A. Riskin. Optimal bit allocation via the generalized BFOS algorithm. IEEE Transactions on Information Theory, 37(2):400-402, 1991.
[12] A. Gersho and R. Gray. Vector Quantization and Signal Compression. Kluwer Academic, 1992.
Algorithmic Stability and Generalization Performance

Olivier Bousquet
CMAP, Ecole Polytechnique
F-91128 Palaiseau cedex, FRANCE
bousquet@cmapx.polytechnique.fr

Andre Elisseeff*
Barnhill Technologies
6709 Waters Avenue
Savannah, GA 31406, USA
andre@barnhilltechnologies.com

Abstract

We present a novel way of obtaining PAC-style bounds on the generalization error of learning algorithms, explicitly using their stability properties. A stable learner is one for which the learned solution does not change much with small changes in the training set. The bounds we obtain do not depend on any measure of the complexity of the hypothesis space (e.g. VC dimension) but rather depend on how the learning algorithm searches this space, and can thus be applied even when the VC dimension is infinite. We demonstrate that regularization networks possess the required stability property and apply our method to obtain new bounds on their generalization performance.

1 Introduction

A key issue in computational learning theory is to bound the generalization error of learning algorithms. Until recently, most of the research in that area has focused on uniform a-priori bounds giving a guarantee that the difference between the training error and the test error is uniformly small for any hypothesis in a given class. These bounds are usually expressed in terms of combinatorial quantities such as VC-dimension. In the last few years, researchers have tried to use more refined quantities to either estimate the complexity of the search space (e.g. covering numbers [1]) or to use a posteriori information about the solution found by the algorithm (e.g. margin [11]). There exist other approaches such as observed VC dimension [12], but all are concerned with structural properties of the learning systems. In this paper we present a novel way of obtaining PAC bounds for specific algorithms explicitly using their stability properties.
The notion of stability, introduced by Devroye and Wagner [4] in the context of classification for the analysis of the leave-one-out error, and further refined by Kearns and Ron [8], is used here in the context of regression in order to get bounds on the empirical error rather than the leave-one-out error. This method has the nice advantage of providing bounds that do not depend on any complexity measure of the search space (e.g. VC-dimension or covering numbers) but rather on the way the algorithm searches this space. (*This work was done while the author was at Laboratoire ERIC, Universite Lumiere Lyon 2, 5 avenue Pierre Mendes-France, F-69676 Bron cedex, FRANCE.) In that respect, our approach can be related to Freund's [7], where the estimated size of the subset of the hypothesis space actually searched by the algorithm is used to bound its generalization error. However, Freund's result depends on a complexity term which we do not have here, since we are not looking separately at the hypotheses considered by the algorithm and their risk. The paper is structured as follows: the next section introduces the notations and the definition of stability used throughout the paper. Section 3 presents our main result as a PAC-like theorem. In Section 4 we prove that regularization networks are stable and apply the main result to obtain bounds on their generalization ability. A discussion of the results is presented in Section 5.

2 Notations and Definitions

Let $\mathcal{X}$ and $\mathcal{Y}$ be respectively an input and an output space. We consider a learning set $S = \{z_1 = (x_1, y_1), \ldots, z_m = (x_m, y_m)\}$ of size $m$ in $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$ drawn i.i.d. from an unknown distribution $D$. A learning algorithm is a function $A$ from $\mathcal{Z}^m$ into $\mathcal{Y}^{\mathcal{X}}$ mapping a learning set $S$ onto a function $f_S$ from $\mathcal{X}$ to $\mathcal{Y}$. To avoid complex notations, we consider only deterministic algorithms. It is also assumed that the algorithm $A$ is symmetric with respect to $S$, i.e. for any permutation of the elements of $S$, $f_S$ yields the same result.
Furthermore, we assume that all functions are measurable and all sets are countable, which does not limit the interest of the results presented here. The empirical error of a function $f$ measured on the training set $S$ is:
$$R_m(f) = \frac{1}{m} \sum_{i=1}^m c(f, z_i)$$
where $c : \mathcal{Y}^{\mathcal{X}} \times \mathcal{X} \times \mathcal{Y} \to \mathbb{R}_+$ is a cost function. The risk or generalization error can be written as:
$$R(f) = \mathbb{E}_{z \sim D}\left[c(f, z)\right]$$
The study we describe here intends to bound the difference between empirical and generalization error for specific algorithms. More precisely, our goal is to bound, for any $\epsilon > 0$, the term
$$P_{S \sim D^m}\left[\,|R_m(f_S) - R(f_S)| > \epsilon\,\right] \quad (1)$$
Usually, learning algorithms cannot output just any function in $\mathcal{Y}^{\mathcal{X}}$ but rather pick a function $f_S$ in a set $\mathcal{F} \subseteq \mathcal{Y}^{\mathcal{X}}$ representing the structure, the architecture, or the model. Classical VC theory deals with structural properties and aims at bounding the following quantity:
$$P_{S \sim D^m}\left[\,\sup_{f \in \mathcal{F}} |R_m(f) - R(f)| > \epsilon\,\right] \quad (2)$$
This applies to any algorithm using $\mathcal{F}$ as a hypothesis space, and a bound on this quantity directly implies a similar bound on (1). However, classical bounds require the VC dimension of $\mathcal{F}$ to be finite and do not use information about algorithmic properties. For a given set $\mathcal{F}$, there exist many ways to search it, which may yield different performance. For instance, multilayer perceptrons can be learned by a simple backpropagation algorithm or combined with a weight decay procedure. The outcome of the algorithm belongs in both cases to the same set of functions, although the performance can be different. VC theory was initially motivated by empirical risk minimization (ERM), in which case the uniform bounds on the quantity (2) give tight error bounds. Intuitively, the empirical risk minimization principle relies on a uniform law of large numbers. Because it is not known in advance what the minimum of the empirical risk will be, it is necessary to study the difference between empirical and generalization error for all possible functions in $\mathcal{F}$.
If, now, we do not consider this minimum but instead focus on the outcome of a learning algorithm $A$, we may then know a little more about what kind of functions will be obtained. This limits the possibilities and restricts the supremum over all the functions in $\mathcal{F}$ to the possible outcomes of the algorithm. An algorithm which always outputs the null function does not need to be studied by a uniform law of large numbers. Let us introduce a notation for modified training sets: if $S$ denotes the initial training set, $S = \{z_1, \ldots, z_{i-1}, z_i, z_{i+1}, \ldots, z_m\}$, then $S^i$ denotes the training set after $z_i$ has been replaced by a different training example $z_i'$, that is, $S^i = \{z_1, \ldots, z_{i-1}, z_i', z_{i+1}, \ldots, z_m\}$. Now, we define a notion of stability for regression.

Definition 1 (Uniform stability) Let $S = \{z_1, \ldots, z_m\}$ be a training set, $S^i$ the training set where example $z_i$ has been replaced by $z_i'$, and $A$ a symmetric algorithm. We say that $A$ is $\beta$-stable if the following holds:
$$\forall S \in \mathcal{Z}^m,\ \forall z_i', z \in \mathcal{Z}:\quad |c(f_S, z) - c(f_{S^i}, z)| \le \beta \quad (3)$$
This condition expresses that for any possible training set $S$ and any replacement example $z_i'$, the difference in cost (measured on any instance in $\mathcal{Z}$) incurred by the learning algorithm when training on $S$ and on $S^i$ is smaller than some constant $\beta$.

3 Main result

A stable algorithm, i.e. a $\beta$-stable algorithm with a small $\beta$, has the property that replacing one element in its learning set does not change its outcome much. As a consequence, the empirical error, thought of as a random variable, should have a small variance. Stable algorithms are then good candidates for their empirical error to be close to their generalization error. This assertion is formulated in the following theorem:

Theorem 2 Let $A$ be a $\beta$-stable algorithm such that $0 \le c(f_S, z) \le M$ for all $z \in \mathcal{Z}$ and all learning sets $S$.
For all $\epsilon > 0$ and any $m \ge 1$, we have
$$P_{S \sim D^m}\left[\,|R_m(f_S) - R(f_S)| > \epsilon\,\right] \le \frac{64 M m \beta + 8 M^2}{m \epsilon^2} \quad (4)$$
and, likewise for any $m \ge 1$,
$$P_{S \sim D^m}\left[\,|R_m(f_S) - R(f_S)| > \epsilon + \beta\,\right] \le 2 \exp\!\left(-\frac{m \epsilon^2}{2 (m\beta + M)^2}\right) \quad (5)$$
Notice that this theorem gives tight bounds when the stability $\beta$ is of the order of $1/m$. It will be proved in the next section that regularization networks satisfy this requirement. In order to prove Theorem 2, one has to study the random variable $X = R(f_S) - R_m(f_S)$, which can be done using two different approaches. The first one (corresponding to the exponential inequality) uses a classical martingale inequality and is detailed below. The second one is a bit more technical and requires the use of standard proof techniques such as symmetrization. Here we only briefly sketch this proof and refer the reader to [5] for more details.

Proof of inequality (5). We use the following theorem:

Theorem 3 (McDiarmid [9]) Let $Y_1, \ldots, Y_n$ be $n$ i.i.d. random variables taking values in a set $A$, and assume that $F : A^n \to \mathbb{R}$ satisfies, for $1 \le i \le n$,
$$\sup_{y_1, \ldots, y_n, y_i'} |F(y_1, \ldots, y_n) - F(y_1, \ldots, y_{i-1}, y_i', y_{i+1}, \ldots, y_n)| \le c_i$$
then
$$P\left[\,|F(Y_1, \ldots, Y_n) - \mathbb{E}\left[F(Y_1, \ldots, Y_n)\right]| > \epsilon\,\right] \le 2 \exp\!\left(-\frac{2 \epsilon^2}{\sum_{i=1}^n c_i^2}\right)$$
In order to apply Theorem 3, we have to bound the expectation of $X$. We begin with a useful lemma:

Lemma 1 For any symmetric learning algorithm we have, for all $1 \le i \le m$:
$$\mathbb{E}_{S \sim D^m}\left[R(f_S) - R_m(f_S)\right] = \mathbb{E}_{S, z_i' \sim D^{m+1}}\left[c(f_S, z_i') - c(f_{S^i}, z_i')\right]$$
Proof: Notice that
$$\mathbb{E}_{S \sim D^m}\left[R_m(f_S)\right] = \frac{1}{m}\sum_{j=1}^m \mathbb{E}_{S \sim D^m}\left[c(f_S, z_j)\right] = \mathbb{E}_{S \sim D^m}\left[c(f_S, z_i)\right], \quad \forall i \in \{1, \ldots, m\}$$
by symmetry and the i.i.d. assumption. Now, by simple renaming of $z_i$ as $z_i'$ we get
$$\mathbb{E}_{S \sim D^m}\left[R_m(f_S)\right] = \mathbb{E}_{S^i \sim D^m}\left[c(f_{S^i}, z_i')\right] = \mathbb{E}_{S, z_i' \sim D^{m+1}}\left[c(f_{S^i}, z_i')\right]$$
which, with the observation that $\mathbb{E}_{S \sim D^m}\left[R(f_S)\right] = \mathbb{E}_{S, z_i' \sim D^{m+1}}\left[c(f_S, z_i')\right]$, concludes the proof.

Using the above lemma and the fact that $A$ is $\beta$-stable, we easily get:
$$\mathbb{E}_{S \sim D^m}\left[R(f_S) - R_m(f_S)\right] \le \mathbb{E}_{S, z_i' \sim D^{m+1}}\left[\beta\right] = \beta$$
We now have to compute the constants $c_i$ appearing in Theorem 3. We have
$$|R(f_S) - R(f_{S^i})| \le \mathbb{E}_{z \sim D}\left[\,|c(f_S, z) - c(f_{S^i}, z)|\,\right] \le \beta$$
and
$$|R_m(f_S) - R_m(f_{S^i})| \le \frac{1}{m}\sum_{j \ne i} |c(f_S, z_j) - c(f_{S^i}, z_j)| + \frac{1}{m}\,|c(f_S, z_i) - c(f_{S^i}, z_i')| \le \beta + \frac{2M}{m}$$
so that $c_i \le 2\beta + 2M/m$. Theorem 3 applied to $R(f_S) - R_m(f_S)$ then gives inequality (5).
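As a numerical aside (not from the paper), the $\beta$ of Definition 1 can be estimated empirically for a concrete algorithm. The sketch below uses regularized least squares as a stand-in for a generic $\beta$-stable algorithm, on made-up data; it replaces each training example in turn by a fresh one and records the worst observed change in cost over a set of evaluation points. All names and constants here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_ridge(X, y, lam=1.0):
    """Regularized least squares: minimizes (1/m)||Xw - y||^2 + lam*||w||^2,
    a classic example of a beta-stable algorithm."""
    m, d = X.shape
    return np.linalg.solve(X.T @ X / m + lam * np.eye(d), X.T @ y / m)

def cost(w, X, y):
    """Pointwise squared loss c(f, z) on a batch of examples."""
    return (X @ w - y) ** 2

m, d = 50, 3
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(m, d))
y = X @ w_true + 0.1 * rng.normal(size=m)
w_S = train_ridge(X, y)

# Empirical proxy for beta: replace each z_i by a fresh example z_i' and
# record the largest resulting change in cost over evaluation points z.
X_eval = rng.normal(size=(200, d))
y_eval = X_eval @ w_true
beta_hat = 0.0
for i in range(m):
    Xi, yi = X.copy(), y.copy()
    Xi[i] = rng.normal(size=d)        # replacement example z_i'
    yi[i] = Xi[i] @ w_true
    w_Si = train_ridge(Xi, yi)
    change = np.abs(cost(w_S, X_eval, y_eval) - cost(w_Si, X_eval, y_eval)).max()
    beta_hat = max(beta_hat, change)
```

With heavy regularization, `beta_hat` comes out small relative to the scale of the costs, in line with the intuition that strongly regularized algorithms are stable.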
Sketch of the proof of inequality (4). Recall Chebyshev's inequality:
$$P(|X| \ge \epsilon) \le \frac{\mathbb{E}[X^2]}{\epsilon^2} \quad (6)$$
for any random variable $X$. In order to apply this inequality, we have to bound $\mathbb{E}[X^2]$. This can be done with a reasoning similar to the one used for the expectation. The calculations are however more complex and we do not describe them here; for more details, see [5]. The result is the following:
$$\mathbb{E}[X^2] \le M^2/m + 8 M \beta$$
which with (6) gives inequality (4) and concludes the proof.

4 Stability of Regularization Networks

4.1 Definitions

Regularization networks were introduced in machine learning by Poggio and Girosi [10]. The relationship between these networks and Support Vector Machines, as well as their Bayesian interpretation, makes them very attractive. We consider a training set $S = \{(x_1, y_1), \ldots, (x_m, y_m)\}$ with $x_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$, that is, we are in the regression setting. The regularization network technique consists in finding a function $f : \mathbb{R}^d \to \mathbb{R}$ in a space $H$ which minimizes the following functional:
$$C(f) = \frac{1}{m} \sum_{i=1}^m (f(x_i) - y_i)^2 + \lambda \|f\|_H^2 \quad (7)$$
where $\|f\|_H$ denotes the norm in the space $H$. In this framework, $H$ is chosen to be a reproducing kernel Hilbert space (RKHS), which is basically a functional space endowed with a dot product¹. An RKHS is defined by a kernel function, that is, a symmetric function $k : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$, which we will assume to be bounded by a constant $K$ in what follows². In particular, the following property will hold:
$$\forall f \in H,\ \forall x:\quad |f(x)| \le \sqrt{K}\,\|f\|_H \quad (8)$$
4.2 Stability study

In this section, we show that regularization networks are stable as soon as $\lambda$ is not too small.

Theorem 4 For regularization networks with $\sup_x k(x, x) \le K$ and $(f(x) - y)^2 \le M$, with probability at least $1 - \delta$ over the choice of the training set,
$$R(f_S) \le R_m(f_S) + \frac{4 M K}{m \lambda} + \left(\frac{4 M K}{\lambda} + M\right)\sqrt{\frac{2 \ln(2/\delta)}{m}} \quad (9)$$
and
$$R(f_S) \le R_m(f_S) + 2M \sqrt{\left(\frac{64 K}{\lambda} + 2\right)\frac{1}{m \delta}} \quad (10)$$
Proof: Let us denote by $f_S$ the minimizer of $C$, and define
$$C^i(f) = \frac{1}{m}\Big(\sum_{j \ne i} (f(x_j) - y_j)^2 + (f(x_i') - y_i')^2\Big) + \lambda \|f\|_H^2$$
Let $f_{S^i}$ be the minimizer of $C^i$, and let $g$ denote the difference $f_{S^i} - f_S$.
By simple algebra, we have for $t \in [0, 1]$:
$$C(f_S) - C(f_S + tg) = -\frac{2t}{m}\sum_{j=1}^m (f_S(x_j) - y_j)\,g(x_j) - 2 t \lambda \langle f_S, g \rangle + t^2 A(g)$$
(¹We do not detail here the properties of such a space and refer the reader to [2, 3] for additional details. ²Once again, we do not give full detail of the definition of appropriate kernel functions and refer the reader to [3].) where $A(g)$, which is not written explicitly here, is the factor of $t^2$. Similarly, we have
$$C^i(f_{S^i}) - C^i(f_{S^i} - tg) = \frac{2t}{m}\sum_{j \ne i} (f_{S^i}(x_j) - y_j)\,g(x_j) + \frac{2t}{m}\,(f_{S^i}(x_i') - y_i')\,g(x_i') + 2 t \lambda \langle f_{S^i}, g \rangle + t^2 A^i(g)$$
By optimality, we have $C(f_S) - C(f_S + tg) \le 0$ and $C^i(f_{S^i}) - C^i(f_{S^i} - tg) \le 0$; thus, summing these inequalities, dividing by $t/m$, and letting $t \to 0$, we get
$$2 \sum_{j \ne i} g(x_j)^2 - 2 (f_S(x_i) - y_i)\,g(x_i) + 2 (f_{S^i}(x_i') - y_i')\,g(x_i') + 2 m \lambda \|g\|_H^2 \le 0$$
which gives
$$m \lambda \|g\|_H^2 \le (f_S(x_i) - y_i)\,g(x_i) - (f_{S^i}(x_i') - y_i')\,g(x_i') \le 2 \sqrt{M} \sqrt{K}\,\|g\|_H$$
using (8). We thus obtain $\|f_{S^i} - f_S\|_H \le 2\sqrt{MK}/(m\lambda)$ and also
$$\forall x, y:\quad \left|(f_S(x) - y)^2 - (f_{S^i}(x) - y)^2\right| \le 2\sqrt{M}\,|f_S(x) - f_{S^i}(x)| \le \frac{4 M K}{m \lambda} \quad (11)$$
We have thus proved that the minimization of $C$ is a $\frac{4MK}{m\lambda}$-stable procedure, which allows us to apply Theorem 2. ◊

4.3 Discussion

These inequalities are both of interest since the ranges where they are tight differ. Indeed, (10) has a poor dependence on $\delta$, which makes it deteriorate when high confidence is sought. However, (9) can give high-confidence bounds but will be looser when $\lambda$ is small. Moreover, results of Evgeniou et al. [6] indicate that the optimal dependence of $\lambda$ on $m$ is obtained for $\lambda m = O(\ln \ln m)$. If we plug this into the above bounds, we notice that (9) does not converge as $m \to \infty$. It may be conjectured that the poor estimation of the variance coming from the martingale method in McDiarmid's inequality is responsible for this effect, but a finer analysis is required to fully understand this phenomenon. One of the interests of these results is to provide a means for choosing the parameter $\lambda$ by minimizing the right-hand side of the inequality.
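The stability argument can be checked numerically. The following sketch (illustrative only, with a Gaussian kernel and made-up data) fits a regularization network via the representer theorem, replaces one training point, and compares the observed change in squared loss with the stability constant $4MK/(m\lambda)$ from the proof. Here $K = \sup_x k(x, x) = 1$ for the Gaussian kernel, and $M = 6$ is a bound on the squared loss chosen for this particular data; both are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_kernel(A, B, sigma=1.0):
    """Gaussian RBF kernel; k(x, x) = 1, so K = sup_x k(x, x) = 1."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit(X, y, lam):
    """Minimize (1/m) sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^2.
    By the representer theorem, f = sum_i alpha_i k(., x_i) with
    alpha = (K_mat + m*lam*I)^{-1} y."""
    m = len(X)
    K_mat = gaussian_kernel(X, X)
    return np.linalg.solve(K_mat + m * lam * np.eye(m), y)

m, lam = 40, 0.5
X = rng.uniform(-1, 1, size=(m, 1))
y = np.sin(3 * X[:, 0])                      # targets with |y| <= 1
alpha = fit(X, y, lam)

# Replace training example 0 by a different one and refit.
X2, y2 = X.copy(), y.copy()
X2[0], y2[0] = rng.uniform(-1, 1, size=1), 0.7
alpha2 = fit(X2, y2, lam)

# Largest change in squared loss over a grid of evaluation points.
X_eval = np.linspace(-1, 1, 100)[:, None]
y_eval = np.sin(3 * X_eval[:, 0])
f1 = gaussian_kernel(X_eval, X) @ alpha
f2 = gaussian_kernel(X_eval, X2) @ alpha2
max_cost_change = np.abs((f1 - y_eval) ** 2 - (f2 - y_eval) ** 2).max()

# Stability constant beta = 4*M*K/(m*lam). Here K = 1, and M = 6 bounds
# the squared loss for this data: ||f_S||_H^2 <= C(0)/lam <= 2, so
# |f| <= sqrt(2) and (|f| + |y|)^2 <= (sqrt(2) + 1)^2 < 6.
beta_bound = 4 * 6 * 1 / (m * lam)
```

The observed `max_cost_change` stays well below `beta_bound`, as the theorem predicts; shrinking `lam` loosens the guarantee accordingly.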
Usually, it is determined with a validation set: some of the data is not used during learning, and $\lambda$ is chosen such that the error of $f_S$ over the validation set is minimized. The drawback of this approach is that it reduces the amount of data available for learning. 5 Conclusion and future work We have presented a new approach to obtaining bounds on the generalization performance of learning algorithms which makes use of specific properties of these algorithms. The bounds we obtain do not depend on the complexity of the hypothesis class but on a measure of how stable the algorithm's output is with respect to changes in the training set. Although this work has focused on regression, we believe that it can be extended to classification, in particular by making the stability requirement less demanding (e.g. stability on average instead of uniform stability). Future work will also aim at finding other algorithms that are stable or can be appropriately modified to exhibit the stability property. Finally, a promising application of this work could be the model selection problem, where one has to tune parameters of the algorithm (e.g. $\lambda$ and the parameters of the kernel for regularization networks). Instead of using cross-validation, one could measure how stability is influenced by the various parameters of interest and plug these measures into Theorem 2 to derive bounds on the generalization error. Acknowledgments We would like to thank G. Lugosi, S. Boucheron and O. Chapelle for interesting discussions on stability and concentration inequalities. Many thanks to A. Smola and to the anonymous reviewers who helped improve the readability. References [1] N. Alon, S. Ben-David, N. Cesa-Bianchi, and D. Haussler. Scale-sensitive dimensions, uniform convergence and learnability. Journal of the ACM, 44(4):615-631, 1997. [2] N. Aronszajn. Theory of reproducing kernels. Trans. Amer. Math. Soc., 68:337-404, 1950. [3] M. Atteia. Hilbertian Kernels and Spline Functions.
Studies in Computational Mathematics 4. North-Holland, 1992. [4] L.P. Devroye and T.J. Wagner. Distribution-free performance bounds for potential function rules. IEEE Trans. on Information Theory, 25(5):202-207, 1979. [5] A. Elisseeff. A study about algorithmic stability and its relation to generalization performances. Technical report, Laboratoire ERIC, Univ. Lyon 2, 2000. [6] T. Evgeniou, M. Pontil, and T. Poggio. A unified framework for regularization networks and support vector machines. Technical Memo AIM-1654, Massachusetts Institute of Technology, Artificial Intelligence Laboratory, December 1999. [7] Y. Freund. Self bounding learning algorithms. In Proceedings of the 11th Annual Conference on Computational Learning Theory (COLT-98), pages 247-258, New York, July 24-26 1998. ACM Press. [8] M. Kearns and D. Ron. Algorithmic stability and sanity-check bounds for leave-one-out cross-validation. Neural Computation, 11(6):1427-1453, 1999. [9] C. McDiarmid. On the method of bounded differences. In Surveys in Combinatorics, pages 148-188. Cambridge University Press, Cambridge, 1989. [10] T. Poggio and F. Girosi. Regularization algorithms for learning that are equivalent to multilayer networks. Science, 247:978-982, 1990. [11] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. A framework for structural risk minimization. In Proc. 9th Annu. Conf. on Comput. Learning Theory, pages 68-76. ACM Press, New York, NY, 1996. [12] J. Shawe-Taylor and R. C. Williamson. Generalization performance of classifiers in terms of observed covering numbers. In Paul Fischer and Hans Ulrich Simon, editors, Proceedings of the 4th European Conference on Computational Learning Theory (Eurocolt-99), volume 1572 of LNAI, pages 274-284, Berlin, March 29-31 1999. Springer.
2000
41
1,841
The Kernel Trick for Distances Bernhard Schölkopf Microsoft Research 1 Guildhall Street Cambridge, UK bs@kyb.tuebingen.mpg.de Abstract A method is described which, like the kernel trick in support vector machines (SVMs), lets us generalize distance-based algorithms to operate in feature spaces, usually nonlinearly related to the input space. This is done by identifying a class of kernels which can be represented as norm-based distances in Hilbert spaces. It turns out that common kernel algorithms, such as SVMs and kernel PCA, are actually distance-based algorithms and can be run with that class of kernels, too. As well as providing a useful new insight into how these algorithms work, the present work can form the basis for conceiving new algorithms. 1 Introduction One of the crucial ingredients of SVMs is the so-called kernel trick for the computation of dot products in high-dimensional feature spaces using simple functions defined on pairs of input patterns. This trick allows the formulation of nonlinear variants of any algorithm that can be cast in terms of dot products, SVMs being but the most prominent example [13, 8]. Although the mathematical result underlying the kernel trick is almost a century old [6], it was only much later [1, 3, 13] that it was made fruitful for the machine learning community. Kernel methods have since led to interesting generalizations of learning algorithms and to successful real-world applications. The present paper attempts to extend the utility of the kernel trick by looking at the problem of which kernels can be used to compute distances in feature spaces. Again, the underlying mathematical results, mainly due to Schoenberg, have been known for a while [7]; some of them have already attracted interest in the kernel methods community in various contexts [11, 5, 15]. Let us consider training data $(x_1, y_1), \ldots, (x_m, y_m) \in \mathcal{X} \times \mathcal{Y}$.
Here, $\mathcal{Y}$ is the set of possible outputs (e.g., in pattern recognition, $\{\pm 1\}$), and $\mathcal{X}$ is some nonempty set (the domain) that the patterns are taken from. We are interested in predicting the outputs $y$ for previously unseen patterns $x$. This is only possible if we have some measure that tells us how $(x, y)$ is related to the training examples. For many problems, the following approach works: informally, we want similar inputs to lead to similar outputs. To formalize this, we have to state what we mean by similar. On the outputs, similarity is usually measured in terms of a loss function. For instance, in the case of pattern recognition, the situation is simple: two outputs can either be identical or different. On the inputs, the notion of similarity is more complex. It hinges on a representation of the patterns and a suitable similarity measure operating on that representation. One particularly simple yet surprisingly useful notion of (dis)similarity, the one we will use in this paper, derives from embedding the data into a Euclidean space and utilizing geometrical concepts. For instance, in SVMs, similarity is measured by dot products (i.e. angles and lengths) in some high-dimensional feature space $F$. Formally, the patterns are first mapped into $F$ using $\phi : \mathcal{X} \to F$, $x \mapsto \phi(x)$, and then compared using a dot product $\langle \phi(x), \phi(x') \rangle$. To avoid working in the potentially high-dimensional space $F$, one tries to pick a feature space in which the dot product can be evaluated directly using a nonlinear function in input space, i.e. by means of the kernel trick
$$k(x, x') = \langle \phi(x), \phi(x') \rangle. \quad (1)$$
Often, one simply chooses a kernel $k$ with the property that there exists some $\phi$ such that the above holds true, without necessarily worrying about the actual form of $\phi$; already the existence of the linear space $F$ facilitates a number of algorithmic and theoretical issues. It is well established that (1) works out for Mercer kernels [3, 13], or, equivalently, positive definite kernels [2, 14].
Here and below, indices $i$ and $j$ by default run over $1, \ldots, m$.

Definition 1 (Positive definite kernel) A symmetric function $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ which for all $m \in \mathbb{N}$, $x_i \in \mathcal{X}$ gives rise to a positive definite Gram matrix, i.e. for which for all $c_i \in \mathbb{R}$ we have
$$\sum_{i,j=1}^m c_i c_j K_{ij} \ge 0, \quad \text{where } K_{ij} := k(x_i, x_j), \quad (2)$$
is called a positive definite (pd) kernel.

One particularly intuitive way to construct a feature map satisfying (1) for such a kernel $k$ proceeds, in a nutshell, as follows (for details, see [2]): 1. Define a feature map
$$\phi : \mathcal{X} \to \mathbb{R}^{\mathcal{X}}, \quad x \mapsto k(\cdot, x). \quad (3)$$
Here, $\mathbb{R}^{\mathcal{X}}$ denotes the space of functions mapping $\mathcal{X}$ into $\mathbb{R}$. 2. Turn it into a linear space by forming linear combinations
$$f(\cdot) = \sum_{i=1}^m \alpha_i\,k(\cdot, x_i), \qquad g(\cdot) = \sum_{j=1}^{m'} \beta_j\,k(\cdot, x_j'). \quad (4)$$
3. Endow it with a dot product $\langle f, g \rangle := \sum_{i=1}^m \sum_{j=1}^{m'} \alpha_i \beta_j\,k(x_i, x_j')$, and turn it into a Hilbert space $H_k$ by completing it in the corresponding norm. Note that in particular, by definition of the dot product, $\langle k(\cdot, x), k(\cdot, x') \rangle = k(x, x')$; hence, in view of (3), we have $k(x, x') = \langle \phi(x), \phi(x') \rangle$, the kernel trick. This shows that pd kernels can be thought of as (nonlinear) generalizations of one of the simplest similarity measures, the canonical dot product $\langle x, x' \rangle$, $x, x' \in \mathbb{R}^N$. The question arises as to whether there also exist generalizations of the simplest dissimilarity measure, the distance $\|x - x'\|^2$. Clearly, the distance $\|\phi(x) - \phi(x')\|^2$ in the feature space associated with a pd kernel $k$ can be computed using the kernel trick (1) as $k(x, x) + k(x', x') - 2 k(x, x')$. Positive definite kernels are, however, not the full story: there exists a larger class of kernels that can be used as generalized distances, and the following section will describe why.

2 Kernels as Generalized Distance Measures

Let us start by considering how a dot product and the corresponding distance measure are affected by a translation of the data, $x \mapsto x - x_0$. Clearly, $\|x - x'\|^2$ is translation invariant while $\langle x, x' \rangle$ is not.
A short calculation shows that the effect of the translation can be expressed in terms of $\|\cdot - \cdot\|^2$ as
$$\langle (x - x_0), (x' - x_0) \rangle = \frac{1}{2}\left(-\|x - x'\|^2 + \|x - x_0\|^2 + \|x_0 - x'\|^2\right). \quad (5)$$
Note that this is, just like $\langle x, x' \rangle$, still a pd kernel: $\sum_{i,j} c_i c_j \langle (x_i - x_0), (x_j - x_0) \rangle = \left\|\sum_i c_i (x_i - x_0)\right\|^2 \ge 0$. For any choice of $x_0 \in \mathcal{X}$, we thus get a similarity measure (5) associated with the dissimilarity measure $\|x - x'\|$. This naturally leads to the question whether (5) might suggest a connection that holds true also in more general cases: what kind of nonlinear dissimilarity measure do we have to substitute for $\|\cdot - \cdot\|^2$ on the right-hand side of (5) to ensure that the left-hand side becomes positive definite? The answer is given by a known result. To state it, we first need to define the appropriate class of kernels.

Definition 2 (Conditionally positive definite kernel) A symmetric function $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ which satisfies (2) for all $m \in \mathbb{N}$, $x_i \in \mathcal{X}$, and for all $c_i \in \mathbb{R}$ with
$$\sum_{i=1}^m c_i = 0, \quad (6)$$
is called a conditionally positive definite (cpd) kernel.

Proposition 3 (Connection pd and cpd [2]) Let $x_0 \in \mathcal{X}$, and let $k$ be a symmetric kernel on $\mathcal{X} \times \mathcal{X}$. Then
$$\tilde{k}(x, x') := \frac{1}{2}\left(k(x, x') - k(x, x_0) - k(x_0, x') + k(x_0, x_0)\right) \quad (7)$$
is positive definite if and only if $k$ is conditionally positive definite. The proof follows directly from the definitions and can be found in [2].

This result does generalize (5): the negative squared distance kernel is indeed cpd, for $\sum_i c_i = 0$ implies
$$\sum_{i,j} c_i c_j\left(-\|x_i - x_j\|^2\right) = -\sum_i c_i \sum_j c_j \|x_j\|^2 - \sum_j c_j \sum_i c_i \|x_i\|^2 + 2 \sum_{i,j} c_i c_j \langle x_i, x_j \rangle = 2 \left\|\sum_i c_i x_i\right\|^2 \ge 0.$$
In fact, this implies that all kernels of the form
$$k(x, x') = -\|x - x'\|^\beta, \quad 0 < \beta \le 2, \quad (8)$$
are cpd (they are not pd), by application of the following result:

Proposition 4 ([2]) If $k : \mathcal{X} \times \mathcal{X} \to (-\infty, 0]$ is cpd, then so are $-(-k)^\alpha$ $(0 < \alpha < 1)$ and $-\log(1 - k)$.
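These definitions are easy to probe numerically (an illustration with made-up data, not from the paper): on a finite sample, the Gram matrix of the pd Gaussian kernel has no negative eigenvalues, whereas the cpd kernel $-\|x - x'\|^2$ does; restricted to coefficient vectors with $\sum_i c_i = 0$, however, its quadratic form is non-negative, exactly as Definition 2 requires.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))                          # 30 sample points in R^2
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances

K_gauss = np.exp(-d2 / 2)    # Gaussian kernel: positive definite
K_dist = -d2                 # negative squared distance: cpd but not pd

def min_eig(K):
    return np.linalg.eigvalsh(K).min()

# pd means sum_ij c_i c_j K_ij >= 0 for all c, i.e. no negative eigenvalues
# (up to floating-point error).
gauss_ok = min_eig(K_gauss) > -1e-10

# -||x - x'||^2 has negative eigenvalues (its trace is zero) ...
dist_pd = min_eig(K_dist) > -1e-10

# ... but restricted to coefficient vectors with sum(c) = 0 the quadratic
# form is non-negative: project onto the orthogonal complement of the
# all-ones vector e before testing.
e = np.ones(30) / np.sqrt(30)
P = np.eye(30) - np.outer(e, e)
dist_cpd = min_eig(P @ K_dist @ P) > -1e-10
```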
To state another class of cpd kernels that are not pd, note first that, as trivial consequences of Definition 2, (i) sums of cpd kernels are cpd, and (ii) any constant $b \in \mathbb{R}$ is cpd. Therefore, any kernel of the form $k + b$, where $k$ is cpd and $b \in \mathbb{R}$, is also cpd. In particular, since pd kernels are cpd, we can take any pd kernel, offset it by $b$, and it will still be at least cpd. For further examples of cpd kernels, cf. [2, 14, 4, 11]. We now return to the main flow of the argument. Proposition 3 allows us to construct the feature map for $k$ from that of the pd kernel $\tilde{k}$. To this end, fix $x_0 \in \mathcal{X}$ and define $\tilde{k}$ according to (7). Due to Proposition 3, $\tilde{k}$ is positive definite. Therefore, we may employ the Hilbert space representation $\phi : \mathcal{X} \to H$ of $\tilde{k}$ (cf. (1)), satisfying $\langle \phi(x), \phi(x') \rangle = \tilde{k}(x, x')$, hence
$$\|\phi(x) - \phi(x')\|^2 = \langle \phi(x) - \phi(x'), \phi(x) - \phi(x') \rangle = \tilde{k}(x, x) + \tilde{k}(x', x') - 2\tilde{k}(x, x'). \quad (9)$$
Substituting (7) yields
$$\|\phi(x) - \phi(x')\|^2 = -k(x, x') + \frac{1}{2}\left(k(x, x) + k(x', x')\right). \quad (10)$$
We have thus proven the following result.

Proposition 5 (Hilbert space representation of cpd kernels [7, 2]) Let $k$ be a real-valued conditionally positive definite kernel on $\mathcal{X}$, satisfying $k(x, x) = 0$ for all $x \in \mathcal{X}$. Then there exists a Hilbert space $H$ of real-valued functions on $\mathcal{X}$, and a mapping $\phi : \mathcal{X} \to H$, such that
$$\|\phi(x) - \phi(x')\|^2 = -k(x, x'). \quad (11)$$
If we drop the assumption $k(x, x) = 0$, the Hilbert space representation reads
$$\|\phi(x) - \phi(x')\|^2 = -k(x, x') + \frac{1}{2}\left(k(x, x) + k(x', x')\right). \quad (12)$$
It can be shown that if $k(x, x) = 0$ for all $x \in \mathcal{X}$, then $d(x, x') := \sqrt{-k(x, x')} = \|\phi(x) - \phi(x')\|$ is a semi-metric; it is a metric if $k(x, x') \ne 0$ for $x \ne x'$ [2]. We next show how to represent general symmetric kernels (thus in particular cpd kernels) as symmetric bilinear forms $Q$ in feature spaces. This generalization of the previously known feature space representation for pd kernels comes at a cost: $Q$ will no longer be a dot product. For our purposes, we can get away with this.
The result will give us an intuitive understanding of Proposition 3: we can then write $\tilde{k}$ as $\tilde{k}(x, x') := Q(\phi(x) - \phi(x_0), \phi(x') - \phi(x_0))$. Proposition 3 thus essentially adds an origin in feature space which corresponds to the image $\phi(x_0)$ of one point $x_0$ under the feature map. For translation invariant algorithms, we are always allowed to do this, and thus turn a cpd kernel into a pd one; in this sense, cpd kernels are "as good as" pd kernels.

Proposition 6 (Vector space representation of symmetric kernels) Let $k$ be a real-valued symmetric kernel on $\mathcal{X}$. Then there exists a linear space $H$ of real-valued functions on $\mathcal{X}$, endowed with a symmetric bilinear form $Q(\cdot, \cdot)$, and a mapping $\phi : \mathcal{X} \to H$, such that
$$k(x, x') = Q(\phi(x), \phi(x')). \quad (13)$$
Proof: The proof is a direct modification of the pd case. We use the map (3) and linearly complete the image as in (4). Define $Q(f, g) := \sum_{i=1}^m \sum_{j=1}^{m'} \alpha_i \beta_j\,k(x_i, x_j')$. To see that it is well-defined, although it explicitly contains the expansion coefficients (which need not be unique), note that $Q(f, g) = \sum_{j=1}^{m'} \beta_j f(x_j')$, independent of the $\alpha_i$. Similarly, for $g$, note that $Q(f, g) = \sum_i \alpha_i\,g(x_i)$, hence it is independent of the $\beta_j$. The last two equations also show that $Q$ is bilinear; clearly, it is symmetric. ∎

Note, moreover, that by definition of $Q$, $k$ is a reproducing kernel for the feature space (which is not a Hilbert space): for all functions $f$ of the form (4), we have $Q(k(\cdot, x), f) = f(x)$; in particular, $Q(k(\cdot, x), k(\cdot, x')) = k(x, x')$. Rewriting $\tilde{k}$ as $\tilde{k}(x, x') := Q(\phi(x) - \phi(x_0), \phi(x') - \phi(x_0))$ suggests an immediate generalization of Proposition 3: in practice, we might want to choose other points as origins in feature space, points that do not have a preimage $x_0$ in input space, such as (usually) the mean of a set of points (cf. [12]). This will be useful when considering kernel PCA. Crucial is only that our reference point's behaviour under translations is identical to that of individual points.
This is taken care of by the constraint on the sum of the $c_i$ in the following proposition. The asterisk denotes the complex conjugated transpose.

Proposition 7 (Exercise 2.23, [2]) Let $K$ be a symmetric matrix, $e \in \mathbb{R}^m$ be the vector of all ones, $I$ the $m \times m$ identity matrix, and let $c \in \mathbb{C}^m$ satisfy $e^* c = 1$. Then
$$\tilde{K} := (I - e c^*)\,K\,(I - c e^*) \quad (14)$$
is positive definite if and only if $K$ is conditionally positive definite.

Proof: "⇒": suppose $\tilde{K}$ is positive definite, i.e. for any $a \in \mathbb{C}^m$, we have
$$0 \le a^* \tilde{K} a = a^* K a + a^* e c^* K c e^* a - a^* K c e^* a - a^* e c^* K a. \quad (15)$$
In the case $a^* e = e^* a = 0$ (cf. (6)), the three last terms vanish, i.e. $0 \le a^* K a$, proving that $K$ is conditionally positive definite. "⇐": suppose $K$ is conditionally positive definite. The map $(I - c e^*)$ has its range in the orthogonal complement of $e$, which can be seen by computing, for any $a \in \mathbb{C}^m$,
$$e^* (I - c e^*) a = e^* a - e^* c e^* a = 0. \quad (16)$$
Moreover, satisfying $(I - c e^*)^2 = (I - c e^*)$, the map $(I - c e^*)$ is a projection. Thus $\tilde{K}$ is the restriction of $K$ to the orthogonal complement of $e$, and by definition of conditional positive definiteness, that is precisely the space where $K$ is positive definite. ∎

This result directly implies a corresponding generalization of Proposition 3:

Proposition 8 (Adding a general origin) Let $k$ be a symmetric kernel, $x_1, \ldots, x_m \in \mathcal{X}$, and let $c_i \in \mathbb{C}$ satisfy $\sum_{i=1}^m c_i = 1$. Then
$$\tilde{k}(x, x') := k(x, x') - \sum_i \bar{c}_i\,k(x_i, x') - \sum_j c_j\,k(x, x_j) + \sum_{i,j} \bar{c}_i c_j\,k(x_i, x_j) \quad (17)$$
is positive definite if and only if $k$ is conditionally positive definite.

Proof: Consider a set of points $x_1', \ldots, x_{m'}'$, $m' \in \mathbb{N}$, $x_i' \in \mathcal{X}$, and let $K$ be the $(m + m') \times (m + m')$ Gram matrix based on $x_1, \ldots, x_m, x_1', \ldots, x_{m'}'$. Apply Proposition 7 using $c_{m+1} = \cdots = c_{m+m'} = 0$. ∎

Example 9 (SVMs and kernel PCA) (i) The above results show that conditionally positive definite kernels are a natural choice whenever we are dealing with a translation invariant problem, such as the SVM: maximization of the margin of separation between two classes of data is independent of the position of the origin.
Seen in this light, it is not surprising that the structure of the dual optimization problem (cf. [13]) allows cpd kernels: as noticed in [11, 10], the constraint $\sum_{i=1}^m \alpha_i y_i = 0$ projects out the same subspace as (6) in the definition of conditionally positive definite kernels. (ii) Another example of a kernel algorithm that works with conditionally positive definite kernels is kernel PCA [9], where the data is centered, thus removing the dependence on the origin in feature space. Formally, this follows from Proposition 7 for $c_i = 1/m$.

Example 10 (Parzen windows) One of the simplest distance-based classification algorithms conceivable proceeds as follows. Given $m_+$ points labelled $+1$, $m_-$ points labelled $-1$, and a test point $\phi(x)$, we compute the mean squared distances between the latter and the two classes, and assign it to the one where this mean is smaller:
$$y = \operatorname{sgn}\left(\frac{1}{m_-}\sum_{y_i = -1} \|\phi(x) - \phi(x_i)\|^2 - \frac{1}{m_+}\sum_{y_i = +1} \|\phi(x) - \phi(x_i)\|^2\right). \quad (18)$$
We use the distance kernel trick (Proposition 5) to express the decision function as a kernel expansion in input space: a short calculation shows that
$$y = \operatorname{sgn}\left(\frac{1}{m_+}\sum_{y_i = +1} k(x, x_i) - \frac{1}{m_-}\sum_{y_i = -1} k(x, x_i) + c\right), \quad (19)$$
with the constant offset $c = \frac{1}{2 m_-}\sum_{y_i = -1} k(x_i, x_i) - \frac{1}{2 m_+}\sum_{y_i = +1} k(x_i, x_i)$. Note that for some cpd kernels, such as (8), $k(x_i, x_i)$ is always 0, thus $c = 0$. For others, such as the commonly used Gaussian kernel, $k(x_i, x_i)$ is a nonzero constant, in which case $c$ also vanishes. For normalized Gaussians and other kernels that are valid density models, the resulting decision boundary can be interpreted as the Bayes decision based on two Parzen windows density estimates of the classes; for general cpd kernels, the analogy is a merely formal one.

Example 11 (Toy experiment) In Fig. 1, we illustrate the finding that kernel PCA can be carried out using cpd kernels. We use the kernel (8). Due to the centering that is built into kernel PCA (cf. Example 9 (ii), and (5)), the case $\beta = 2$ is actually equivalent to linear PCA. As we decrease $\beta$, we obtain increasingly nonlinear feature extractors.
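The Parzen-windows rule (19) takes only a few lines; the sketch below (with made-up 2D blobs, not data from the paper) uses the cpd kernel (8) with $\beta = 1$, for which $k(x_i, x_i) = 0$ and hence the offset $c$ vanishes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two made-up Gaussian blobs as the training classes.
X_pos = rng.normal(loc=+2.0, size=(40, 2))
X_neg = rng.normal(loc=-2.0, size=(40, 2))

def k(a, b, beta=1.0):
    """cpd kernel (8): k(x, x') = -||x - x'||^beta."""
    return -np.linalg.norm(a - b, axis=-1) ** beta

def classify(x):
    """Decision function (19); c = 0 here since k(x_i, x_i) = 0."""
    f = k(x, X_pos).mean() - k(x, X_neg).mean()
    return int(np.sign(f))

# Assigning a test point to the class with smaller mean squared
# feature-space distance, as in (18), reduces to the sign of f above.
pred_pos = classify(np.array([2.5, 2.0]))
pred_neg = classify(np.array([-2.5, -1.5]))
```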
Note, moreover, that as the kernel parameter $\beta$ gets smaller, less weight is put on large distances, and we get more localized feature extractors (in the sense that the regions where they have large gradients, i.e. dense sets of contour lines in the plot, get more localized). Figure 1: Kernel PCA on a toy dataset using the cpd kernel (8); contour plots of the feature extractors corresponding to projections onto the first two principal axes in feature space. From left to right: $\beta = 2, 1.5, 1, 0.5$. Notice how smaller values of $\beta$ make the feature extractors increasingly nonlinear, which allows the identification of the cluster structure. 3 Conclusion We have described a kernel trick for distances in feature spaces. It can be used to generalize all distance-based algorithms to a feature space setting by substituting a suitable kernel function for the squared distance. The class of kernels that can be used is larger than those commonly used in kernel methods (known as positive definite kernels). We have argued that this reflects the translation invariance of distance-based algorithms, as opposed to genuinely dot-product-based algorithms. SVMs and kernel PCA are translation invariant in feature space, hence they are really both distance-based rather than dot-product-based. We thus argued that they can both use conditionally positive definite kernels. In the case of the SVM, this drops out of the optimization problem automatically [11]; in the case of kernel PCA, it corresponds to the introduction of a reference point in feature space. The contribution of the present work is that it identifies translation invariance as the underlying reason, thus enabling us to use cpd kernels in a much larger class of kernel algorithms, and that it draws the learning community's attention to the kernel trick for distances. Acknowledgments. Part of the work was done while the author was visiting the Australian National University.
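Example 11 can be reproduced in outline (a sketch under the stated centering, not the authors' code): kernel PCA with the cpd kernel (8), centered as in Example 9 (ii) with $c_i = 1/m$. The script also checks numerically that for $\beta = 2$ the centered kernel matrix coincides with the centered Gram matrix of the data, which is exactly why that case reduces to linear PCA, and is an instance of the centering of Proposition 7.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 60
X = rng.normal(size=(m, 2))
J = np.eye(m) - np.ones((m, m)) / m     # centering matrix (Prop. 7, c_i = 1/m)
D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # distances

def centered_kpca(beta, n_comp=2):
    """Kernel PCA with the cpd kernel k(x,x') = -||x-x'||^beta; the
    factor 1/2 matches Proposition 3."""
    Kc = J @ (-D ** beta) @ J / 2
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_comp]
    # Projections of the training points onto the leading components.
    return Kc, V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

Kc2, P2 = centered_kpca(beta=2.0)   # equivalent to linear PCA
Kc1, P1 = centered_kpca(beta=1.0)   # a genuinely nonlinear feature extractor

# For beta = 2 the centered kernel matrix equals the centered Gram matrix
# of the data (the classical multidimensional-scaling identity).
Xc = X - X.mean(0)
reduces_to_linear = np.allclose(Kc2, Xc @ Xc.T)
```

Decreasing `beta` below 2 changes the eigenstructure of `Kc` and yields the increasingly nonlinear, more localized feature extractors shown in Figure 1.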
Thanks to Nello Cristianini, Ralf Herbrich, Sebastian Mika, Klaus Müller, John Shawe-Taylor, Alex Smola, Mike Tipping, Chris Watkins, Bob Williamson, Chris Williams and a conscientious anonymous reviewer for valuable input. References [1] M. A. Aizerman, E. M. Braverman, and L. I. Rozonoer. Theoretical foundations of the potential function method in pattern recognition learning. Autom. and Remote Contr., 25:821-837, 1964. [2] C. Berg, J. P. R. Christensen, and P. Ressel. Harmonic Analysis on Semigroups. Springer-Verlag, New York, 1984. [3] B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In D. Haussler, editor, Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, pages 144-152, Pittsburgh, PA, July 1992. ACM Press. [4] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Computation, 7(2):219-269, 1995. [5] D. Haussler. Convolution kernels on discrete structures. Technical Report UCSC-CRL-99-10, Computer Science Department, University of California at Santa Cruz, 1999. [6] J. Mercer. Functions of positive and negative type and their connection with the theory of integral equations. Philos. Trans. Roy. Soc. London, A 209:415-446, 1909. [7] I. J. Schoenberg. Metric spaces and positive definite functions. Trans. Amer. Math. Soc., 44:522-536, 1938. [8] B. Schölkopf, C. J. C. Burges, and A. J. Smola. Advances in Kernel Methods - Support Vector Learning. MIT Press, Cambridge, MA, 1999. [9] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998. [10] A. Smola, T. Frieß, and B. Schölkopf. Semiparametric support vector and linear programming machines. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11, pages 585-591, Cambridge, MA, 1999. MIT Press. [11] A. Smola, B. Schölkopf, and K.-R. Müller.
The connection between regularization operators and support vector kernels. Neural Networks, 11:637-649, 1998. [12] W. S. Torgerson. Theory and Methods of Scaling. Wiley, New York, 1958. [13] V. Vapnik. The Nature of Statistical Learning Theory. Springer, N.Y., 1995. [14] G. Wahba. Spline Models for Observational Data, volume 59 of CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia, 1990. [15] C. Watkins. Personal communication, 2000.
Higher-order Statistical Properties Arising from the Non-stationarity of Natural Signals Lucas Parra, Clay Spence Adaptive Signal and Image Processing, Sarnoff Corporation {lparra, cspence}@sarnoff.com Paul Sajda Department of Biomedical Engineering, Columbia University ps629@columbia.edu Abstract We present evidence that several higher-order statistical properties of natural images and signals can be explained by a stochastic model which simply varies the scale of an otherwise stationary Gaussian process. We discuss two interesting consequences. The first is that a variety of natural signals can be related through a common model of spherically invariant random processes, which have the attractive property that the joint densities can be constructed from the one-dimensional marginals. The second is that in some cases the non-stationarity assumption and only second-order methods can be explicitly exploited to find a linear basis that is equivalent to independent components obtained with higher-order methods. This is demonstrated on spectro-temporal components of speech. 1 Introduction Recently, considerable attention has been paid to understanding and modeling the non-Gaussian or "higher-order" properties of natural signals, particularly images. Several non-Gaussian properties have been identified and studied. For example, marginal densities of features have been shown to have high kurtosis or "heavy tails", indicating a non-Gaussian, sparse representation. Another example is the "bow-tie" shape of conditional distributions of neighboring features, indicating dependence of variances [11]. These non-Gaussian properties have motivated a number of image and signal processing algorithms that attempt to exploit higher-order statistics of the signals, e.g., for blind source separation.
In this paper we show that these previously observed higher-order phenomena are ubiquitous and can be accounted for by a model which simply varies the scale of an otherwise stationary Gaussian process. This enables us to relate a variety of natural signals to one another and to spherically invariant random processes, which are well-known in the signal processing literature [6, 3]. We present analyses of several kinds of data from this perspective, including images, speech, magnetoencephalography (MEG) activity, and socio-economic data (e.g., stock market data). Finally we present the results of experiments with algorithms for finding a linear basis equivalent to independent components that exploit non-stationarity so as to require only 2nd-order statistics. This simplification is possible whenever linearity and non-stationarity of independent sources is guaranteed, such as for the powers of acoustic signals. 2 Scale non-stationarity and high kurtosis Natural signals can be non-stationary in various ways, e.g. varying powers, changing correlation of neighboring samples, or even non-stationary higher moments. We will concentrate on the simplest possible variation and show in the following sections how it can give rise to many higher-order properties observed in natural signals. We assume that at any given instance a signal is specified by a probability density function with zero mean and unknown scale or power. The signal is assumed non-stationary in the sense that its power varies from one time instance to the next.¹ We can think of this as a stochastic process with samples $z(t)$ drawn from a zero mean distribution $p_z(z)$ with samples possibly correlated in time. We observe a scaled version of this process with time-varying scales $s(t) > 0$ sampled from $p_s(s)$,
$$x(t) = s(t)z(t), \qquad (1)$$
The observable process $x(t)$ is distributed according to
$$p_x(x) = \int_0^\infty ds\, p_s(s)\, p_x(x|s) = \int_0^\infty ds\, p_s(s)\, s^{-1} p_z\!\left(\frac{x}{s}\right). \qquad (2)$$
We refer to $p_x(x)$ as the long-term distribution and $p_z(z)$ as the instantaneous distribution. In essence $p_x(x)$ is a mixture distribution with infinitely many kernels $s^{-1}p_z(x/s)$. We would like to relate the sparseness of $p_z(z)$, as measured by the kurtosis, to the sparseness of the observable distribution $p_x(x)$. Kurtosis is defined as the ratio between the fourth and second cumulant of a distribution [7]. As such it measures the length of the distribution's tails, or the sharpness of its mode. For a zero mean random variable $x$ this reduces up to a constant to
$$K[x] = \frac{\langle x^4 \rangle_x}{\langle x^2 \rangle_x^2}, \quad \text{with} \quad \langle f(x) \rangle_x = \int dx\, f(x)\, p_x(x). \qquad (3)$$
In this case we find that the kurtosis of the long-term distribution is always larger than the kurtosis of the instantaneous distribution unless the scale is stationary ([9] and [1] for symmetric $p_z(z)$),
$$K[x] \geq K[z]. \qquad (4)$$
To see this note that the independence of $s$ and $z$ implies $\langle x^n \rangle_x = \langle s^n \rangle_s \langle z^n \rangle_z$, and therefore $K[x] = K[z]\, \langle s^4 \rangle_s / \langle s^2 \rangle_s^2$. From the inequality $\langle (s^2 - c^2)^2 \rangle_s \geq 0$, which holds for any arbitrary constant $c > 0$, it is easy to show that $\langle s^4 \rangle_s \geq \langle s^2 \rangle_s^2$, where the equality holds for $p_s(s) = \delta(s - c)$. Together this leads to inequality (4), which states that for a fixed scale $s(t)$, i.e. when the magnitude of the signal is stationary, the kurtosis will be minimal. Conversely, non-stationary signals, defined as a variable scaling of an otherwise stationary process, will have increased kurtosis. ¹Throughout this paper we will refer to signals that are sampled in time. Note that all the arguments apply equally well to a spatial rather than temporal sampling, that is, images rather than time series. Figure 1: Marginal distributions within 3 standard deviations are shown on a logarithmic scale; left to right: natural image features, speech sound intensities, stock market variation, MEG alpha activity. The measured kurtosis is 4.5, 16.0, 12.9, and 5.3 respectively.
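Inequality (4) is easy to verify numerically. The following sketch (our own; the log-normal scale distribution is just one convenient choice for $p_s(s)$) generates a scale-modulated Gaussian process as in Equation (1) and compares the empirical kurtosis of $z$ and $x$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def kurtosis(x):
    # fourth moment over squared second moment; equals 3 for a Gaussian
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

z = rng.normal(size=n)                    # stationary Gaussian process z(t)
s = rng.lognormal(sigma=0.5, size=n)      # random scale s(t) > 0 (our choice)
x = s * z                                 # scale non-stationary signal, Eq. (1)

print(kurtosis(z))   # close to 3
print(kurtosis(x))   # noticeably larger than 3, as inequality (4) predicts
```

For the log-normal scale, $\langle s^4\rangle_s/\langle s^2\rangle_s^2 = e^{4\sigma^2}$, so with $\sigma = 0.5$ the kurtosis of $x$ should be roughly $3e \approx 8$, well above the Gaussian value of 3.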
On top the empirical histograms are presented and on bottom the model distributions. The speech data has been fit with a Meijer-G function G5g [3]. For the MEG activity, the stock market data and the image features a mixture of zero mean Gaussians was used. Figure 1 shows empirical plots of the marginal distributions for four natural signals: image, speech, stock market, and MEG data. As image feature we used a wavelet component for a 162x162 natural texture image of sand (presented in [4]). Self-inverting wavelets with a down-sampling factor of three were used. The speech signal is a 2.3 s recording of a female speaker sampled at 8 kHz with a noise level less than -25 dB. The signal has been band-limited between 300 Hz and 3.4 kHz, corresponding to telephone speech. The market data are the daily closing values of the NY Stock Exchange composite index from 02/01/1990 to 04/28/2000. We analyzed the variation from the one-day linear prediction value to remove the upwards trend of the last decade. The MEG data is band-passed (10-12 Hz) alpha activity of an independent component of 122 MEG signals. This independent component exhibits alpha de-synchronization for a visuo-motor integration task [10]. One can see that in all four cases the kurtosis is high relative to a Gaussian (K = 3). Our claim is that for natural signals, high kurtosis is a natural result of the scale non-stationarity of the signal. Additional evidence comes from the behavior seen in the conditional histograms of the joint distributions, presented in the next section. 3 Higher-order properties of joint densities It has been observed in images that the conditional histograms of joint densities from neighboring features (neighboring in scale, space, and/or orientation) exhibit variance dependencies that cannot be accounted for by simple second-order models [11]. Figure 2 shows empirical conditional histograms for the four types of natural signals we considered earlier.
One can see that speech and stock-market data exhibit the same variance dependency or "bow-tie" shape exhibited by images. Figure 2: (Top) Empirical conditional histograms and (bottom) model conditional densities derived from the one-dimensional marginals presented in the previous figure, assuming the data is sampled from a SIRP. Good correspondence validates the SIRP assumption, which is equivalent to our non-stationary scale model for slowly varying scales. The model of Equation 1 can easily account for this observation if we assume slowly changing scales $s(t)$. A possible explanation is that neighboring samples or features exhibit a common scale. If two zero mean stochastic variables are both scaled with the same factors, their magnitude and variance will increase together. That is, as the magnitude of one variable increases so will the magnitude and the variance of the other variable. This results in a broadening of the histogram of one variable as one increases the value of the conditioning variable, resulting in a "bow-tie" shaped conditional density. 4 Relationship to spherically invariant random processes A closely related class of signals to those in Equation 1 is the so-called Spherically Invariant Random Process (SIRP). If the signals are short-time Gaussian and the powers vary slowly, the class of signals described are approximately SIRPs. Despite the restriction to Gaussian distributed $z$, SIRPs have been shown to be a good model for a range of stochastic processes with very different higher-order properties, depending on the scale distribution $p_s(s)$. They have been used in a variety of signal processing applications [6]. Band-limited speech, in particular, has been shown to be well described by SIRPs [3]. If $z$ is multidimensional, such as a window of samples in a time series or a multi-dimensional feature vector, one talks about Spherically Invariant Random Vectors (SIRVs).
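The common-scale explanation of the "bow-tie" can be demonstrated directly. In this sketch (our own construction) two otherwise independent Gaussian variables share a single random scale; the conditional spread of one then grows with the magnitude of the other, exactly the variance dependency described above.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
s = rng.lognormal(sigma=0.5, size=n)   # one scale shared by both variables
x1 = s * rng.normal(size=n)
x2 = s * rng.normal(size=n)

# conditional spread of x2 grows with |x1|: the "bow-tie" of Figure 2
small = np.abs(x1) < 0.5
large = np.abs(x1) > 2.0
print(np.std(x2[small]), np.std(x2[large]))   # the second value is larger
```

Without the shared scale (i.e. with $s$ fixed), the two conditional spreads would coincide and the bow-tie would disappear.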
Natural images have been modeled by what is in essence closely related to SIRVs: an infinite mixture of zero mean Gaussian features [11]. Similar models have also been used for financial time series [2]. The fundamental property of SIRPs is that the joint distribution of a SIRP is entirely defined by a univariate characteristic function $C_x(u)$ and the covariance $\Sigma$ of neighboring samples [6]. They are directly related to our scale-non-stationarity model through a theorem by Kingman and Yao which states that any SIRP is equivalent to a zero mean Gaussian process $z(t)$ with an independent stochastic scale $s$. Furthermore the univariate characteristic function $C_x(u)$ specifies $p_s(s)$ and the 1D marginal $p_x(x)$ and vice versa [6]. From the characteristic function $C_x(u)$ and the covariance $\Sigma$ one can also construct all higher-dimensional joint densities. This leads to the following relation between the marginal densities of various orders [3],
$$p_n(x) = \pi^{-n/2} f_n(x^T \Sigma^{-1} x), \quad \text{with } x \in \mathbb{R}^n, \text{ and } \Sigma = \langle xx^T \rangle, \qquad (5)$$
$$f_{n+2}(s) = -\frac{d}{ds} f_n(s), \qquad f_m(s) = \pi^{-1/2} \int_{-\infty}^{\infty} f_{m+1}(s + y^2)\, dy. \qquad (6)$$
In particular these relations allow us to compute the joint density $p_2(x(t), x(t+1))$ from an empirically estimated marginal density $p_1(x(t))$ and the covariance of $x(t)$ and $x(t+1)$. Comparing the resulting 2D joint density to the observed joint density allows us to verify the assumption that the data is sampled from a SIRP. In so doing we can more firmly assert that the observed two-dimensional joint histograms can in fact be explained as a Gaussian process with a non-stationary scale. If we use zero mean Gaussian mixtures, $p_1(x) = \sum_{i=1}^{M} m_i \exp(-x^2/\sigma_i^2)$, as the 1D model distribution, the resulting 2D joint distribution is simply $p_n(x) = \sum_{i=1}^{M} m_i \exp(-x^T \Sigma^{-1} x / \sigma_i^2)$. If the model density is given by a Meijer-G function, as suggested in [3], with $p_1(x) \propto G(\lambda^2 x^2 \mid \lambda - 0.5, \lambda - 0.5)$, then the 2D joint is $p_2(x) \propto G(\lambda^2 x^T \Sigma^{-1} x \mid -0.5;\, 0, \lambda, \lambda)$.
In both cases it is assumed that the data is normalized to unit variance. Brehm has used this approach to demonstrate that band-limited speech is well described by a SIRP [3]. In addition, we show here that the same is true for the image features and stock market data presented above. The model conditional densities shown in Figure 2 correspond well with the empirical conditional histograms. In particular they exhibit the characteristic bow-tie structure. We emphasize that these model 2D joint densities have been obtained only from the 1D marginals of Figure 1 and the covariance of neighboring samples. The deviations of the observed and model 2D joint distributions are likely due to variability of the covariance itself; that is, not only does the overall scale or power vary with time, but the components of the covariance matrix vary independently of each other. For example, in speech the covariance of neighboring samples is well known to change considerably over time. Nevertheless, the surprising result is that a simple scale non-stationarity model can reproduce the higher-order statistical properties in a variety of natural signals. 5 Spectro-temporal linear basis for speech As an example of the utility of this non-stationarity assumption, we analyze the statistical properties of the powers of a single source, in particular for speech signals. Motivated by the auditory spectro-temporal receptive fields reported in [5] and work on receptive fields and independent components, we are interested in finding a linear basis of independent components in a spectro-temporal window of speech signals. In [9, 8] we show that one can use second-order statistics to uniquely recover sources from a mixture provided that the mix is linear and the sources are non-stationary. One can do so by finding a basis that guarantees uncorrelated signals at multiple time intervals (multiple decorrelation algorithm (MDA)).
Our present model argues that features of natural signals such as the powers in different frequency bands can be assumed non-stationary, while powers of independent signals are known to add linearly. We should be able therefore to identify with second-order methods the same linear components as with independent component algorithms where higher-order statistical assumptions are invoked. Figure 3: Spectro-temporal representation of speech ("We had a barbecue over the weekend at my house."). One pixel in the horizontal direction corresponds to 16 ms. In the vertical direction 21 Bark scale power bands are displayed. The upper diagram shows the log-powers for a 2.5 s segment of the 200 s recording used to compute the different linear bases. The three lower diagrams show three sets of 15 linear basis components for 21x8 spectro-temporal segments of the speech powers. The sets correspond to PCA, MDA, and ICA (JADE) respectively. Note that these are not log-powers, hence the smaller contribution of the high frequencies as compared to the log-power plot on top. We compute the powers in 21 frequency bands on a Bark scale for short consecutive time intervals. We choose to find a basis for a segment of 21 bands and 8 neighboring time slices corresponding to 128 ms of signal between 0 and 4 kHz. We used half-overlapping windows of 256 samples such that for an 8 kHz signal neighboring time slices are 16 ms apart. A set of 7808 such spectro-temporal segments were sampled from 200 s of the same speech data presented previously. Figure 3 shows the results obtained for a subspace of 15 components. One can see that the components obtained with MDA are quite similar to the result of ICA and differ considerably from the principal components. From this we conclude that speech powers can in fact be thought of as a linear combination of non-stationary independent components.
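The core of the multiple-decorrelation idea, recovering a mixing matrix from second-order statistics alone when source powers are non-stationary, can be sketched with just two epochs. This is our own toy reduction of MDA (the full algorithm in [9, 8] uses many time intervals); the mixing matrix and source power profiles are invented for the demonstration.

```python
import numpy as np

def mda_two_epochs(X1, X2):
    """Two-epoch sketch of multiple decorrelation: find an unmixing matrix W
    whose outputs are uncorrelated in both epochs, by solving the generalized
    eigenvalue problem C1 w = lambda C2 w. This identifies the sources only
    if they are independent and non-stationary in power."""
    C1, C2 = np.cov(X1), np.cov(X2)
    _, vecs = np.linalg.eig(np.linalg.solve(C2, C1))
    return vecs.real.T          # rows are unmixing filters (up to scale/order)

rng = np.random.default_rng(3)
# two independent sources whose powers differ between the two epochs
S1 = np.vstack([rng.normal(0, 1.0, 5000), rng.normal(0, 0.2, 5000)])  # epoch 1
S2 = np.vstack([rng.normal(0, 0.3, 5000), rng.normal(0, 1.5, 5000)])  # epoch 2
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # mixing matrix (invented for the demo)
W = mda_two_epochs(A @ S1, A @ S2)
# W @ A should be close to a scaled permutation matrix, i.e. sources recovered
print(W @ A)
```

Note that no fourth-order statistics are used anywhere: the power change between the two epochs is what makes the sources identifiable, which is exactly the point made in the text.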
In general, the point we wish to make is to demonstrate the strength of second-order methods when the assumptions of non-stationarity, independence, and linear superposition are met. 6 Conclusion We have presented evidence that several higher-order statistical properties of natural signals can be explained by a simple scale non-stationarity model. For four types of natural signals, we have shown that a scale non-stationarity model will reproduce the high-kurtosis behavior of the marginal densities. Furthermore, for the case of scale non-stationarity with Gaussian density (SIRP), we have shown that we can reproduce the variance dependency seen in conditional histograms of the joint density directly from the empirical marginal densities. This leads to the conclusion that a scale non-stationarity model (e.g. SIRP) is a good model for these natural signals. We have shown that one can exploit the assumptions of this model to compute a linear basis for natural signals without having to invoke higher-order statistical techniques. Though we do not claim that all higher-order properties or all natural signals can be explained by a scale non-stationarity model, it is remarkable that such a simple model can account for a variety of the higher-order phenomena and for a variety of signal types. References [1] E. M. L. Beale and C. L. Mallows. Scale mixing of symmetric distributions with zero means. Annals of Mathematical Statistics, 30:1145-1151, 1959. [2] T. P. Bollerslev, R. F. Engle, and D. B. Nelson. ARCH models. In R. F. Engle and D. L. McFadden, editors, Handbook of Econometrics, volume IV. North-Holland, 1994. [3] Helmut Brehm and Walter Stammler. Description and generation of spherically invariant speech-model signals. Signal Processing, 12:119-141, 1987. [4] Phil Brodatz. Textures: A Photographic Album for Artists and Designers. Dover, 1999. [5] R. Christopher deCharms and Michael M. Merzenich.
Characteristic neurons in the primary auditory cortex of the awake primate using reverse correlation. In M. Jordan, M. Kearns, and S. Solla, editors, Advances in Neural Information Processing Systems 10, pages 124-130, 1998. [6] Joel Goldman. Detection in the presence of spherically symmetric random vectors. IEEE Transactions on Information Theory, 22(1):52-59, January 1976. [7] M. G. Kendall and A. Stuart. The Advanced Theory of Statistics. Charles Griffin & Company Limited, London, 1969. [8] L. Parra and C. Spence. Convolutive blind source separation of non-stationary sources. IEEE Trans. on Speech and Audio Processing, pages 320-327, May 2000. [9] Lucas Parra and Clay Spence. Separation of non-stationary sources. In Stephen Roberts and Richard Everson, editors, Independent Components Analysis: Principles and Practice. Cambridge University Press, 2001. [10] Akaysha Tang, Barak Pearlmutter, Dan Phung, and Scott Carter. Independent components of magnetoencephalography. Neural Computation, submitted. [11] Martin J. Wainwright and Eero P. Simoncelli. Scale mixtures of Gaussians and the statistics of natural images. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 855-861, Cambridge, MA, 2000. MIT Press.
Probabilistic Semantic Video Indexing Milind R. Naphade, Igor Kozintsev and Thomas Huang Department of Electrical and Computer Engineering University of Illinois at Urbana-Champaign {milind, igor, huang}@ifp.uiuc.edu Abstract We propose a novel probabilistic framework for semantic video indexing. We define probabilistic multimedia objects (multijects) to map low-level media features to high-level semantic labels. A graphical network of such multijects (multinet) captures scene context by discovering intra-frame as well as inter-frame dependency relations between the concepts. The main contribution is a novel application of a factor graph framework to model this network. We model relations between semantic concepts in terms of their co-occurrence as well as the temporal dependencies between these concepts within video shots. Using the sum-product algorithm [1] for approximate or exact inference in these factor graph multinets, we attempt to correct errors made during isolated concept detection by enforcing high-level constraints. This results in a significant improvement in the overall detection performance. 1 Introduction Research in video retrieval has traditionally focused on the paradigm of query-by-example (QBE) using low-level features [2]. Query by keywords/key-phrases (QBK) (preferably semantic) instead of examples has motivated recent research in semantic video indexing. For this, we need models which capture the feature representation corresponding to these keywords. A QBK system can support semantic retrieval for a small set of keywords and also act as the first step in QBE systems to narrow down the search. The difficulty lies in the gap between low-level media features and high-level semantics. Recent attempts to address this include detection of audio-visual events like explosion [3] and semantic visual templates [4].
We propose a statistical pattern recognition approach for training probabilistic multimedia objects (multijects) which map high-level concepts to low-level audio-visual features. We also propose a probabilistic factor graph framework, which models the interaction between concepts within each video frame as well as across the video frames within each video shot. Factor graphs provide an elegant framework to represent the stochastic relationship between concepts, while the sum-product algorithm provides an efficient tool to perform learning and inference in factor graphs. Using exact as well as approximate inference (through loopy probability propagation) we show that there is significant improvement in the detection performance. 2 Proposed Framework To support retrieval based on high-level queries like 'Explosion on a beach', we need models for the event explosion and site beach. User queries might similarly involve sky, helicopter, car-chase etc. Detection of some of these concepts may be possible, while some others may not be directly observable. To support such queries, we proposed a probabilistic multimedia object (multiject) [3] as shown in Figure 1 (a), which has a semantic label and which summarizes a time sequence of features from multiple media. A multiject can belong to any of three categories: objects (car, man, helicopter), sites (outdoor, beach), or events (explosion, man-walking). Intuitively it is clear that the presence of certain multijects suggests a high possibility of detecting certain other multijects. Similarly some multijects are less likely to occur in the presence of others. The detection of sky and water boosts the chances of detecting a beach, and reduces the chances of detecting indoor. It might also be possible to detect some concepts and infer more complex concepts based on their relation with the detected ones.
Detection of human speech in the audio stream and a face in the video stream may lead to the inference of human talking. To integrate all the multijects and model their interaction, we propose a network of multijects which we term a multinet. A conceptual figure of a multinet is shown in Figure 1 (b), with positive (negative) signs indicating positive (negative) interaction. Figure 1: (a) A probabilistic multimedia object (e.g., P(concept = Outdoor | features, other multijects) = 0.7). (b) A conceptual multinet. In Section 5 we present a factor graph multinet implementation. 3 Video segmentation and Feature Extraction We have digitized movies of different genres to create a large database of a few hours of video data. The video clips are segmented into shots using the algorithm in [5]. We then perform spatio-temporal segmentation [2] within each shot to obtain and track regions homogeneous in color and motion separated by strong edges. Large dominant regions are labeled manually. Each region is then processed to extract features characterizing the color (3-channel histogram [3]), texture (statistical properties of the gray-level co-occurrence matrices at 4 different orientations [6]), structure (edge direction histogram [7]), motion (affine motion parameters) and shape (moment invariants [8]). Details about the extracted features can be found in [9]. For sites we use color, texture and structural features (84 elements) and for objects and events we use all features (98 elements).¹ Audio features are extracted as in [10]. For training our multiject and multinet models we use 1800 frames from different video shots and for testing our framework we use 9400 frames. Since consecutive images within a shot are correlated, the video data is subsampled to create the training and testing sets without redundancy.
4 Modeling semantic concepts using Multijects We use an identical approach to model concepts in video and audio (independently and jointly). The following site multijects are used in our experiments: sky, water, forest, rocks and snow. Audio-only multijects (human-speech, music) can be found in [10] and audio-visual multijects (explosion) in [3]. Detection of multijects is performed on every segmented region² within each video frame. Let the feature vector for region $j$ be $X_j$. We model the semantic concept as a binary random variable and define the two hypotheses $H_0$ and $H_1$ as
$$H_0: X_j \sim P_0(X_j), \qquad H_1: X_j \sim P_1(X_j), \qquad (1)$$
where $P_0(X_j)$ and $P_1(X_j)$ denote the class conditional probability density functions conditioned on the null hypothesis (concept absent) and the true hypothesis (concept present). $P_0(X_j)$ and $P_1(X_j)$ are modeled using a mixture of Gaussian components for the site multijects³. For objects and events (in video and audio), hidden Markov models replace the Gaussian mixture models, and the feature vectors for all the frames within a shot constitute the time series modeled. The detection performance for the five site multijects on the test set is given in Table 1.

multiject        Rocks  Sky   Snow  Water  Forest
Detection (%)    77     81.8  81.5  79.4   85.1
False Alarm (%)  24.1   11.9  12.9  15.6   14.9

Table 1: Maximum likelihood binary classification performance for site multijects.

4.1 Frame level semantic features Since multijects are used as semantic feature detectors at the regional level, it is easy to define multiject-based semantic features at the frame level by integrating the region-level classifications. We check each region for each concept individually and obtain probabilities of each concept being present or absent in the region. Imperfect segmentation does not hurt us too much since these soft decisions are modified in the multinet based on high-level constraints.
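The maximum-likelihood test between the two hypotheses can be sketched as follows. This is our own 1-D illustration: the mixture parameters below are invented, whereas the paper's class-conditional models are 84- or 98-dimensional mixtures fit to real features.

```python
import numpy as np

def gmm_pdf(x, weights, means, stds):
    """Density of a 1-D Gaussian mixture (a toy stand-in for the paper's
    class-conditional models P0 and P1)."""
    x = np.asarray(x, dtype=float)[..., None]
    comp = np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    return comp @ weights

# invented class-conditional mixtures: H0 = concept absent, H1 = concept present
p0 = dict(weights=np.array([0.5, 0.5]), means=np.array([-2.0, -1.0]),
          stds=np.array([0.5, 0.7]))
p1 = dict(weights=np.array([0.3, 0.7]), means=np.array([1.0, 2.0]),
          stds=np.array([0.6, 0.5]))

def detect(x):
    # maximum-likelihood decision: declare H1 (concept present) iff P1 > P0
    return gmm_pdf(x, **p1) > gmm_pdf(x, **p0)

print(detect([-1.5, 1.8]))   # H0 for the first feature value, H1 for the second
```

Sweeping a threshold on the likelihood ratio instead of comparing it to 1 is what traces out the ROC curves reported in Section 6.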
Defining a binary random variable $R_{ij}$ ($R_{ij} = 1/0$ if concept $i$ is present/absent in region $j$) and assuming uniform priors on the presence or absence of a concept in any region, we can use Bayes' rule to obtain:
$$P(R_{ij} = 1|X_j) = \frac{P(X_j|R_{ij} = 1)}{P(X_j|R_{ij} = 1) + P(X_j|R_{ij} = 0)} \qquad (2)$$
Defining binary random variables $F_i$, $i \in \{1, \ldots, N\}$ ($N$ is the number of concepts) to take on value 1 if concept $i$ is present in the frame and value 0 otherwise, we use the OR function to combine soft decisions for each concept from all regions to obtain $F_i$. Let $X = \{X_1, \ldots, X_M\}$ ($M$ is the number of regions in a frame); then
$$P(F_i = 0|X) = \prod_{j=1}^{M} P(R_{ij} = 0|X_j) \quad \text{and} \quad P(F_i = 1|X) = 1 - P(F_i = 0|X) \qquad (3)$$
¹Automatic feature selection is not addressed here. ²We thank Prof. Chang and D. Zhong for the algorithm [2]. ³$P_0(X_j)$ used 5 Gaussian components, while $P_1(X_j)$ used 10. The number of mixing components can be fixed experimentally and could be different for optimal performance. In general models for $H_0$ are represented better with more components than those for $H_1$. 5 The multinet as a factor graph To model the interaction between multijects in a multinet, we propose to use a factor graph [1] framework. Factor graphs subsume graphical models like Bayesian nets and Markov random fields and have been successfully applied in the area of channel error correction coding [1] and specifically, iterative decoding. Let $x = \{x_1, x_2, \ldots, x_n\}$ be a vector of variables. A factor graph visualizes the factorization of a global function $f(x)$. Let $f(x)$ factor as
$$f(x) = \prod_{i=1}^{m} f_i(x^{(i)}), \qquad (4)$$
where $x^{(i)}$ is the set of variables of the function $f_i$. A factor graph for $f$ is defined as the bipartite graph with two vertex classes $V_f$ and $V_v$ of sizes $m$ and $n$ respectively, such that the $i$th node in $V_f$ is connected to the $j$th node in $V_v$ iff $f_i$ is a function of $x_j$. Figure 2 (a) shows a simple factor graph representation of $f(x, y, z) = f_1(x, y) f_2(y, z)$ with function nodes $f_1, f_2$ and variable nodes $x, y, z$.
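Before turning to inference in the multinet, the region-to-frame OR fusion of Equation (3) above is a one-liner worth making explicit (a minimal sketch of ours; the region probabilities are invented):

```python
import numpy as np

def frame_presence(region_probs):
    """Noisy-OR fusion of Equation (3): the concept is absent from the frame
    only if it is absent from every region."""
    p_absent = np.prod(1.0 - np.asarray(region_probs))
    return 1.0 - p_absent

# soft region-level detections P(R_ij = 1 | X_j) for one concept, three regions
print(frame_presence([0.1, 0.2, 0.6]))   # 1 - 0.9 * 0.8 * 0.4 = 0.712
```

This is why imperfect segmentation is tolerable: one confident region dominates the product, and the multinet can still revise the resulting soft decision downstream.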
Many signal processing and learning problems are formulated as optimizing a global function $f(x)$ marginalized for a subset of its arguments. The algorithm which allows us to perform this efficiently, though in most cases only approximately, is called the sum-product algorithm. The sum-product algorithm works by computing messages at the nodes using a simple rule and then passing the messages between nodes according to a reasonable schedule. A message from a function node to a variable node is the product of all messages incoming to the function node with the function itself, marginalized for the variable associated with the variable node. A message from a variable node to a function node is simply the product of all messages incoming to the variable node from other functions connected to it. Pearl's probability propagation working on a Bayesian net is equivalent to the sum-product algorithm applied to the corresponding factor graph. If the factor graph is a tree, exact inference is possible using a single forward and backward pass of messages. For all other cases inference is approximate and the message passing is iterative [1], leading to loopy probability propagation. This has a direct bearing on our problem because relations between semantic concepts are complicated and in general contain numerous cycles (e.g., see Figure 1 (b)). 5.1 Relating semantic concepts in a factor graph We now describe a frame-level factor graph to model the probabilistic relations between the various frame-level semantic features $F_i$ obtained using Equation 3. To capture the co-occurrence relationship between the five semantic concepts at the frame level, we define a function node which is connected to the five variable nodes representing the concepts, as shown in Figure 2 (b). This function node represents $P(F_1, F_2, F_3, \ldots, F_N)$. The function nodes below the five variable nodes denote the messages passed by the OR function of Equation 3 ($P(F_i = 1)$, $P(F_i = 0)$).
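The message rules above can be checked on the toy graph $f(x, y, z) = f_1(x, y) f_2(y, z)$ from Figure 2 (a). In this sketch (our own; the function tables are invented) the variables are binary, and because the graph is a tree a single pass of function-to-variable messages already yields the exact marginal for $y$:

```python
import numpy as np

# toy factor graph: f(x, y, z) = f1(x, y) * f2(y, z), binary variables
f1 = np.array([[0.9, 0.1], [0.2, 0.8]])   # f1[x, y], values invented
f2 = np.array([[0.7, 0.3], [0.4, 0.6]])   # f2[y, z], values invented

# function-to-variable messages: marginalize each function for y
msg_f1_to_y = f1.sum(axis=0)              # sum out x
msg_f2_to_y = f2.sum(axis=1)              # sum out z
marg_y = msg_f1_to_y * msg_f2_to_y        # product of incoming messages at y

# brute-force marginalization of the global function, for comparison
brute = np.einsum('xy,yz->y', f1, f2)
print(marg_y, brute)                      # identical: the graph is a tree
```

On the cyclic multinets of Figures 2 (c) and 3, the same local rules are simply iterated until the messages settle, which is the loopy probability propagation used in the experiments.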
These are then propagated to the function node. At the function node the messages are multiplied by the function, which is estimated from the co-occurrence of the concepts in the training set. The function node then sends back messages summarized for each variable. This modifies the soft decisions at the variable nodes according to the high-level relationship between the five concepts.

Figure 2: (a) An example of a simple factor graph. (b) A multinet: accounting for concept dependencies using a single function. (c) Another multinet: replacing the function in (b) by a product of 10 local functions.

In general, the distribution at the function node in Figure 2 (b) is exponential in the number of concepts (N) and the computational cost may increase quickly. To alleviate this we can enforce a factorization of the function in Figure 2 (b) as a product of a set of local functions, where each local function accounts for the co-occurrence of two variables only. This modification to the graph in Figure 2 (b) is shown in Figure 2 (c). Each function in Figure 2 (c) represents the joint probability mass of the two variables that are its arguments (and there are N(N-1)/2 such functions), thus reducing the complexity. The factor graph is no longer a tree and exact inference becomes hard as the number of loops grows. We then apply iterative techniques based on the sum-product algorithm to overcome this.

We can also incorporate temporal dependencies. This can be done by replicating the slice of factor graph in Figure 2 (b) or (c) as many times as the number of frames within a single video shot and by introducing a first-order Markov chain for each concept. Figures 3 (a) and (b) show two consecutive time slices and extend the models in Figures 2 (b) and (c) respectively.

Figure 3: (a) Replicating the multinet in Figure 2 (b) for each frame in a shot and introducing temporal dependencies between the value of each concept in consecutive frames. (b) Repeating this for Figure 2 (c).

The horizontal links in Figures 3 (a), (b) connect the variable node for each concept in a time slice to the corresponding variable node in the next time slice through a function modeling the transition probability. This framework now becomes a dynamic probabilistic network. For inference, messages are iteratively passed locally within each slice. This is followed by message passing across the time slices in the forward direction and then in the backward direction. Accounting for temporal dependencies thus leads to temporal smoothing of the soft decisions within each shot.

6 Results

We compare detection performance of the multijects with and without accounting for the concept dependencies and temporal dependencies. The reference system performs multiject detection by thresholding soft decisions (i.e., P(F_i | X)) at the frame level. The proposed schemes are then evaluated by thresholding the soft decisions obtained after message passing using the structures in Figures 2 (b), (c) (conceptual dependencies) and Figures 3 (a), (b) (conceptual and temporal dependencies). We use receiver operating characteristic (ROC) curves, which plot the probability of detection against the probability of false alarm for different values of a parameter (the threshold in our case).
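The ROC sweep just described amounts to varying the detection threshold over the soft decisions and recording one (Pf, Pd) point per threshold; a small sketch (the variable names and toy scores are ours):

```python
def roc_points(pos_scores, neg_scores, thresholds):
    # For each threshold: probability of detection (Pd) over frames that
    # contain the concept, probability of false alarm (Pf) over frames
    # that do not.
    points = []
    for th in thresholds:
        pd = sum(s >= th for s in pos_scores) / len(pos_scores)
        pf = sum(s >= th for s in neg_scores) / len(neg_scores)
        points.append((pf, pd))
    return points

# Toy soft decisions P(F_i = 1 | X) for positive and negative frames.
curve = roc_points([0.9, 0.8, 0.4], [0.3, 0.1], [0.2, 0.5, 0.85])
```

Sweeping the threshold from low to high traces the curve from the (1, 1) corner toward (0, 0); a better detector bows the curve toward the top-left.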
Figure 4 shows the ROC curves for the overall performance over the test set across all five multijects. The three curves in Figure 4 (a) correspond to the performance using isolated frame-level classification, the factor graph in Figure 2 (b), and the factor graph in Figure 2 (c) with ten iterations of loopy propagation. The curves in Figure 4 (b) correspond to isolated detection followed by temporal smoothing, the dynamic multinet in Figure 3 (a), and the one in Figure 3 (b), respectively.

Figure 4: ROC curves (probability of detection versus probability of false alarm) for overall performance using isolated detection and two factor graph representations. (a) With static multinets. (b) With dynamic multinets.

From Figure 4 we observe that there is significant improvement in detection performance when the multinet is used to model the dependencies between concepts. This improvement is especially stark for low Pf, where the detection rate improves by more than 22% for a threshold corresponding to Pf = 0.1. Interestingly, detection based on the factorized functions (Figure 2 (c)) performs better than detection based on the unfactorized function. This suggests that the factorized function is a better representative and can be estimated more reliably because fewer parameters are involved. Also, by using the models in Figure 3, which account for temporal dependencies across video frames, and by performing smoothing with the forward-backward algorithm, we see further improvement in detection performance in Figure 4 (b). The detection rate corresponding to Pf = 0.1 is 68% for the static multinet (Figure 2 (c)) and 72% for its dynamic counterpart (Figure 3 (b)). Comparison of ROC curves with and without temporal smoothing (not shown here due to lack of space) reveals that temporal smoothing results in better detection irrespective of the threshold or configuration.
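The per-concept temporal smoothing evaluated above can be mimicked with a standard forward-backward pass over the per-frame soft decisions; a hedged sketch (the 0.9 self-transition probability is an assumed value, not a number from the paper):

```python
def smooth(p_obs, p_stay=0.9):
    # p_obs[t]: per-frame soft decision P(F = 1 | frame t), used as evidence.
    # A uniform initial state prior is implicit in the first forward message.
    T = [[p_stay, 1.0 - p_stay], [1.0 - p_stay, p_stay]]  # transition matrix
    n = len(p_obs)
    ev = [[1.0 - p, p] for p in p_obs]
    # Forward pass (unnormalized filtering).
    fwd = [ev[0][:]]
    for t in range(1, n):
        fwd.append([ev[t][j] * sum(fwd[-1][i] * T[i][j] for i in (0, 1))
                    for j in (0, 1)])
    # Backward pass.
    bwd = [[1.0, 1.0] for _ in range(n)]
    for t in range(n - 2, -1, -1):
        bwd[t] = [sum(T[i][j] * ev[t + 1][j] * bwd[t + 1][j] for j in (0, 1))
                  for i in (0, 1)]
    # Smoothed posteriors P(F = 1 | all frames).
    out = []
    for t in range(n):
        u0, u1 = fwd[t][0] * bwd[t][0], fwd[t][1] * bwd[t][1]
        out.append(u1 / (u0 + u1))
    return out

# An isolated dip inside a confident run is pulled up by its neighbours.
smoothed = smooth([0.9, 0.9, 0.2, 0.9, 0.9])
```

This is the same qualitative effect reported above: the Markov chain regularizes frame-level decisions within a shot.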
7 Conclusions and Future Research

We propose a probabilistic framework for detecting semantic concepts using multijects and multinets. We present implementations of static and dynamic multinets using factor graphs. We show that there is significant improvement in detection performance by accounting for the interaction between semantic concepts and the temporal dependency amongst the concepts. The multinet architecture imposes no restrictions on the classifiers used in the multijects, and we can improve performance by using better multiject models. Our framework can easily be expanded to integrate multiple modalities, if they have not been integrated in the multijects, to account for the loose coupling between audio and visual streams in movies. It can also support inference of concepts that are observed not through media features but through their relation to those concepts which are observed in media features.

References

[1] F. Kschischang, B. Frey, and H.-A. Loeliger, "Factor graphs and the sum-product algorithm," submitted to IEEE Trans. Inform. Theory, July 1998.
[2] D. Zhong and S. F. Chang, "Spatio-temporal video search using the object-based video representation," in Proceedings of the IEEE International Conference on Image Processing, vol. 2, Santa Barbara, CA, Oct. 1997, pp. 21-24.
[3] M. Naphade, T. Kristjansson, B. Frey, and T. S. Huang, "Probabilistic multimedia objects (multijects): A novel approach to indexing and retrieval in multimedia systems," in Proceedings of the Fifth IEEE International Conference on Image Processing, vol. 3, Chicago, IL, Oct. 1998, pp. 536-540.
[4] S. F. Chang, W. Chen, and H. Sundaram, "Semantic visual templates - linking features to semantics," in Proceedings of the Fifth IEEE International Conference on Image Processing, vol. 3, Chicago, IL, Oct. 1998, pp. 531-535.
[5] M. Naphade, R. Mehrotra, A. M. Ferman, J. Warnick, T. S. Huang, and A. M. Tekalp, "A high performance shot boundary detection algorithm using multiple cues," in Proceedings of the Fifth IEEE International Conference on Image Processing, vol. 2, Chicago, IL, Oct. 1998, pp. 884-887.
[6] R. Jain, R. Kasturi, and B. Schunck, Machine Vision. MIT Press and McGraw-Hill, 1995.
[7] A. K. Jain and A. Vailaya, "Shape-based retrieval: A case study with trademark image databases," Pattern Recognition, vol. 31, no. 9, pp. 1369-1390, 1998.
[8] S. Dudani, K. Breeding, and R. McGhee, "Aircraft identification by moment invariants," IEEE Trans. on Computers, vol. C-26, pp. 39-45, Jan. 1977.
[9] M. R. Naphade and T. S. Huang, "A probabilistic framework for semantic indexing and retrieval in video," to appear in IEEE International Conference on Multimedia and Expo, New York, NY, July 2000. http://www.ifp.uiuc.edu/~milind/cpapers.html
[10] M. R. Naphade and T. S. Huang, "Stochastic modeling of soundtrack for efficient segmentation and indexing of video," in SPIE IS&T Storage and Retrieval for Multimedia Databases, vol. 3972, Jan. 2000, pp. 168-176.
2000
Accumulator networks: Suitors of local probability propagation Brendan J. Frey and Anitha Kannan Intelligent Algorithms Lab, University of Toronto, www.cs.toronto.edu/~frey

Abstract

One way to approximate inference in richly-connected graphical models is to apply the sum-product algorithm (a.k.a. probability propagation algorithm), while ignoring the fact that the graph has cycles. The sum-product algorithm can be directly applied in Gaussian networks and in graphs for coding, but for many conditional probability functions - including the sigmoid function - direct application of the sum-product algorithm is not possible. We introduce "accumulator networks" that have low local complexity (but exponential global complexity) so the sum-product algorithm can be directly applied. In an accumulator network, the probability of a child given its parents is computed by accumulating the inputs from the parents in a Markov chain or, more generally, a tree. After giving expressions for inference and learning in accumulator networks, we give results on the "bars problem" and on the problem of extracting translated, overlapping faces from an image.

1 Introduction

Graphical probability models with hidden variables are capable of representing complex dependencies between variables, filling in missing data and making Bayes-optimal decisions using probabilistic inferences (Hinton and Sejnowski 1986; Pearl 1988; Neal 1992). Large, richly-connected networks with many cycles can potentially be used to model complex sources of data, such as audio signals, images and video. However, when the number of cycles in the network is large (more precisely, when the cut set size is exponential), exact inference becomes intractable. Also, to learn a probability model with hidden variables, we need to fill in the missing data using probabilistic inference, so learning also becomes intractable.
To cope with the intractability of exact inference, a variety of approximate inference methods have been invented, including Monte Carlo (Hinton and Sejnowski 1986; Neal 1992), Helmholtz machines (Dayan et al. 1995; Hinton et al. 1995), and variational techniques (Jordan et al. 1998). Recently, the sum-product algorithm (a.k.a. probability propagation, belief propagation) (Pearl 1988) became a major contender when it was shown to produce astounding performance on the problem of error-correcting decoding in graphs with over 1,000,000 variables and cut set sizes exceeding 2^100,000 (Frey and Kschischang 1996; Frey and MacKay 1998; McEliece et al. 1998). The sum-product algorithm passes messages in both directions along the edges in a graphical model and fuses these messages at each vertex to compute an estimate of P(variable | obs), where obs is the assignment of the observed variables.

Figure 1: The sum-product algorithm passes messages in both directions along each edge in a Bayesian network. Each message is a function of the parent. (a) Incoming messages are fused to compute an estimate of P(y | observations). (b) Messages are combined to produce an outgoing message π_k(y). (c) Messages are combined to produce an outgoing message λ_j(x_j). Initially, all messages are set to 1. Observations are accounted for as described in the text.

In a directed graphical model (Bayesian belief network) the message on an edge is a function of the parent of the edge. The messages are initialized to 1 and then the variables are processed in some order or in parallel. Each variable fuses incoming messages and produces outgoing messages, accounting for observations as described below. If x_1, ..., x_J are the parents of a variable y and z_1, ..., z_K are the children of y, messages are fused at y to produce the function F(y) as follows (see Fig. 1a):

F(y) = (∏_k λ_k(y)) (∑_{x_1} ... ∑_{x_J} P(y | x_1, ..., x_J) ∏_j π_j(x_j)) ≈ P(y, obs),    (1)

where P(y | x_1, ..., x_J) is the conditional probability function associated with y. If the graph is a tree and if messages are propagated from every variable in the network to y, as described below, the estimate is exact: F(y) = P(y, obs). Also, normalizing F(y) gives P(y | obs). If the graph has cycles, this inference is approximate. The message π_k(y) passed from y to z_k is computed as follows (see Fig. 1b):

π_k(y) = F(y) / λ_k(y).    (2)

The message λ_j(x_j) passed from y to x_j is computed as follows (see Fig. 1c):

λ_j(x_j) = ∑_y ∑_{x_1} ... ∑_{x_{j-1}} ∑_{x_{j+1}} ... ∑_{x_J} (∏_k λ_k(y)) P(y | x_1, ..., x_J) (∏_{i≠j} π_i(x_i)).    (3)

Notice that x_j is not summed over and is excluded from the product of the π-messages on the right. If y is observed to have the value y*, messages are modified as follows: the fused result at y and the outgoing π become

F(y) ← F(y) if y = y*, 0 otherwise;    π_k(y) ← π_k(y) if y = y*, 0 otherwise.    (4)

The outgoing λ messages are computed as follows:

λ_j(x_j) = ∑_{x_1} ... ∑_{x_{j-1}} ∑_{x_{j+1}} ... ∑_{x_J} (∏_k λ_k(y*)) P(y = y* | x_1, ..., x_J) (∏_{i≠j} π_i(x_i)).    (5)

If the graph is a tree, these formulas can be derived quite easily using the fact that summations distribute over products. If the graph is not a tree, a local independence assumption can be made to justify these formulas. In any case, the algorithm computes products and summations locally in the graph, so it is often called the "sum-product" algorithm.

Figure 2: The local complexity of a richly connected directed graphical model such as the one in (a) can be simplified by assuming that the effects of a child's parents are accumulated by a low-complexity Markov chain as shown in (b). (c) The general structure of the "accumulator network" considered in this paper.

2 Accumulator networks

The complexity of the local computations at a variable generally scales exponentially with the number of parents of the variable.
For example, fusion (1) requires summing over all configurations of the parents. However, for certain types of conditional probability function P(y | x_1, ..., x_J), this exponential sum reduces to a linear-time computation. For example, if P(y | x_1, ..., x_J) is an indicator function for y = x_1 XOR x_2 XOR ... XOR x_J (a common function for error-correcting coding), the summation can be computed in linear time using a trellis (Frey and MacKay 1998). If the variables are real-valued and P(y | x_1, ..., x_J) is Gaussian with mean given by a linear function of x_1, ..., x_J, the integration can be computed using linear algebra (c.f. Weiss and Freeman 2000; Frey 2000). In contrast, exact local computation for the sigmoid function, P(y | x_1, ..., x_J) = 1/(1 + exp[−θ_0 − ∑_j θ_j x_j]), requires the full exponential sum. Barber (2000) considers approximating this sum using a central limit theorem approximation.

In an "accumulator network", the probability of a child given its parents is computed by accumulating the inputs from the parents in a Markov chain or, more generally, a tree. (For simplicity, we use Markov chains in this paper.) Fig. 2a and b show how a layered Bayesian network can be redrawn as an accumulator network. Each accumulation variable (state variable in the accumulation chain) has just 2 parents, and the number of computations needed for the sum-product computations for each variable in the original network now scales with the number of parents and the maximum state size of the accumulation chain in the accumulator network. Fig. 2c shows the general form of accumulator network considered in this paper, which corresponds to a fully connected Bayes net on variables x_1, ..., x_N. In this network, the variables are x_1, ..., x_N and the accumulation variables for x_i are s_{i,1}, ..., s_{i,i-1}. The effect of variable x_j on child x_i is accumulated by s_{i,j}. The joint distribution over the variables X = {x_i : i = 1, ..., N} and the accumulation variables S = {s_{i,j} : i = 1, ..., N, j = 1, ..., i−1} is

P(X, S) = ∏_{i=1}^{N} [(∏_{j=1}^{i-1} P(s_{i,j} | x_j, s_{i,j-1})) P(x_i | s_{i,i-1})].    (6)

If x_j is not a parent of x_i in the original network, we set P(s_{i,j} | x_j, s_{i,j-1}) = 1 if s_{i,j} = s_{i,j-1} and P(s_{i,j} | x_j, s_{i,j-1}) = 0 if s_{i,j} ≠ s_{i,j-1}. A well-known example of an accumulator network is the noisy-OR network (Pearl 1988; Neal 1992). In this case, all variables are binary and we set

P(s_{i,j} = 1 | x_j, s_{i,j-1}) = 1 if s_{i,j-1} = 1;  p_{i,j} if x_j = 1 and s_{i,j-1} = 0;  0 otherwise,    (7)

where p_{i,j} is the probability that x_j = 1 turns on the OR-chain. Using an accumulation chain whose state space size equals the number of configurations of the parent variables, we can produce an accumulator network that can model the same joint distributions on x_1, ..., x_N as any Bayesian network. Inference in an accumulator network is performed by passing messages as described above, either in parallel, at random, or in a regular fashion, such as up the accumulation chains, left to the variables, right to the accumulation chains and down the accumulation chains, iteratively. Later, we give results for an accumulator network that extracts images of translated, overlapping faces from a visual scene. The accumulation variables represent intensities of light rays at different depths in a layered 3-D scene.

2.1 Learning accumulator networks

To learn the conditional probability functions in an accumulator network, we apply the sum-product algorithm for each training case to compute sufficient statistics. Following Russell and Norvig (1995), the sufficient statistic needed to update the conditional probability function P(s_{i,j} | x_j, s_{i,j-1}) for s_{i,j} in Fig. 2c is P(s_{i,j}, x_j, s_{i,j-1} | obs). In particular,

∂ log P(obs) / ∂P(s_{i,j} | x_j, s_{i,j-1}) = P(s_{i,j}, x_j, s_{i,j-1} | obs) / P(s_{i,j} | x_j, s_{i,j-1}).    (8)

P(s_{i,j}, x_j, s_{i,j-1} | obs) is approximated by normalizing the product of P(s_{i,j} | x_j, s_{i,j-1}) and the λ and π messages arriving at s_{i,j}. (This approximation is exact if the graph is a tree.)
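The noisy-OR case of Eq. (7) can be checked by running the accumulation chain forward over the parents; a small sketch (the function and variable names are ours):

```python
def noisy_or_chain(x, p):
    # Accumulate P(s_{i,j} = 1) along the chain, following Eq. (7):
    # the chain stays on once on; parent j with x_j = 1 turns it on
    # with probability p_ij.
    p_on = 0.0  # P(s_{i,0} = 1) = 0: the chain starts off
    for x_j, p_ij in zip(x, p):
        if x_j:
            p_on = p_on + (1.0 - p_on) * p_ij
    return p_on

x, p = [1, 0, 1], [0.3, 0.5, 0.4]
# Agrees with the closed-form noisy-OR over the active parents:
# 1 - prod_{j : x_j = 1} (1 - p_ij).
assert abs(noisy_or_chain(x, p) - (1.0 - (1.0 - 0.3) * (1.0 - 0.4))) < 1e-12
```

The chain computes the same distribution as the flat noisy-OR, but each step involves only two parents, which is what keeps the local sum-product computations cheap.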
The sufficient statistics can be used for online learning or batch learning. If batch learning is used, the sufficient statistics are averaged over the training set and then the conditional probability functions are modified. In fact, the conditional probability function P(s_{i,j} | x_j, s_{i,j-1}) can be set equal to the normalized form of the average sufficient statistic, in which case learning performs approximate EM, where the E-step is approximated by the sum-product algorithm.

3 The bars problem

Fig. 3a shows the network structure for the binary bars problem and Fig. 3b shows 30 training examples. For an N x N binary image, the network has 3 layers of binary variables: 1 top-layer variable (meant to select orientation); 2N middle-layer variables (meant to select bars); and N^2 bottom-layer image variables. For large N, performing exact inference is computationally intractable, hence the need for approximate inference. Accumulator networks enable efficient inference using probability propagation, since the local computations are made feasible. The topology of the accumulator network can be easily tailored to the bars problem, as described above. Given an accumulator network with the proper conditional probability tables, inference computes the probability of each bar and the probability of vertical versus horizontal orientation for an input image.

Figure 3: (a) Bayesian network for the bars problem. (b) Examples of typical images. (c) KL divergence between approximate inference and exact inference after each iteration.

After each iteration of probability propagation, messages are fused to produce estimates of these probabilities. Fig. 3c shows the Kullback-Leibler divergence between these approximate probabilities and the exact probabilities after each iteration, for 5 input images. The figure also shows the most probable configuration found by approximate inference.
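A generative sketch of the bars data along the lines of Fig. 3a (the paper does not give the exact priors, so the bar probability and the uniform orientation prior are assumed values, and the tuple-based representation is ours):

```python
import random

def sample_bars(n=6, p_bar=0.3, rng=None):
    # The top-layer variable picks the orientation; the middle-layer
    # variables switch individual bars on; each active bar paints a full
    # row (horizontal) or column (vertical) of the n x n binary image.
    rng = rng or random.Random(0)
    horizontal = rng.random() < 0.5
    bars = [rng.random() < p_bar for _ in range(n)]
    img = [[0] * n for _ in range(n)]
    for i, on in enumerate(bars):
        if on:
            for j in range(n):
                if horizontal:
                    img[i][j] = 1
                else:
                    img[j][i] = 1
    return horizontal, bars, img
```

Because all active bars share one orientation, a row (or column) is all ones exactly when its bar is on, which is what makes exact inference enumerable for small n and lets the approximate marginals be checked against the truth.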
In most cases, we found that probability propagation correctly infers the presence of the appropriate bars and the overall orientation of the bars. In cases of multiple interpretations of the image (e.g., Fig. 3c, image 4), probability propagation tended to find appropriate interpretations, although the divergence between the approximate and exact inferences is larger. Starting with an accumulator network with random parameters, we trained the network as described above. Fig. 4 shows the online learning curves corresponding to different learning rates. The log-likelihood oscillates and although the optimum (horizontal line) is not reached, the results are encouraging.

Figure 4: Learning curves (log-likelihood versus number of sweeps) for learning rates .05, .075 and .1.

4 Accumulating light rays for layered vision

We give results on an accumulator network that extracts image components from scenes constructed from different types of overlapping faces at random positions. Suppose we divide up a 3-D scene into L layers and assume that one of O objects can sit in each layer in one of P positions. The total number of object-position combinations per layer is K = O x P. For notational convenience, we assume that each object-position pair is a different object, modeled by an opaqueness map (the probability that each pixel is opaque) and an appearance map (the intensity of each pixel). We constrain the opaqueness and appearance maps of the same object in different positions to be the same, up to translation. Fig. 5a shows the appearance maps of 4 such objects (the first one is a wall). In our model, p_{kn} is the probability that the nth pixel of object k is opaque and w_{kn} is the intensity of the nth pixel for object k.
The input images are modeled by randomly picking an object in each of L layers, choosing whether each pixel in each layer is transparent or opaque, accumulating light intensity by imaging the pixels through the layers, and then adding Gaussian noise. Fig. 6 shows the accumulator network for this model. z^l ∈ {1, ..., K} is the index of the object in the lth layer, where layer 1 is adjacent to the camera and layer L is farthest from the camera. y_n^l is the accumulated discrete intensity of the light ray for pixel n at layer l. y_n^l depends on the identity of the object in the current layer, z^l, and the intensity of pixel n in the previous layer, y_n^{l+1}. So,

P(y_n^l | z^l, y_n^{l+1}) =
  1              if z^l = 0 and y_n^l = y_n^{l+1};
  1              if z^l > 0 and y_n^l = w_{z^l n} = y_n^{l+1};
  p_{z^l n}      if z^l > 0 and y_n^l = w_{z^l n} ≠ y_n^{l+1};
  1 − p_{z^l n}  if z^l > 0 and y_n^l = y_n^{l+1} ≠ w_{z^l n};
  0              otherwise.    (9)

Each condition corresponds to a different imaging operation at layer l for the light ray corresponding to pixel n. x_n is the discretized intensity of pixel n, obtained from the light ray arriving at the camera, y_n^1. P(x_n | y_n^1) adds Gaussian noise to y_n^1.

Figure 5: (a) Learned appearance maps for a wall (all pixels dark and nearly opaque) and 3 faces. (b) An image produced by combining the maps in (a) and adding noise. (c) Object-specific segmentation maps. The brightness of a pixel in the kth picture corresponds to the probability that the pixel is imaged by object k.

After training the network on 200 labeled images, we applied iterative inference to identify and locate image components. After each iteration, the message passed from y_n^l to z^l is an estimate of the probability that the light ray for pixel n is imaged by object z^l at layer l (i.e., not occluded by other objects). So, for each object at each layer, we have an n-pixel "probabilistic segmentation map". In Fig. 5c we show the 4 maps in layer 1 corresponding to the objects shown in Fig. 5a, obtained after 12 iterations of the sum-product algorithm.
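The imaging step accumulates a light ray from the farthest layer toward the camera: the ray keeps its intensity through empty layers and transparent pixels, and takes the appearance value of the nearest opaque object pixel. A deterministic sketch of that compositing, with opaqueness flags already sampled and the Gaussian noise step omitted (the layer tuple format is our own convention, not the paper's):

```python
def render_pixel(layers, background=0.0):
    # layers run from farthest (layer L) to nearest the camera (layer 1);
    # each entry is (z, opaque, w): object index z (0 = no object), a
    # sampled opaqueness flag, and the object's appearance intensity w
    # at this pixel.
    y = background
    for z, opaque, w in layers:
        if z > 0 and opaque:
            y = w  # an opaque object pixel overwrites the accumulated ray
        # otherwise the ray passes through this layer unchanged
    return y

# Far wall (opaque, dark), a transparent face pixel, then an opaque face
# pixel nearest the camera: the camera sees the nearest opaque surface.
assert render_pixel([(1, True, 0.1), (2, False, 0.7), (3, True, 0.8)]) == 0.8
assert render_pixel([(1, True, 0.1), (2, False, 0.7), (0, False, 0.0)]) == 0.1
```

In the probabilistic model each opaqueness flag is a Bernoulli draw with the object's opaqueness-map probability, so this sketch corresponds to one sample of the layered generative process.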
One such set of segmentation maps can be drawn for each layer. For deeper layers, the maps hopefully segment the part of the scene that sits behind the objects in the shallower layers. Fig. 7a shows the sets of segmentation maps corresponding to different layers, after each iteration of probability propagation, for the input image shown on the far right.

Figure 6: An accumulator network for layered vision.

After 1 iteration, the segmentation in the first layer is quite poor, causing uncertain segmentation in deeper layers (except for the wall, which is mostly segmented properly in layer 2). As the number of iterations increases, the algorithm converges to the correct segmentation, where object 2 is in front, followed by objects 3, 4 and 1 (the wall). It may appear from the input image in Fig. 7a that another possible depth ordering is object 2 in front, followed by objects 4, 3 and 1 - i.e., objects 3 and 4 may be reversed. However, it turns out that if this were the order, a small amount of dark hair from the top of the horizontal head would be showing. We added an extremely large amount of noise to the image used above, to see what the algorithm would do when the two depth orders really are equally likely. Fig. 7b shows the noisy image and the series of segmentation maps produced at each layer as the number of iterations increases. The segmentation maps for layer 1 show that object 2 is correctly identified as being in the front. Quite surprisingly, the segmentation maps in layer 2 oscillate between the two plausible interpretations of the scene - object 3 in front of object 4 and object 4 in front of object 3. Although we do not yet know how robust these oscillations are, or how accurately they reflect the probability masses in the different modes, this behavior is potentially very useful.

References

D. Barber 2000. Tractable belief propagation. The Learning Workshop, Snowbird, UT.
B. J. Frey and F. R. Kschischang 1996. Probability propagation and iterative decoding.
Proceedings of the 34th Allerton Conference on Communication, Control and Computing 1996, University of Illinois at Urbana.

Figure 7: (a) Probabilistic segmentation maps for each layer (column) after each iteration (row) of probability propagation for the image on the far right. (b) When a large amount of noise is added to the image, the network oscillates between interpretations.

B. J. Frey and D. J. C. MacKay 1998. A revolution: Belief propagation in graphs with cycles. In M. I. Jordan, M. J. Kearns and S. A. Solla (eds) Advances in Neural Information Processing Systems 10, MIT Press, Cambridge, MA.
M. I. Jordan, Z. Ghahramani, T. S. Jaakkola and L. K. Saul 1999. An introduction to variational methods for graphical models. In M. I. Jordan (ed) Learning in Graphical Models, MIT Press, Cambridge, MA.
R. McEliece, D. J. C. MacKay and J. Cheng 1998. Turbo decoding as an instance of Pearl's belief propagation algorithm. IEEE Journal on Selected Areas in Communications 16:2.
K. P. Murphy, Y. Weiss and M. I. Jordan 1999. Loopy belief propagation for approximate inference: An empirical study. Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann, San Francisco, CA.
J. Pearl 1988.
Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo, CA.
S. Russell and P. Norvig 1995. Artificial Intelligence: A Modern Approach. Prentice-Hall.
Y. Weiss and W. T. Freeman 2000. Correctness of belief propagation in Gaussian graphical models of arbitrary topology. In S. A. Solla, T. K. Leen, and K.-R. Müller (eds) Advances in Neural Information Processing Systems 12, MIT Press.
2000
Sparse Representation for Gaussian Process Models Lehel Csató and Manfred Opper Neural Computing Research Group School of Engineering and Applied Sciences B4 7ET Birmingham, United Kingdom {csatol, opperm}@aston.ac.uk

Abstract

We develop an approach for a sparse representation for Gaussian Process (GP) models in order to overcome the limitations of GPs caused by large data sets. The method is based on a combination of a Bayesian online algorithm together with a sequential construction of a relevant subsample of the data which fully specifies the prediction of the model. Experimental results on toy examples and large real-world data sets indicate the efficiency of the approach.

1 Introduction

Gaussian processes (GP) [1; 15] provide promising non-parametric tools for modelling real-world statistical problems. Like other kernel based methods, e.g. Support Vector Machines (SVMs) [13], they combine a high flexibility of the model, by working in high (often infinite-) dimensional feature spaces, with the simplicity that all operations are "kernelized", i.e. they are performed in the (lower-dimensional) input space using positive definite kernels. An important advantage of GPs over other non-Bayesian models is the explicit probabilistic formulation of the model. This not only provides the modeller with (Bayesian) confidence intervals (for regression) or posterior class probabilities (for classification) but also immediately opens the possibility to treat other nonstandard data models (e.g. quantum inverse statistics [4]). Unfortunately the drawback of GP models (which was originally apparent in SVMs as well, but has now been overcome [6]) lies in the huge increase of the computational cost with the number of training data. This seems to preclude applications of GPs to large datasets. This paper presents an approach to overcome this problem.
It is based on a combination of an online learning approach requiring only a single sweep through the data and a method to reduce the number of parameters representing the model. Making use of the proposed parametrisation, the method extracts a subset of the examples, and the prediction relies only on these basis vectors (BV). The memory requirement of the algorithm thus scales only with the size of this set. Experiments with real-world datasets confirm the good performance of the proposed method.1

1 A different approach for dealing with large datasets was suggested by V. Tresp [12]. His method is based on splitting the data-set into smaller subsets and training individual GP predictors on each of them. The final prediction is achieved by a specific weighting of the individual predictors.

2 Gaussian Process Models

GPs belong to Bayesian non-parametric models where likelihoods are parametrised by a Gaussian stochastic process (random field) a(x) which is indexed by the continuous input variable x. The prior knowledge about a is expressed in the prior mean and the covariance given by the kernel K_0(x, x') = Cov(a(x), a(x')) [14; 15]. In the following, only zero-mean GP priors are used. In supervised learning the process a(x) is used as a latent variable in the likelihood P(y | a(x)), which denotes the probability of output y given the input x. Based on a set of input-output pairs (x_n, y_n) with x_n ∈ R^m and y_n ∈ R (n = 1, ..., N), the Bayesian learning method computes the posterior distribution of the process a(x) using the prior and likelihood [14; 15; 3]. Although the prior is a Gaussian process, the posterior process usually is not Gaussian (except for the special case of regression with Gaussian noise). Nevertheless, various approaches have been introduced recently to approximate the posterior averages [11; 9]. Our approach is based on the idea of approximating the true posterior process p{a} by a Gaussian process q{a} which is fully specified by a covariance kernel K_t(x, x') and posterior mean ⟨a(x)⟩_t, where t is the number of training data processed by the algorithm so far.
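For the Gaussian-noise regression case just mentioned, the posterior process stays Gaussian and its mean is available in closed form, m(x*) = k*^T (K + s I)^{-1} y; a hand-rolled two-point sketch (the squared-exponential kernel and all numbers are our choices, not the paper's):

```python
import math

def k0(x, xp, ell=1.0):
    # squared-exponential prior kernel K0 (an assumed choice)
    return math.exp(-0.5 * ((x - xp) / ell) ** 2)

def gp_mean(X, y, x_star, noise=0.1):
    # Posterior mean for exactly two training points: solve
    # (K + noise * I) alpha = y with the explicit 2x2 inverse, then
    # predict m(x*) = sum_i alpha_i * K0(x_i, x*).
    a = k0(X[0], X[0]) + noise
    b = k0(X[0], X[1])
    c = k0(X[1], X[0])
    d = k0(X[1], X[1]) + noise
    det = a * d - b * c
    alpha0 = (d * y[0] - b * y[1]) / det
    alpha1 = (-c * y[0] + a * y[1]) / det
    return alpha0 * k0(X[0], x_star) + alpha1 * k0(X[1], x_star)

m0 = gp_mean([0.0, 1.0], [1.0, -1.0], 0.0)  # pulled toward y0 = 1
m1 = gp_mean([0.0, 1.0], [1.0, -1.0], 1.0)  # antisymmetric data => m1 = -m0
```

The prediction is a kernel expansion over the training inputs, which is exactly the structure the sparse representation later exploits: drop inputs whose expansion coefficients the data cannot support.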
Such an approximation could be formulated within the variational approach, where q is chosen such that the relative entropy D(q,p) == Eq In ~ is minimal [9]. However, in this formulation, the expectation is over the approximate process q rather than over p. It seems intuitively better to minimise the other KL divergence given by D(p, q) == Ep In ~, because the expectation is over the true distribution. Unfortunately, such a computation is generally not possible. The following online approach can be understood as an approximation to this task. 3 Online learning for Gaussian Processes In this section we briefly review the main idea of the Bayesian online approach (see e.g. [5]) to GP models. We process the training data sequentially one after the other. Assume we have a Gaussian approximation to the posterior process at time t. We use the next example t + 1 to update the posterior using Bayes rule via p(a) = P(Yt+1la(Xt+l))Pt(q) (P(Yt+1la(xt+1)))t Since the resulting posterior p(q) is non-Gaussian, we project it to the closest Gaussian process q which minimises the KL divergence D(p, q). Note, that now the new approximation q is on "correct" side of the KL divergence. The minimisation can be performed exactly, leading to a match of the means and covariances of p and q. Since p is much less complex than the full posterior, it is possible to write down the changes in the first two moments analytically [2]: (a(x))t+1 = (a(x))t + b1 Kt(x,xt+d Kt+1(x,x') = Kt(x,x') + b2Kt(x,xt+1)Kt(xt+1,x') where the scalar coefficients b1 and b2 are: (1) (2) with averaging performed with respect to the marginal Gaussian distribution of the process variable a at input Xt+1' Note, that this yields a one dimensional integral! Derivatives are is based on splitting the data-set into smaller subsets and training individual GP predictors on each of them. The final prediction is achieved by a specific weighting of the individual predictors. ~' /:'~es (a) <PH! 
, ,-------- (b) Figure 1: Projection of the new input <Pt+! to the subspace spanned by previous inputs. <l>t+l is the projection to the linear span of {<Pih=l,t. and <Pres the residual vector. Subfigure (a) shows the projections to the subspace, and (b) gives a geometric picture of the "measurable part" of the error It+! from eq. (8). taken with respect to (a(x))t . Note also that this procedure does not equal the extended Kalman filter which involves linearisations of likelihoods, whereas in our approach it is possible to use non-smooth likelihoods (e.g. noise free classifications) without problems. It turns out, that the recursion (1) is solved by the parametrisation (a(x))t = L~=IKo(x,xi)at(i) Kt(x,x') = Ko(x,x') + LL=IKo(x,Xi)Ct(ij)Ko(xj,x') (3) such that in each on-line step, we have to update only the vector of a's and the matrix of C's. For notational convenience we use vector at = [at(1), ... , at (N)jT and matrix C t = {Ct (ij) h,j=I,N. Zero-mean GP with kernel Ko is used as the starting point for the algorithm: ao = a and Co = a will be the starting parameters. The update of the parameters defined in (3) is found to be at+! = at + bl [Ctkt+l + et+!l C t+! = C t + b2 [C tkt+l + et+!l [C tkt+! + et+lf (4) with kt+! = [KO(XI,Xt+!), ... , Ko(xt ,xt+!)jT, et+! the t + 1-th unit vector (all components except t + 1-th are zero), and the scalar coefficients bl and b2 computed from (2). The serious drawback of this approach, which it shares with many other kernel methods, is the quadratic increase of the matrix size with the training data. 4 Sparse representation We use the following idea for reducing the increase of the size of C and a (for a similar approach see [8]). We consider the feature expansion of the kernel Ko(x,x') = <p(X)T <p(x') and decompose the new feature vector <p(Xt+!) as a linear combination of the previous features and a residual <Pres: A "t A <p(Xt+!) = <Pt+! + <Pres = ~ i=l ei<p(Xi) + <Pres (5) where <l>t+! is the projection of <Pt+! 
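As a concrete illustration, the parametric update (4) can be sketched in a few lines of NumPy. This is our own sketch, not the authors' code: the function name is ours, and the coefficients $b_1$ and $b_2$ are assumed to be supplied externally by the likelihood model of eq. (2).

```python
import numpy as np

def online_gp_update(alpha, C, K0, X, x_new, b1, b2):
    """One step of the parametric online update, eq. (4): a sketch.

    alpha: (t,) mean coefficients; C: (t, t) covariance coefficients;
    K0: base kernel function; X: the t inputs processed so far;
    b1, b2: the scalar coefficients of eq. (2), computed elsewhere.
    """
    k = np.array([K0(xi, x_new) for xi in X])      # k_{t+1}
    s = np.append(C @ k, 0.0)                      # grow to length t+1
    s[-1] += 1.0                                   # s = C_t k_{t+1} + e_{t+1}
    alpha = np.append(alpha, 0.0) + b1 * s         # mean update of eq. (4)
    C = np.pad(C, ((0, 1), (0, 1))) + b2 * np.outer(s, s)  # covariance update
    return alpha, C
```

Each call grows the parameter vectors by one entry, which is exactly the quadratic growth that the sparse representation of the next section is designed to curb.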
to the previous inputs, and $\hat e_{t+1} = [\hat e_1, \dots, \hat e_t]^T$ are the coordinates of $\hat\phi_{t+1}$ with respect to the basis $\{\phi_i\}_{i=1,t}$. We can then re-express the GP mean:
$$\langle a(x)\rangle_{t+1} = \sum_{i=1}^{t} \hat\alpha_{t+1}(i)\, K_0(x_i, x) + \alpha_{t+1}(t+1)\,\phi_{\mathrm{res}}^T \phi(x) \quad (6)$$
with $\hat\alpha_{t+1}(i) = \alpha_{t+1}(i) + \hat e_{t+1}(i)\,\alpha_{t+1}(t+1)$ and $\gamma_{t+1}$ the residual (or novelty factor) associated with the new feature vector. The vector $\hat e_{t+1}$ and the residual term $\gamma_{t+1}$ are both expressed in terms of kernels:
$$\hat e_{t+1} = K_B^{-1} k_{t+1}, \qquad \gamma_{t+1} = k^*_{t+1} - k_{t+1}^T K_B^{-1} k_{t+1} \quad (7)$$
with $K_B(ij) = K_0(x_i, x_j)$, $i,j = 1,\dots,t$, and $k^*_{t+1} = K_0(x_{t+1}, x_{t+1})$. The relation between the quantities $\hat e_{t+1}$ and $\gamma_{t+1}$ is illustrated in Figure 1. Neglecting the last term in the decomposition of the new input (5) and performing the update with the resulting vector is equivalent to the update rule (4) with $e_{t+1}$ replaced by $\hat e_{t+1}$. Note that the dimension of the parameter space is not increased by this approximative update. The memory required by the algorithm scales quadratically only with the size of the set of "basis vectors", i.e. those examples for which the full update (4) is made. This is similar to Support Vectors [13], but without the need to solve the (high-dimensional) convex optimisation problem. It is also related to kernel PCA and the reduced set method [8], where the full solution is computed first and then a reduced set is used for prediction. Replacing the input vector $\phi_{t+1}$ by its projection on the linear span of the BVs when updating the GP parameters induces changes in the GP.² However, the replacement of the true feature vector by its approximation leaves the mean function unchanged at each BV $i = 1,\dots,t$. That is, the functions $\langle a(x)\rangle_{t+1}$ from (6) and $\langle \hat a(x)\rangle_{t+1} = \sum_{i=1}^{t} \hat\alpha_{t+1}(i)\, K_0(x_i, x)$ have the same value at all $x_i$. The change at $x_{t+1}$ is
$$\varepsilon_{t+1} = \left|\langle a(x_{t+1})\rangle_{t+1} - \langle \hat a(x_{t+1})\rangle_{t+1}\right| = |b_1|\,\gamma_{t+1} \quad (8)$$
with $b_1$ the factor from (2). As a consequence, a good approximation to the full GP solution is obtained if an input for which there is only a small change in the mean function of the posterior process is not included in the set of BVs.
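This inclusion test can be made concrete. The sketch below (our naming, not the paper's) computes the projection coordinates and novelty of eq. (7) and, when the input is accepted, grows the maintained inverse Gram matrix by the rank-one formula of eq. (9); `tol` is a hypothetical numerical threshold on $\gamma$.

```python
import numpy as np

def novelty_step(Q, K0, basis, x_new, tol=1e-6):
    """Projection coordinates e_hat and novelty gamma of eq. (7); when the
    input is novel enough, grow Q = K_B^{-1} by the rank-one update (9)."""
    k = np.array([K0(b, x_new) for b in basis])    # k_{t+1}
    kstar = K0(x_new, x_new)                       # k*_{t+1}
    e_hat = Q @ k                                  # e_hat = K_B^{-1} k_{t+1}
    gamma = kstar - k @ e_hat                      # residual "novelty"
    if gamma < tol:                                # x_new lies in span(basis):
        return Q, e_hat, gamma, False              # do not extend the basis
    diff = np.append(e_hat, -1.0)                  # -(e_{t+1} - e_hat_{t+1})
    Q = np.pad(Q, ((0, 1), (0, 1))) + np.outer(diff, diff) / gamma
    return Q, e_hat, gamma, True
```

The rank-one growth reproduces the block-inverse of the extended Gram matrix, so no explicit matrix inversion is ever performed; an input identical to an existing basis vector gets $\gamma \approx 0$ and is rejected.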
The change is given by $\varepsilon_{t+1}$, and the decision of including $x_{t+1}$ or not is based on the "score" associated with it. The absence of matrix inversions is an important issue when dealing with large datasets. The matrix inversion in the projection equation (7) can be avoided by iterative inversion³ of the Gram matrix $Q = K_B^{-1}$:
$$Q_{t+1} = Q_t + \gamma_{t+1}^{-1}\, (e_{t+1} - \hat e_{t+1})(e_{t+1} - \hat e_{t+1})^T \quad (9)$$
An important comment is that if the new input is in the linear span of the BVs, then it will not be included in the basis set, thus avoiding 1) the small singular values of the matrix $K_B$ and 2) redundancy in representing the problem.

4.1 Deleting a basis vector

The above section gave a method for leaving out a vector that is not significant for prediction purposes. However, it did not provide us with a method for eliminating one of the already existing BVs. Let us assume that an input $x_{t+1}$ has just been added to the set of BVs. Since we know that an addition has taken place, the update rule (4) with the $(t+1)$-th unit vector $e_{t+1}$ was last performed. Since the model parameters at the previous step had an empty $(t+1)$-th row and column, the parameters before the full update can be identified. The removal of the last basis vector can be done with the following steps: 1) computing the parameters before the update of the GP, and 2) performing a reduced update of the model without the inclusion of the basis vector (eq. (4) using $\hat e_{t+1}$). The updates for the model parameters $\alpha$, $C$, and $Q$ are "inverted" by inverting the coupled equations (4) and (9):
$$\hat\alpha = \alpha^{(t)} - \alpha^* \frac{Q^*}{q^*}, \qquad \hat C = C^{(t)} + c^* \frac{Q^* Q^{*T}}{q^{*2}} - \frac{1}{q^*}\left[Q^* C^{*T} + C^* Q^{*T}\right], \qquad \hat Q = Q^{(t)} - \frac{Q^* Q^{*T}}{q^*} \quad (10)$$
where the elements needed to update the model are extracted from the extended parameters as illustrated in Figure 2. This identification permits us to evaluate the score for the last BV. But since the order of the BVs is approximately arbitrary, we can assign a score to each BV:
$$\varepsilon_i = \frac{|\alpha_{t+1}(i)|}{Q_{t+1}(i,i)} \quad (11)$$
Thus we have a method to estimate the score of each basis vector at any time and to eliminate the one with the least contribution to the GP output (the mean), providing a sparse GP with full control over memory size.

²Equation (7) also minimises the KL distance between the full posterior (the one that increases the parameter space) and a parametric distribution using only the old BVs.
³A guide is available from Sam Roweis: http://www.gatsby.ucl.ac.uk/~roweis/notes.html

Figure 2: Decomposition of the model parameters $C_{t+1}$ and $Q_{t+1}$ into the blocks $(C^{(t)}, C^*, c^*)$ and $(Q^{(t)}, Q^*, q^*)$ for the update equation (10).

5 Simulation results

To apply the online learning rules (4), the data likelihood for the specific problem has to be averaged with respect to a Gaussian. Using eq. (2), the coefficients $b_1$ and $b_2$ are obtained. The marginal of the GP at $x_{t+1}$ is a normal distribution with mean $\langle a(x_{t+1})\rangle_t = \alpha_t^T k_{t+1}$ and variance $\sigma^2_{x_{t+1}} = k^*_{t+1} + k_{t+1}^T C_t k_{t+1}$, where the GP parameters at time $t$ are used. As a first example, we consider regression with Gaussian output noise $\sigma_0^2$, for which
$$\ln \langle P(y_{t+1}|a(x_{t+1}))\rangle_t = -\frac{1}{2} \ln 2\pi(\sigma_0^2 + \sigma^2_{x_{t+1}}) - \frac{\left(y_{t+1} - \langle a(x_{t+1})\rangle_t\right)^2}{2(\sigma_0^2 + \sigma^2_{x_{t+1}})} \quad (12)$$
For classification we use the probit model. The outputs are binary, $y \in \{-1, 1\}$, and the probability is given by the error function (where $u = y\,a/\sigma_0$):
$$P(y|a) = \mathrm{Erf}\!\left(\frac{y\,a}{\sigma_0}\right), \qquad \mathrm{Erf}(u) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{u} dt\; e^{-t^2/2}$$
The averaged likelihood for the new data point at time $t$ is:
$$\langle P(y_{t+1}|a(x_{t+1}))\rangle_t = \mathrm{Erf}\!\left(\frac{y_{t+1}\, \alpha_t^T k_{t+1}}{\sqrt{\sigma_0^2 + \sigma^2_{x_{t+1}}}}\right) \quad (13)$$

Figure 3: Simulation results for regression (a) and classification (b). For details see text.
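For the Gaussian regression model, the averaged likelihood in (12) is itself a Gaussian in the marginal mean, so the coefficients of eq. (2) come out in closed form. The sketch below is our own derivation from (12) (the function name is ours), obtained by differentiating the log-likelihood once and twice with respect to the marginal mean.

```python
def regression_coefficients(y, mean, var_x, noise_var):
    """b1 and b2 of eq. (2) for Gaussian output noise: differentiate
    the log of the averaged likelihood (12) with respect to the
    marginal mean <a(x_{t+1})>_t once (b1) and twice (b2)."""
    s2 = noise_var + var_x          # sigma_0^2 + sigma^2_{x_{t+1}}
    return (y - mean) / s2, -1.0 / s2
```

Plugged into the update (4), $b_1$ pulls the mean toward the observed target while $b_2$ shrinks the posterior variance, exactly as in standard Gaussian conditioning.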
For the regression case we have chosen the toy data model $y = \sin(x)/x + \zeta$, where $\zeta$ is a zero-mean Gaussian random variable with variance $\sigma_\zeta^2$, together with an RBF kernel. Figure 3a shows the result of applying the algorithm to 600 input data while restricting the number of BVs to 20. The dash-dotted line is the true function, the continuous line is the approximation, and the Bayesian standard deviation is plotted with dotted lines (a gradient-like approximation for the output noise based on maximising the likelihood (12) led us to the variance with which the data had been generated). For classification we used the data from the US postal database⁴ of handwritten zip codes together with an RBF kernel. The database has 7291 training and 2007 test data of 16 × 16 grey-scale images. To apply the classification method to this database, 10 binary classification problems were solved and the final output was the class with the largest probability. The same BVs were considered for each classifier, and if a deletion was required, the BV having the minimum cumulative score was deleted. The cumulative score was chosen to be the maximum of the scores for each classifier. Figure 3b shows the test error as a function of the size of the basis set. We find that the test error is rather stable over a considerable range of basis set sizes. A comparison with a second sweep through the data also shows that the algorithm seems to have already extracted the relevant information out of the data within a single sweep. Using a polynomial kernel for the USPS dataset and 500 BVs we achieved a test error of 4.83%, which compares favourably with other sparse approaches [10; 8] but uses smaller basis sets than the SVM (2540 reported in [8]). We also applied our algorithm to the NIST dataset⁵, which contains 60000 data. Using a fourth-order polynomial kernel with only 500 BVs we achieved a test error of 3.13%, and we expect that improvements are possible by using a kernel with tunable hyperparameters.
The possibility of computing the posterior class probabilities allows us to reject data. When the test data for which the maximum probability was below 0.5 were rejected, the test error was 1.53% with a 1.60% rejection rate.

⁴From: http://www.kernel-machines.org/data.html
⁵Available from: http://www.research.att.com/~yann/ocr/mnist/

6 Conclusion and further research

This paper presents a sparse approximation for GPs similar to the one found in SVMs [13] or relevance vector machines [10]. In contrast to these other approaches, our algorithm is fully online and does not construct the sparse representation from the full data set (for sequential optimisation for SVMs see [6]). An important open question (besides the issue of model selection) is how to choose the minimal size of the set of basis vectors such that the predictive performance is not much deteriorated by the approximation involved. In fact, our numerical classification experiments suggest that the prediction performance is considerably stable when the basis set is above a certain size. It would be interesting if one could relate this minimum size to the effective dimensionality of the problem, defined as the number of feature dimensions which are well estimated by the data. One may argue as follows: replacing the true kernel by a modified (finite-dimensional) one which contains only the well-estimated features will not change the predictive power. On the other hand, for kernels with a feature space of finite dimensionality M, it is easy to see that we never need more than M basis vectors, because of linear dependence. Whether such reasoning will lead to a practical procedure for choosing the appropriate basis set size is a question for further research.

7 Acknowledgements

This work was supported by EPSRC grant no. GR/M81608.

References
[1] J. M. Bernardo and A. F. Smith. Bayesian Theory. John Wiley & Sons, 1994.
[2] L. Csató, E. Fokoué, M. Opper, B. Schottky, and O. Winther.
Efficient approaches to Gaussian process classification. In NIPS, volume 12, pages 251-257. The MIT Press, 2000.
[3] M. Gibbs and D. J. MacKay. Efficient implementation of Gaussian processes. Technical report, http://wol.ra.phy.cam.ac.uk/mackay/abstracts/gpros.html, 1999.
[4] J. C. Lemm, J. Uhlig, and A. Weiguny. A Bayesian approach to inverse quantum statistics. Phys. Rev. Lett., 84:2006, 2000.
[5] M. Opper. A Bayesian approach to online learning. In Saad [7], pages 363-378.
[6] J. C. Platt. Fast training of Support Vector Machines using sequential minimal optimisation. In Advances in Kernel Methods (Support Vector Learning).
[7] D. Saad, editor. On-Line Learning in Neural Networks. Cambridge Univ. Press, 1998.
[8] B. Schölkopf, S. Mika, C. J. Burges, P. Knirsch, K.-R. Müller, G. Rätsch, and A. J. Smola. Input space vs. feature space in kernel-based methods. IEEE Transactions on Neural Networks, 10(5):1000-1017, September 1999.
[9] M. Seeger. Bayesian model selection for Support Vector Machines, Gaussian processes and other kernel classifiers. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, NIPS, volume 12. The MIT Press, 2000.
[10] M. Tipping. The Relevance Vector Machine. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, NIPS, volume 12. The MIT Press, 2000.
[11] G. Ferrari-Trecate, C. K. I. Williams, and M. Opper. Finite-dimensional approximation of Gaussian processes. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, NIPS, volume 11. The MIT Press, 1999.
[12] V. Tresp. A Bayesian committee machine. Neural Computation, accepted.
[13] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, New York, NY, 1995.
[14] C. K. I. Williams. Prediction with Gaussian processes. In M. I. Jordan, editor, Learning in Graphical Models. The MIT Press, 1999.
[15] C. K. I. Williams and C. E. Rasmussen. Gaussian processes for regression. In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, editors, NIPS, volume 8. The MIT Press, 1996.
2000
Hippocampally-Dependent Consolidation in a Hierarchical Model of Neocortex

Szabolcs Káli¹,² and Peter Dayan¹
¹Gatsby Computational Neuroscience Unit, University College London, 17 Queen Square, London, England, WC1N 3AR
²Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A.
szabolcs@gatsby.ucl.ac.uk

Abstract

In memory consolidation, declarative memories which initially require the hippocampus for their recall ultimately become independent of it. Consolidation has been the focus of numerous experimental and qualitative modeling studies, but only little quantitative exploration. We present a consolidation model in which hierarchical connections in the cortex, which initially instantiate purely semantic information acquired through probabilistic unsupervised learning, come to instantiate episodic information as well. The hippocampus is responsible for helping complete partial input patterns before consolidation is complete, while also training the cortex to perform appropriate completion by itself.

1 Introduction

The hippocampal formation and adjacent cortical areas have long been believed to be involved in the acquisition and retrieval of long-term memory for events and other declarative information. Clinical studies in humans and animal experiments indicate that damage to these regions results in amnesia, whereby the ability to acquire new declarative memories is impaired and some of the memories acquired before the damage are lost [1]. The observation that recent memories are more likely to be lost than old memories in these cases has generally been interpreted as evidence that the role of these medial temporal lobe structures in the storage and/or retrieval of declarative memories is only temporary.
In particular, several investigators have advocated the general idea that, in the course of a relatively long time period (from several days in rats up to decades in humans), memories are reorganized (or consolidated) so that memories whose successful recall initially depends on the hippocampus gradually become independent of this structure (see Refs. 2-4). However, other possible interpretations of the data have also been proposed [5]. There have been several analyses of the computational issues underlying consolidation. There is a general consensus that memory recall involves the reinstatement of cortical activation patterns which characterize the original episodes, based only on partial or noisy input. Thus the computational goal for the memory systems is cortical pattern completion; this should be possible after just a single presentation of the particular pattern when the hippocampus is intact, and should be possible independent of the presence or absence of the hippocampus once consolidation is complete. The hippocampus plays a double role: a) supporting one-shot learning and subsequent completion of patterns in the cortical areas it is directly connected to, and b) directing consolidation by reinstating these stored patterns in those same cortical regions and allowing the efficacies of cortical synapses to change. Despite the popularity of the ideas outlined above, there have been surprisingly few attempts to construct quantitative models of memory consolidation. Alvarez and Squire (1994) is the only model we could find that has actually been implemented and tested quantitatively. Although it embodies the general principles above, the authors themselves acknowledge that the model has some rather serious limitations, largely due to its spartan simplicity (e.g., it only considers 2 perfectly orthogonal patterns over 2 cortical areas of 8 units each), which also makes it hard to test comprehensively.
Perhaps most importantly, though (and this feature is shared with qualitative models such as Murre (1997)), the model requires some way of establishing and/or strengthening functional connections between neurons in disparate areas of neocortex (representing different aspects of the same episode) which would not normally be expected to enjoy substantial reciprocal anatomical connections. In this paper, we consider consolidation using a model whose complexity brings to the fore consideration of computational issues that are invisible to simpler proposals. In particular, it treats cortex as a hierarchical structure, with hierarchical codes for input patterns acquired through a process of unsupervised learning. This allows us to study the relationship between coding for generic patterns, which forms a sort of semantic memory, and the coding for the specific patterns through consolidation. It also allows us to consider consolidation as happening in hierarchical connections (in which the cortex abounds) as an alternative to consolidation only between disparate areas at the same level of the hierarchy. The next section of the paper describes the model in detail, and section 3 shows its performance.

2 The Model

Figure 1a shows the architecture of the model, which involves three cortical areas (A, B, and C) that represent different aspects of the world. We can understand consolidation as follows: across the whole spectrum of possible inputs, there is structure in the activity within each area, but there are no strong correlations between the activities in different areas (these are the generic patterns referred to above). Thus, for instance, nothing in particular can be concluded about the pattern of activity in area C given just the activities in areas A and B. However, for the specific patterns that form particular episodes, there are correlations in these activities.
As a result of this, it becomes possible to be much more definite about the pattern in C given activities in A and B that reinstate part of the episode. Before consolidation, information about these correlations is stored in the hippocampus and related structures; after consolidation, the information is stored directly in the weights that construct cortical representations. The model does not assume that there are any direct connections between the cortical areas. Instead, as a closer match to the available anatomical data, we assume a hierarchy of cortical regions (in the present model having just two layers) below the hippocampus. It is hard to establish an exact correspondence between model components and anatomical regions, so we tentatively call the model region on the top of the cortical hierarchy the entorhinal/parahippocampal/perirhinal area (E/P), and lump together all parts of the hippocampal formation into an entity we call hippocampus (HC). E/P is connected bidirectionally to all the cortical areas.
Automated State Abstraction for Options using the U-Tree Algorithm Anders Jonsson, Andrew G. Barto Department of Computer Science University of Massachusetts Amherst, MA 01003 {ajonsson,barto}@cs.umass.edu Abstract Learning a complex task can be significantly facilitated by defining a hierarchy of subtasks. An agent can learn to choose between various temporally abstract actions, each solving an assigned subtask, to accomplish the overall task. In this paper, we study hierarchical learning using the framework of options. We argue that to take full advantage of hierarchical structure, one should perform option-specific state abstraction, and that if this is to scale to larger tasks, state abstraction should be automated. We adapt McCallum's U-Tree algorithm to automatically build option-specific representations of the state feature space, and we illustrate the resulting algorithm using a simple hierarchical task. Results suggest that automated option-specific state abstraction is an attractive approach to making hierarchical learning systems more effective. 1 Introduction Researchers in the field of reinforcement learning have recently focused considerable attention on temporally abstract actions (e.g., [1,3,5,6,7,9]). The term temporally abstract describes actions that can take variable amounts of time. One motivation for using temporally abstract actions is that they can be used to exploit the hierarchical structure of a problem. Among other things, a hierarchical structure is a natural way to incorporate prior knowledge into a learning system by allowing reuse of temporally abstract actions whose policies were learned in other tasks. Learning in a hierarchy can also significantly reduce the number of situations between which a learning agent needs to discriminate. We use the framework of options [6, 9], which extends the theory of reinforcement learning to include temporally abstract actions. 
In many cases, accurately executing an option's policy does not depend on all state features available to the learning agent. Further, the features that are relevant often differ from option to option. Within a hierarchical learning system, it is possible to perform option-specific state abstraction by which irrelevant features specific to each option are ignored. Using option-specific state abstraction in a hierarchical learning system can save memory through the development of compact state representations, and it can accelerate learning because of the generalization induced by the abstraction. Dietterich [2] introduced action-specific state abstraction in a hierarchy of temporally abstract actions. However, his approach requires the system developer to define a set of relevant state features for each action prior to learning. As the complexity of a problem grows, it becomes increasingly difficult to hand-code such state representations. One way to remedy this problem is to use an automated process for constructing state representations. We apply McCallum's U-Tree algorithm [4] to individual options to achieve automated, option-specific state abstraction. The U-Tree algorithm automatically builds a state-feature representation starting from one that makes no distinctions between different observation vectors. Thus, no specification of state-feature dependencies is necessary prior to learning. In Section 2, we give a brief description of the U-Tree algorithm. Section 3 introduces modifications necessary to make the U-Tree algorithm suitable for learning in a hierarchical system. We describe the setup of our experiments in Section 4 and present the results in Section 5. Section 6 concludes with a discussion of future work. 
2 The U-Tree algorithm

The U-Tree algorithm [4] retains a history of transition instances $T_t = \langle T_{t-1}, a_{t-1}, r_t, s_t\rangle$ composed of the observation vector $s_t$ at time step $t$, the previous action $a_{t-1}$, the reward $r_t$ received during the transition into $s_t$, and the previous instance $T_{t-1}$. A decision tree, the U-Tree, sorts a new instance $T_t$ based on its components and assigns it to a unique leaf of the tree. The distinctions associated with a leaf are determined by the root-to-leaf path. For each leaf-action pair $(L_j, a)$, the algorithm keeps an action value $Q(L_j, a)$ estimating the future discounted reward associated with being in $L_j$ and executing $a$. The utility of a leaf is denoted $U(L_j) = \max_a Q(L_j, a)$. The algorithm also keeps a model consisting of estimated transition probabilities $\Pr(L_k | L_j, a)$ and expected immediate rewards $R(L_j, a)$ computed from the transition instances. The model is used in performing one sweep of value iteration after the execution of each action, modifying the values of all leaf-action pairs $(L_j, a)$:
$$Q(L_j, a) \leftarrow R(L_j, a) + \gamma \sum_{L_k} \Pr(L_k | L_j, a)\, U(L_k).$$
One can use other reinforcement learning algorithms to update the action values, such as Q-learning or prioritized sweeping. The U-Tree algorithm periodically adds new distinctions to the tree in the form of temporary nodes, called fringe nodes, and performs statistical tests to see whether the added distinctions increase the predictive power of the U-Tree. Each distinction is based on (1) a perceptual dimension, which is either an observation or a previous action, and (2) a history index, indicating how far back in the current history the dimension will be examined. Each leaf of the tree is extended with a subtree of a fixed depth, $z$, constructed from permutations of all distinctions not already on the path to the leaf. The instances associated with the leaf are distributed to the leaves of the added subtree (the fringe nodes) according to the corresponding distinctions.
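The value-iteration sweep over the leaves can be sketched as follows. Plain dictionaries stand in for the tree's estimated model here, and the names are ours, not McCallum's:

```python
def sweep(Q, R, P, leaves, actions, gamma=0.9):
    """One value-iteration sweep over U-Tree leaves (a sketch).

    Q[(L, a)]: action values; R[(L, a)]: expected immediate reward;
    P[(L, a)]: dict mapping successor leaf L' to the estimated Pr(L'|L, a).
    """
    # Leaf utilities U(L) = max_a Q(L, a), computed from the current values.
    U = {L: max(Q[(L, a)] for a in actions) for L in leaves}
    for L in leaves:
        for a in actions:
            Q[(L, a)] = R[(L, a)] + gamma * sum(
                p * U[Lk] for Lk, p in P[(L, a)].items())
    return Q
```

Running one such sweep after every action keeps the leaf values consistent with the current tree and model, which is what allows the KS test below to compare meaningful reward distributions.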
A statistical test, the Kolmogorov-Smirnov (KS) test, compares the distribution of future discounted reward of the leaf node's policy action with that of a fringe node's policy action. The distribution of future discounted reward associated with a node $L_j$ and its policy action $a = \arg\max_a Q(L_j, a)$ is composed of the estimated future discounted rewards of individual instances $T_t \in T(L_j, a)$ given by:
$$V(T_t) = r_{t+1} + \gamma \sum_{L_k} \Pr(L_k | L_j, a)\, U(L_k).$$
The KS test outputs a statistical difference $d_{L_j, L_k} \in [0,1]$ between the distributions of two nodes $L_j$ and $L_k$. The U-Tree algorithm retains the subtree of distinctions $i$ at a leaf $L_j$ if the sum of the KS statistical differences over the fringe nodes $F(L_j, i)$ of the subtree is (1) larger than the sum of the KS differences of all other subtrees, and (2) exceeds some threshold $\delta$. That is, the tree is extended from leaf $L_j$ with a subtree $i$ of new distinctions if for all subtrees $m \neq i$:
$$\sum_{F(L_j, i)} d_{L_j, F(L_j, i)} > \sum_{F(L_j, m)} d_{L_j, F(L_j, m)} \qquad \text{and} \qquad \sum_{F(L_j, i)} d_{L_j, F(L_j, i)} > \delta.$$
Whenever the tree is extended, the action values of the previous leaf node are passed on to the new leaf nodes. One can restrict the number of distinctions an agent can make at any one time by imposing a limit on the depth of the U-Tree. The length of the history the algorithm needs to retain depends only on the tree size and not on the size of the overall state set. Consequently, the algorithm has the potential to scale well to large tasks. In previous experiments, the U-Tree algorithm was able to learn a compact state representation together with a satisfactory policy in a complex driving task [4]. A version of the U-Tree algorithm suitable for continuous state spaces has also been developed and successfully used in robot soccer [10].

3 Adapting the U-Tree algorithm for options

We now turn to the issue of adapting the U-Tree algorithm for use with options and hierarchical learning architectures.
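The two-sample KS statistic used in the fringe test is simple to state; the self-contained sketch below is our own implementation of that criterion, not McCallum's code:

```python
def ks_statistic(xs, ys):
    """Two-sample Kolmogorov-Smirnov distance: sup_v |F_xs(v) - F_ys(v)|."""
    grid = sorted(set(xs) | set(ys))
    def cdf(sample, v):
        return sum(1 for s in sample if s <= v) / len(sample)
    return max(abs(cdf(xs, v) - cdf(ys, v)) for v in grid)

def keep_subtree(leaf_values, fringe_value_lists, delta=1.0):
    """Criterion (2) of the text: accept a candidate split when the summed
    KS differences between the leaf's reward sample and each fringe node's
    reward sample exceed the threshold delta."""
    total = sum(ks_statistic(leaf_values, f) for f in fringe_value_lists)
    return total > delta
```

Criterion (1), choosing the best of several candidate subtrees, is then just an argmax over the same summed KS totals.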
Given a finite Markov decision process with state set $S$, an option $o = \langle I, \pi, \beta\rangle$ consists of a set $I \subseteq S$ of states from which the option can be initiated, a closed-loop policy $\pi$ for the choice of actions, and a termination condition $\beta$ which, for each state, gives the probability that the option will terminate when that state is reached. Primitive actions generalize to options that always terminate after one time step. It is easy to define hierarchies of options in which the policy of an option can select other options. A local reward function can be associated with an option to facilitate learning the option's policy. What makes the U-Tree algorithm so suitable for performing option-specific state abstraction is that a U-Tree simultaneously defines a state representation and a policy over this representation. With a separate U-Tree assigned to each option, the algorithm is able to perform state abstraction separately for each option while modifying its policy. Because options at different levels of a hierarchy operate on different time scales, their transition instances must take different forms. To make our scheme work, we need to add a notion of temporal abstraction to the definition of a transition instance:

Definition: A transition instance of an option $o$ has the form $T_t^o = \langle T_{t-k}^o, o_{t-k}, R_t, s_t\rangle$, where $s_t$ is the observation vector at time step $t$, $o_{t-k}$ is the option previously executed by option $o$, terminating at time $t$ and with a duration $k$, $R_t = \sum_{i=1}^{k} \gamma^{i-1} r_{t-k+i}$ is the discounted sum of rewards received during the execution of $o_{t-k}$, and $T_{t-k}^o$ is the previous instance.

Since options at one level in a hierarchy are executed one at a time, they will each experience a different sequence of transition instances. For the U-Tree algorithm to work under these conditions, the U-Tree of each option has to keep its own history of instances and base distinctions on these instances alone. The U-Tree algorithm was developed for infinite-horizon tasks.
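The discounted sum $R_t$ in the definition above can be sketched directly, assuming the per-step rewards collected while the lower-level option ran are available as a list:

```python
def option_return(rewards, gamma):
    """R_t = sum_{i=1}^{k} gamma^(i-1) r_{t-k+i}: the reward accumulated
    over the k primitive steps of an option's execution, discounted from
    the step at which the option was initiated."""
    return sum(gamma ** i * r for i, r in enumerate(rewards))
```

This single scalar is what the higher-level U-Tree sees in place of the k individual rewards, which is precisely the temporal abstraction the definition introduces.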
Because an option terminates and may not be executed again for some time, its associated history will be made up of finite segments corresponding to separate executions of the option. The first transition instance recorded during an execution is independent of the last instance recorded during a previous execution. Consequently, we do not allow updates across segments. With these modifications, the U-Tree algorithm can be applied to hierarchical learning with options.

3.1 Intra-option learning

When several options operate in the same parts of the state space and choose from among the same actions, it is possible to learn something about one option from the behavior generated by the execution of other options. In a process called intra-option learning [8], the action values of one option are updated based on actions executed in another, associated option. The update only occurs if the action executed in the latter has a non-zero probability of being executed in the former. Similarly, we can base distinctions in the U-Tree associated with one option on transition instances recorded during the execution of another option. We do this by adding instances recorded during the execution of one option to the history of each associated option. By associating each instance with a vector of leaves, one for the U-Tree of each option, this approach does not require additional memory for keeping multiple copies of an instance. For the scheme to work, we introduce a vector of rewards $R_t = \{R_t^{o'}\}$ in an instance $T_t^o$, where $R_t^{o'}$ is the discounted sum of local rewards for each option $o'$ associated with $o_{t-k}$.

Figure 1: The Taxi task.

4 Experiments

We tested our version of the U-Tree algorithm on the Taxi task [1] (Figure 1), in which an agent, the taxi, moves around on a grid. The taxi is assigned the task of delivering passengers from their locations to their destinations, both chosen at random from the set of pick-up/drop-off sites $P = \{1, 2, 3, 4\}$.
The taxi agent's observation vector s = (x, y, i, d) is composed of the (x, y)-position of the taxi, the location i ∈ P ∪ {taxi} of the current passenger, and this passenger's destination d ∈ P. The actions available to the taxi are Pick-up, Drop-off, and Move(m), m ∈ {N, E, S, W}, the four cardinal directions. When a passenger is delivered, a new passenger appears at a random pick-up site. The rewards provided to the taxi are: +19 for delivering the passenger, −11 for an illegal Pick-up or Drop-off, and −1 for any other action (including moving into walls). To aid the taxi agent we introduced four options: Navigate(p) = ⟨I^p, π^p, β^p⟩, p ∈ P, where, letting S denote the set of all observation vectors and G^p = {(x, y, i, d) ∈ S | (x, y) is the location of p}: I^p = S − G^p; π^p is the policy for getting to G^p that the agent is trying to learn; and β^p(s) = 1 if s ∈ G^p, 0 otherwise. We further introduced a local reward R^p for Navigate(p), identical to the global reward provided to the agent with the exception that R^p = 9 for reaching G^p. In our application of the U-Tree algorithm to the Taxi problem, the history of each option had a maximum length of 6,000 instances. If this length was exceeded, the oldest instance in the history was discarded. Expanding the tree was only considered if there were more than 3,000 instances in the history. We set the expansion depth z to 1 and the expansion threshold θ to 1.0, except when no distinctions were present in the tree, in which case θ = 0.3. The algorithm used this lower threshold when the agent was not able to make any distinctions because it is difficult in this case to accumulate enough evidence of statistical difference to accept a distinction. Since the U-Tree algorithm does not go back and reconsider distinctions in the tree, it is important to reduce the number of incorrect distinctions due to sparse statistical evidence.
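The Navigate(p) options can be written down directly from this definition. The sketch below is our own: the site coordinates in `SITES` and the helper name `make_navigate` are hypothetical choices for illustration, not taken from the paper.

```python
# Hypothetical grid coordinates for the four pick-up/drop-off sites.
SITES = {1: (0, 0), 2: (0, 4), 3: (4, 0), 4: (4, 3)}

def make_navigate(p):
    """Build Navigate(p) = <I^p, pi^p, beta^p>: the option can be initiated
    anywhere outside G^p and terminates with probability 1 inside G^p."""
    goal = SITES[p]

    def initiation(s):            # I^p = S - G^p
        return (s[0], s[1]) != goal

    def beta(s):                  # beta^p(s) = 1 if s in G^p, else 0
        return 1.0 if (s[0], s[1]) == goal else 0.0

    policy = {}                   # pi^p: the learned mapping state -> action
    return initiation, policy, beta
```

The policy component starts empty here because π^p is exactly what the U-Tree learns.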
Therefore, our implementation only compared two distributions of future discounted reward between leaves if each contained more than 15 instances. Because the Taxi task is fully observable, we set the history index of the U-Tree algorithm to zero. For exploration, the system used an ε-softmax strategy, which picks a random action with probability ε and performs softmax otherwise. Normally, tuning the softmax temperature τ provides a good balance between exploration and exploitation, but as the U-Tree evolves, a new value of τ may improve performance. To avoid re-tuning τ, the ε-random part ensured that all actions were executed regularly. We designed one set of experiments to examine the efficiency of intra-option learning. We randomly selected one of the options Navigate(p) to execute, and randomly selected a new position for the taxi whenever it reached p, ignoring the issue of delivering a passenger. At the beginning of each learning run, we assigned a U-Tree containing a single node to each option. In one set of runs, the algorithm used intra-option learning, and in another set, it used regular learning in which the U-Trees of different options did not share any instances. In a second set of experiments, the policies of the options and the overall Taxi task were learned in parallel. We allowed the policy of the overall task to choose between the options Navigate(p), and the actions Pick-up and Drop-off. The reward provided for the overall task was the sum of the global reward and the local reward of the option currently being executed (cf. Digney [3]). When a passenger was delivered, a new taxi position was selected randomly and a new passenger appeared at a randomly selected pick-up site.

5 Results

The results from the intra-option learning experiments are shown in Figure 2. The graphs for intra-option learning (solid) and regular learning (broken) are averaged over 5 independent runs. We tuned τ and ε for each set of learning runs to give maximum performance.
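The ε-softmax strategy described above can be sketched as follows. This is a generic implementation under our own naming; the paper gives no code, and the Q-values in the example are arbitrary.

```python
import math
import random

def epsilon_softmax(q_values, tau, epsilon, rng=random):
    """With probability epsilon pick a uniformly random action; otherwise
    sample an action with probability proportional to exp(Q(a) / tau)."""
    actions = list(q_values)
    if rng.random() < epsilon:
        return rng.choice(actions)
    # Subtract the max Q-value for numerical stability before exponentiating.
    m = max(q_values.values())
    weights = [math.exp((q_values[a] - m) / tau) for a in actions]
    total = sum(weights)
    r = rng.random() * total
    for a, w in zip(actions, weights):
        r -= w
        if r <= 0:
            return a
    return actions[-1]

q = {"N": 1.0, "E": 0.0, "S": 0.0, "W": 0.0}
action = epsilon_softmax(q, tau=0.1, epsilon=0.1)
```

A low temperature τ makes the softmax part nearly greedy, while the ε-random part keeps every action executed regularly, as the text notes.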
At intervals of 500 time steps, the U-Trees of the options were saved and evaluated separately. The evaluation consisted of fixing a target, repeatedly navigating to that target for 25,000 time steps, randomly repositioning the taxi every time the target was reached, repeating for all targets, and adding the rewards. From these results, we conclude that (1) intra-option learning converges faster than regular learning, and (2) intra-option learning achieves a higher level of performance. Faster convergence is due to the fact that the histories associated with the options fill up more quickly during intra-option learning. Higher performance is achieved because the amount of evidence is larger. The target of an option is only reached once during each execution of the option, whereas it might be reached several times during the execution of another option.

[Figure 2: Comparison between intra-option and regular learning]

In the second set of experiments, we performed 10 learning runs, each with a duration of 200,000 time steps. Figure 3 shows an example of the resulting U-Trees. Nodes that represent distinctions are drawn as circles, and leaf nodes are shown as squares or, in most cases, omitted. In the figure, o denotes a distinction over the previously executed option (in the order Navigate(p), Pick-up and Drop-off), and other letters denote a distinction over the corresponding observation. Note that the U-Tree of Navigate(1) did not make a distinction between x-positions in the lower part of the grid. In some places, for example in Navigate(4), the right branch of x, the algorithm made a suboptimal distinction. A distinction over y would have given a smaller number of leaves and would have been sufficient to represent an optimal policy. The U-Trees in the figure contain a total of 188 leaf nodes. Across 10 runs, the number of leaf nodes varied from 154 to 259, with an average of 189.
Some leaf nodes were never visited, making the actual number of states even smaller. This is comparable to the results of Dietterich [2], who hand-coded a representation containing 106 states. Compared to the 500 distinct states in a flat representation of the task, or the 2,500 distinct states that the five policies would require without abstraction, our result is a significant improvement. Certainly, the memory required to store histories should also be taken into account. However, we believe that the memory savings due to option-specific state abstraction in larger tasks will significantly outweigh the memory requirement for U-Trees.

6 Conclusion

We have shown that the U-Tree algorithm can be used with options in a hierarchical learning system. Our results suggest that the automated option-specific state abstraction performed by the algorithm is an attractive approach to making hierarchical learning systems more effective. Although our testbed was small, we believe this is an important first step toward automated state abstraction in hierarchies. We also incorporated intra-option learning into the U-Tree algorithm, a method that allows a learning agent to extract more information from the training data. Results show that intra-option learning can significantly improve the performance of a learning agent performing option-specific state abstraction. Although our main motivation for developing a hierarchical version of the U-Tree algorithm was automating state abstraction, the new definition of a transition instance enables history to be structured hierarchically, something that is useful when learning to solve problems in partially observable domains. Future work will examine the performance of option-specific state abstraction using the U-Tree algorithm in larger, more realistic tasks.

[Figure 3: U-Trees for different policies]

We also plan to develop a version of the U-Tree algorithm that goes back in the tree and reconsiders distinctions.
This has the potential to improve the performance of the algorithm by correcting nodes for which incorrect distinctions were made.

Acknowledgments

The authors would like to thank Tom Dietterich for providing code for the Taxi task, Andrew McCallum for valuable correspondence regarding the U-Tree algorithm, and Ted Perkins for reading and providing helpful comments on the paper. This work was funded by the National Science Foundation under Grant No. ECS-9980062. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

References

[1] Dietterich, T. (2000). Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research 13:227-303.
[2] Dietterich, T. (2000). State abstraction in MAXQ hierarchical reinforcement learning. In S. A. Solla, T. K. Leen, and K.-R. Muller (eds.), Advances in Neural Information Processing Systems 12, pp. 994-1000. Cambridge, MA: MIT Press.
[3] Digney, B. (1996). Emergent hierarchical control structures: Learning reactive/hierarchical relationships in reinforcement environments. In P. Maes and M. Mataric (eds.), From Animals to Animats 4. Cambridge, MA: MIT Press.
[4] McCallum, A. (1995). Reinforcement Learning with Selective Perception and Hidden State. PhD thesis, Computer Science Department, University of Rochester.
[5] Parr, R., and Russell, S. (1998). Reinforcement learning with hierarchies of machines. In M. I. Jordan, M. J. Kearns, and S. A. Solla (eds.), Advances in Neural Information Processing Systems 10, pp. 1043-1049. Cambridge, MA: MIT Press.
[6] Precup, D., and Sutton, R. (1998). Multi-time models for temporally abstract planning. In M. I. Jordan, M. J. Kearns, and S. A. Solla (eds.), Advances in Neural Information Processing Systems 10, pp. 1050-1056. Cambridge, MA: MIT Press.
[7] Singh, S. (1992). Reinforcement learning with a hierarchy of abstract models.
In Proc. of the 10th National Conf. on Artificial Intelligence, pp. 202-207. Menlo Park, CA: AAAI Press/MIT Press.
[8] Sutton, R., Precup, D., and Singh, S. (1998). Intra-option learning about temporally abstract actions. In Proc. of the 15th Intl. Conf. on Machine Learning, ICML'98, pp. 556-564. Morgan Kaufmann.
[9] Sutton, R., Precup, D., and Singh, S. (1999). Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence 112:181-211.
[10] Uther, W., and Veloso, M. (1997). Generalizing adversarial reinforcement learning. AAAI Fall Symposium on Model Directed Autonomous Systems.
Structure learning in human causal induction

Joshua B. Tenenbaum & Thomas L. Griffiths
Department of Psychology
Stanford University, Stanford, CA 94305
{jbt,gruffydd}@psych.stanford.edu

Abstract

We use graphical models to explore the question of how people learn simple causal relationships from data. The two leading psychological theories can both be seen as estimating the parameters of a fixed graph. We argue that a complete account of causal induction should also consider how people learn the underlying causal graph structure, and we propose to model this inductive process as a Bayesian inference. Our argument is supported through the discussion of three data sets.

1 Introduction

Causality plays a central role in human mental life. Our behavior depends upon our understanding of the causal structure of our environment, and we are remarkably good at inferring causation from mere observation. Constructing formal models of causal induction is currently a major focus of attention in computer science [7], psychology [3, 6], and philosophy [5]. This paper attempts to connect these literatures by framing the debate between two major psychological theories in the computational language of graphical models. We show that existing theories equate human causal induction with maximum likelihood parameter estimation on a fixed graphical structure, and we argue that to fully account for human behavioral data, we must also postulate that people make Bayesian inferences about the underlying causal graph structure itself. Psychological models of causal induction address the question of how people learn associations between causes and effects, such as P(C→E), the probability that some event C causes outcome E. This question might seem trivial at first; why isn't P(C→E) simply P(e+|c+), the conditional probability that E occurs (E = e+, as opposed to e−) given that C occurs? But consider the following scenarios.
Three case studies have been done to evaluate the probability that certain chemicals, when injected into rats, cause certain genes to be expressed. In case 1, levels of gene 1 were measured in 100 rats injected with chemical 1, as well as in 100 uninjected rats; cases 2 and 3 were conducted likewise but with different chemicals and genes. In case 1, 40 out of 100 injected rats were found to have expressed the gene, while 0 out of 100 uninjected rats expressed the gene. We will denote these results as {40/100, 0/100}. Case 2 produced the results {7/100, 0/100}, while case 3 yielded {53/100, 46/100}. For each case, we would like to know the probability that the chemical causes the gene to be expressed, P(C→E), where C denotes the chemical and E denotes gene expression. People typically rate P(C→E) highest for case 1, followed by case 2 and then case 3. In an experiment described below, these cases received mean ratings (on a 0-20 scale) of 14.9 ± .8, 8.6 ± .9, and 4.9 ± .7, respectively. Clearly P(C→E) ≠ P(e+|c+), because case 3 has the highest value of P(e+|c+) but receives the lowest rating for P(C→E). The two leading psychological models of causal induction elaborate upon this basis in attempting to specify P(C→E). The ΔP model [6] claims that people estimate P(C→E) according to

ΔP = P(e+|c+) − P(e+|c−).  (1)

(We restrict our attention here to facilitatory causes, in which case ΔP is always between 0 and 1.) Equation 1 captures the intuition that C is perceived to cause E to the extent that C's occurrence increases the likelihood of observing E. Recently, Cheng [3] has identified several shortcomings of ΔP and proposed that P(C→E) instead corresponds to causal power, the probability that C produces E in the absence of all other causes. Formally, the power model can be expressed as:

power = ΔP / (1 − P(e+|c−)).  (2)

There are a variety of normative arguments in favor of either of these models [3, 7].
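Applying these two formulas to the three rat scenarios above makes the contrast concrete. This is a small numerical sketch of our own, not code from the paper:

```python
def delta_p(p_e_c, p_e_nc):
    """Equation 1: Delta-P = P(e+|c+) - P(e+|c-)."""
    return p_e_c - p_e_nc

def causal_power(p_e_c, p_e_nc):
    """Equation 2: power = Delta-P / (1 - P(e+|c-))."""
    return delta_p(p_e_c, p_e_nc) / (1.0 - p_e_nc)

# Cases 1-3: (P(e+|c+), P(e+|c-)) from {40/100, 0/100}, {7/100, 0/100},
# and {53/100, 46/100}.
cases = [(0.40, 0.00), (0.07, 0.00), (0.53, 0.46)]
dps = [delta_p(*c) for c in cases]           # 0.40, 0.07, 0.07
powers = [causal_power(*c) for c in cases]   # 0.40, 0.07, ~0.13
```

Note that ΔP ties cases 2 and 3 while power ranks case 3 above case 2, the discrepancy with human ratings discussed below.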
Empirically, however, neither model is fully adequate to explain human causal induction. We will present ample evidence for this claim below, but for now, the basic problem can be illustrated with the three scenarios above. While people rate P(C→E) higher for case 2, {7/100, 0/100}, than for case 3, {53/100, 46/100}, ΔP rates them equally and the power model ranks case 3 over case 2. To understand this discrepancy, we have to distinguish between two possible senses of P(C→E): "the probability that C causes E (on any given trial when C is present)" versus "the probability that C is a cause of E (in general, as opposed to being causally independent of E)". Our claim is that the ΔP and power models concern only the former sense, while people's intuitions about P(C→E) are often concerned with the latter. In our example, while the effect of C on any given trial in case 3 may be equal to (according to ΔP) or stronger than (according to power) its effect in case 2, the general pattern of results seems more likely in case 2 than in case 3 to be due to a genuine causal influence, as opposed to a spurious correlation between random samples of two independent variables. In the following section, we formalize this distinction in terms of parameter estimation versus structure learning on a graphical model. Section 3 then compares two variants of our structure learning model with the parameter estimation models (ΔP and power) in light of data from three experiments on human causal induction.

2 Graphical models of causal induction

The language of causal graphical models provides a useful framework for thinking about people's causal intuitions [5, 7]. All the induction models we consider here can be viewed as computations on a simple directed graph (Graph_1 in Figure 1). The effect node E is the child of two binary-valued parent nodes: C, the putative cause, and B, a constant background.
Let X = (C_1, E_1), ..., (C_N, E_N) denote a sequence of N trials in which C and E are each observed to be present or absent; B is assumed to be present on all trials. (To keep notation concise in this section, we use 1 or 0 in addition to + or − to denote presence or absence of an event, e.g. c_i = 1 if the cause is present on the ith trial.) Each parent node is associated with a parameter, w_B or w_C, that defines the strength of its effect on E. In the ΔP model, the probability of E occurring is a linear function of C:

Q(e+|c; w_B, w_C) = w_B + w_C · c.  (3)

(We use Q to denote model probabilities and P for empirical probabilities in the sample X.) In the causal power model, as first shown by Glymour [5], E is a noisy-OR gate:

Q(e+|c; w_B, w_C) = 1 − (1 − w_B)(1 − w_C)^c.  (4)

2.1 Parameter inferences: ΔP and causal power

In this framework, both the ΔP model's and the power model's predictions for P(C→E) can be seen as maximum likelihood estimates of the causal strength parameter w_C in Graph_1, but under different parameterizations. For either model, the log-likelihood of the data is given by

ℒ(X|w_B, w_C) = Σ_{i=1}^{N} log[Q(e+|c_i)^{e_i} (1 − Q(e+|c_i))^{1−e_i}]  (5)
             = Σ_{i=1}^{N} [e_i log Q(e+|c_i) + (1 − e_i) log(1 − Q(e+|c_i))],  (6)

where we have suppressed the dependence of Q(e+|c_i) on w_B, w_C. Breaking this sum into four parts, one for each possible combination of {e+, e−} and {c+, c−} that could be observed, ℒ(X|w_B, w_C) can be written as

N P(c+) [P(e+|c+) log Q(e+|c+) + (1 − P(e+|c+)) log(1 − Q(e+|c+))]
+ N P(c−) [P(e+|c−) log Q(e+|c−) + (1 − P(e+|c−)) log(1 − Q(e+|c−))].  (7)

By the information inequality [4], Equation 7 is maximized whenever w_B and w_C can be chosen to make the model probabilities equal to the empirical probabilities:

Q(e+|c+; w_B, w_C) = P(e+|c+),  (8)
Q(e+|c−; w_B, w_C) = P(e+|c−).  (9)

To show that the ΔP model's predictions for P(C→E) correspond to maximum likelihood estimates of w_C under a linear parameterization of Graph_1, we identify w_C in Equation 3 with ΔP (Equation 1), and w_B with P(e+|c−).
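Before completing the derivation analytically, a short numerical sketch (our own) can verify Equations 8-9 directly: with w_B = P(e+|c−) and w_C set to ΔP or to causal power, the linear and noisy-OR parameterizations each reproduce the empirical probabilities.

```python
def q_linear(c, w_b, w_c):
    """Equation 3: Q(e+|c) = w_B + w_C * c (linear parameterization)."""
    return w_b + w_c * c

def q_noisy_or(c, w_b, w_c):
    """Equation 4: Q(e+|c) = 1 - (1 - w_B)(1 - w_C)^c (noisy-OR gate)."""
    return 1.0 - (1.0 - w_b) * (1.0 - w_c) ** c

p_pos, p_neg = 0.53, 0.46        # empirical P(e+|c+), P(e+|c-): case 3 above
dp = p_pos - p_neg               # Delta-P (Equation 1)
pw = dp / (1.0 - p_neg)          # causal power (Equation 2)

# With w_B = P(e+|c-), both models match the data, as Equations 8-9 require:
assert abs(q_linear(1, p_neg, dp) - p_pos) < 1e-12
assert abs(q_noisy_or(1, p_neg, pw) - p_pos) < 1e-12
assert abs(q_linear(0, p_neg, dp) - p_neg) < 1e-12
assert abs(q_noisy_or(0, p_neg, pw) - p_neg) < 1e-12
```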
Equation 3 then reduces to P(e+|c+) for the case c = c+ (i.e., c = 1) and to P(e+|c−) for the case c = c− (i.e., c = 0), thus satisfying the sufficient conditions in Equations 8-9 for w_B and w_C to be maximum likelihood estimates. To show that the causal power model's predictions for P(C→E) correspond to maximum likelihood estimates of w_C under a noisy-OR parameterization, we follow the analogous procedure: identify w_C in Equation 4 with power (Equation 2), and w_B with P(e+|c−). Then Equation 4 reduces to P(e+|c+) for c = c+ and to P(e+|c−) for c = c−, again satisfying the conditions for w_B and w_C to be maximum likelihood estimates.

2.2 Structural inferences: causal support and χ²

The central claim of this paper is that people's judgments of P(C→E) reflect something other than estimates of causal strength parameters, the quantities that we have just shown to be computed by ΔP and the power model. Rather, people's judgments may correspond to inferences about the underlying causal structure, such as the probability that C is a direct cause of E. In terms of the graphical model in Figure 1, human causal induction may be focused on trying to distinguish between Graph_1, in which C is a parent of E, and the "null hypothesis" of Graph_0, in which C is not. This structural inference can be formalized as a Bayesian decision. Let h_C be a binary variable indicating whether or not the link C → E exists in the true causal model responsible for generating our observations. We will assume a noisy-OR gate, and thus our model is closely related to causal power. However, we propose to model human estimates of P(C→E) as causal support, the log posterior odds in favor of Graph_1 (h_C = 1) over Graph_0 (h_C = 0):

support = log [P(h_C = 1|X) / P(h_C = 0|X)].  (10)

Via Bayes' rule, we can express P(h_C = 1|X) in terms of the marginal likelihood or evidence, P(X|h_C = 1), and the prior probability that C is a cause of E, P(h_C = 1):

P(h_C = 1|X) ∝ P(X|h_C = 1) P(h_C = 1).  (11)

For now, we take P(h_C = 1) = P(h_C = 0) = 1/2. Computing the evidence requires integrating the likelihood P(X|w_B, w_C) over all possible values of the strength parameters:

P(X|h_C = 1) = ∫∫ P(X|w_B, w_C) p(w_B, w_C|h_C = 1) dw_B dw_C.  (12)

We take p(w_B, w_C|h_C = 1) to be a uniform density, and we note that P(X|w_B, w_C) is simply the exponential of ℒ(X|w_B, w_C) as defined in Equation 5. P(X|h_C = 0), the marginal likelihood for Graph_0, is computed similarly, but with the prior p(w_B, w_C|h_C = 1) in Equation 12 replaced by p(w_B|h_C = 0)δ(w_C). We again take p(w_B|h_C = 0) to be uniform. The Dirac delta distribution on w_C = 0 enforces the restriction that the C → E link is absent. By making these assumptions, we eliminate the need for any free numerical parameters in our probabilistic model (in contrast to a similar Bayesian account proposed by Anderson [1]). Because causal support depends on the full likelihood functions for both Graph_1 and Graph_0, we may expect the support model to be modulated by causal power, which is based strictly on the maximum likelihood estimate for Graph_1, but only in interaction with other factors that determine how much of the posterior probability mass for w_C in Graph_1 is bounded away from zero (where it is pinned in Graph_0). In general, evaluating causal support may require fairly involved computations, but in the limit of large N and weak causal strength w_C, it can be approximated by the familiar χ² statistic for independence,

N Σ_{c,e} (P(c, e) − P₀(c, e))² / P₀(c, e).

Here P₀(c, e) = P(c)P(e) is the factorized approximation to P(c, e), which assumes C and E to be independent (as they are in Graph_0).
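Both causal support and its χ² approximation can be computed directly for a 2×2 contingency table. The grid quadrature below is our own numerical sketch of Equations 10-12 under the stated uniform priors, not the authors' implementation:

```python
import math

def log_lik(n, w_b, w_c):
    """Equation 5 for a table of counts n[(c, e)], noisy-OR model (Eq. 4)."""
    ll = 0.0
    for c in (0, 1):
        q = 1.0 - (1.0 - w_b) * (1.0 - w_c) ** c      # Q(e+|c)
        for e, p in ((1, q), (0, 1.0 - q)):
            count = n.get((c, e), 0)
            if count:
                ll += count * math.log(max(p, 1e-300))
    return ll

def support(n, grid=101):
    """Equation 10, with the evidence (Eq. 12) computed by grid quadrature
    over uniform priors on w_B (and on w_C under Graph_1; w_C = 0 in Graph_0)."""
    ws = [(i + 0.5) / grid for i in range(grid)]
    ev1 = sum(math.exp(log_lik(n, wb, wc)) for wb in ws for wc in ws) / grid ** 2
    ev0 = sum(math.exp(log_lik(n, wb, 0.0)) for wb in ws) / grid
    return math.log(ev1 / ev0)

def chi2(n):
    """N * sum_{c,e} (P(c,e) - P0(c,e))^2 / P0(c,e), with P0(c,e) = P(c)P(e)."""
    N = sum(n.values())
    total = 0.0
    for c in (0, 1):
        for e in (0, 1):
            p = n.get((c, e), 0) / N
            p_c = sum(n.get((c, x), 0) for x in (0, 1)) / N
            p_e = sum(n.get((x, e), 0) for x in (0, 1)) / N
            total += (p - p_c * p_e) ** 2 / (p_c * p_e)
    return N * total

# Case 1 {40/100, 0/100} and case 3 {53/100, 46/100} from Section 1:
case1 = {(1, 1): 40, (1, 0): 60, (0, 1): 0, (0, 0): 100}
case3 = {(1, 1): 53, (1, 0): 47, (0, 1): 46, (0, 0): 54}
```

On these two tables, both statistics rank case 1 well above case 3, matching the human ratings reported in Section 1.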
3 Comparison with experiments

In this section we examine the strengths and weaknesses of the two parameter inference models, ΔP and causal power, and the two structural inference models, causal support and χ², as accounts of data from three behavioral experiments, each designed to address different aspects of human causal induction. To compensate for possible nonlinearities in people's use of numerical rating scales on these tasks, all model predictions have been scaled by power-law transformations, f(x) = sign(x)|x|^γ, with γ chosen separately for each model and each data set to maximize their linear correlation. In the figures, predictions are expressed over the same range as the data, with minimum and maximum values aligned. Figure 2 presents data from a study by Buehner & Cheng [2], designed to contrast the predictions of ΔP and causal power. People judged P(C→E) for hypothetical medical studies much like the gene expression scenarios described above, seeing eight cases in which C occurred and eight in which C did not occur. Some trends in the data are clearly captured by the causal power model but not by ΔP, such as the monotonic decrease in P(C→E) from {1.00, 0.75} to {0.25, 0.00}, as ΔP stays constant but P(e+|c−) (and hence power) decreases (columns 6-9). Other trends are clearly captured by ΔP but not by the power model, like the monotonic increase in P(C→E) as P(e+|c+) stays constant at 1.0 but P(e+|c−) decreases, from {1.00, 1.00} to {1.00, 0.00} (columns 1, 6, 10, 13, 15). However, one of the most salient trends is captured by neither model: the decrease in P(C→E) as ΔP stays constant at 0 but P(e+|c−) decreases (columns 1-5). The causal support model predicts this decrease, as well as the other trends.
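The power-law scaling step can be sketched as follows; the grid search over γ is our own illustration of "chosen to maximize their linear correlation" (the paper does not describe its fitting procedure), and the toy data are invented.

```python
def power_transform(xs, gamma):
    """f(x) = sign(x) * |x|**gamma, applied elementwise."""
    return [(1.0 if x >= 0 else -1.0) * abs(x) ** gamma for x in xs]

def pearson(xs, ys):
    """Linear (Pearson) correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def best_gamma(preds, data):
    """Grid search for the exponent maximizing linear correlation with data."""
    gammas = [0.1 * k for k in range(1, 31)]
    return max(gammas, key=lambda g: pearson(power_transform(preds, g), data))

preds = [0.1 * k for k in range(1, 10)]      # toy model predictions
data = [p ** 0.5 for p in preds]             # toy ratings on a compressive scale
gamma = best_gamma(preds, data)              # recovers an exponent near 0.5
```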
The intuition behind the model's predictions for ΔP = 0 is that decreasing the base rate P(e+|c−) increases the opportunity to observe the cause's influence and thus increases the statistical force behind the inference that C does not cause E, given ΔP = 0. This effect is most obvious when P(e+|c+) = P(e+|c−) = 1, yielding a ceiling effect with no statistical leverage [3], but it also occurs to a lesser extent for P(e+|c) < 1. While χ² generally approximates the support model rather well, it also fails to explain the cases with P(e+|c+) = P(e+|c−), which always yield χ² = 0. The superior fit of the support model is reflected in its correlation with the data, giving R² = 0.95, while the power, ΔP, and χ² models gave R² values of 0.81, 0.82, and 0.82, respectively. Figure 3 shows results from an experiment conducted by Lober and Shanks [6], designed to explore the trend in Buehner and Cheng's experiment that was predicted by ΔP but not by the power model. Columns 4-7 replicated the monotonic increase in P(C→E) when P(e+|c+) remains constant at 1.0 but P(e+|c−) decreases, this time with 28 cases in which C occurred and 28 in which C did not occur. Columns 1-3 show a second situation in which the predictions of the power model are constant, but judgments of P(C→E) increase. Columns 8-10 feature three scenarios with equal ΔP, for which the causal power model predicts a decreasing trend. These effects were explored by presenting a total of 60 trials, rather than the 56 used in columns 4-7. For each of these trends the ΔP model outperforms the causal power model, with overall R² values of 0.95 and 0.35, respectively. However, it is important to note that the responses of the human subjects in columns 8-10 (contingencies {1.00, 0.50}, {0.80, 0.40}, {0.40, 0.00}) are not quite consistent with the predictions of ΔP: they show a slight V-shaped non-linearity, with P(C→E) judged to be smaller for {0.80, 0.40} than for either of the extreme cases.
This trend is predicted by the causal support model and its χ² approximation, however, which both give the slightly better R² of 0.99. Figure 4 shows data that we collected in a similar survey, aiming to explore this non-linear effect in greater depth. 35 students in an introductory psychology class completed the survey for partial course credit. They each provided a judgment of P(C→E) in 14 different medical scenarios, where information about P(e+|c+) and P(e+|c−) was provided in terms of how many mice from a sample of 100 expressed a particular gene. Columns 1-3, 5-7, and 9-11 show contingency structures designed to elicit V-shaped trends in P(C→E). Columns 4 and 8 give intermediate values, also consistent with the observed non-linearity. Column 14 attempted to explore the effects of manipulating sample size, with a contingency structure of {7/7, 93/193}. In each case, we observed the predicted non-linearity: in a set of situations with the same ΔP, the situations involving less extreme probabilities show reduced judgments of P(C→E). These non-linearities are not consistent with the ΔP model, but are predicted by both causal support and χ². ΔP actually achieves a correlation comparable to χ² (R² = 0.92 for both models) because the non-linear effects contribute only weakly to the total variance. The support model gives a slightly worse fit than χ², R² = 0.80, while the power model gives a poor account of the data, R² = 0.38.

4 Conclusions and future directions

In each of the studies above, the structural inference models based on causal support or χ² consistently outperformed the parameter estimation models, ΔP and causal power. While causal power and ΔP were each capable of capturing certain trends in the data, causal support was the only model capable of predicting all the trends. For the third data set, χ² provided a significantly better fit to the data than did causal support.
This finding merits future investigation in a study designed to tease apart χ² and causal support; in any case, due to the close relationship between the two models, this result does not undermine our claim that probabilistic structural inferences are central to human causal induction. One unique advantage of the Bayesian causal support model is its ability to draw inferences from very few observations. We have begun a line of experiments, inspired by Gopnik, Sobel & Glymour (submitted), to examine how adults revise their causal judgments when given only one or two observations, rather than the large samples used in the above studies. In one study, subjects were faced with a machine that would inform them whether a pencil placed upon it contained "superlead" or ordinary lead. Subjects were either given prior knowledge that superlead was rare or that it was common. They were then given two pencils, analogous to B and C in Figure 1, and asked to rate how likely these pencils were to have superlead, that is, to cause the detector to activate. Mean responses reflected the induced prior. Next, they were shown that the superlead detector responded when B and C were tested together, and their causal ratings of both B and C increased. Finally, they were shown that B set off the superlead detector on its own, and causal ratings of B increased to ceiling while ratings of C returned to their prior levels. This situation is exactly analogous to that explored in the medical tasks described above, and people were able to perform accurate causal inductions given only one trial of each type. Of the models we have considered, only Bayesian causal support can explain this behavior, by allowing the prior in Equation 11 to adapt depending on whether superlead is rare or common. We also hope to look at inferences about more complex causal structures, including those with hidden variables.
With just a single cause, causal support and χ² are highly correlated, but with more complex structures, the Bayesian computation of causal support becomes increasingly intractable while the χ² approximation becomes less accurate. Through experiments with more complex structures, we hope to discover where and how human causal induction strikes a balance between ponderous rationality and efficient heuristic. Finally, we should stress that despite the superior performance of the structural inference models here, in many situations estimating causal strength parameters is likely to be just as important as inferring causal structure. Our hope is that by using graphical models to relate and extend upon existing accounts of causal induction, we have provided a framework for exploring the interplay between the different kinds of judgments that people make.

References

[1] J. Anderson (1990). The Adaptive Character of Thought. Erlbaum.
[2] M. Buehner & P. Cheng (1997). Causal induction: The power PC theory versus the Rescorla-Wagner theory. In Proceedings of the 19th Annual Conference of the Cognitive Science Society.
[3] P. Cheng (1997). From covariation to causation: A causal power theory. Psychological Review 104, 367-405.
[4] T. Cover & J. Thomas (1991). Elements of Information Theory. Wiley.
[5] C. Glymour (1998). Learning causes: Psychological explanations of causal explanation. Minds and Machines 8, 39-60.
[6] K. Lober & D. Shanks (2000). Is causal induction based on causal power? Critique of Cheng (1997). Psychological Review 107, 195-212.
[7] J. Pearl (2000). Causality. Cambridge University Press.

[Figure 1: Different theories of human causal induction expressed as different operations on a simple graphical model. Graph_1 (h_C = 1) includes the link C → E; Graph_0 (h_C = 0) omits it. The table in the figure lists, for each model, the form of P(e|b, c) and the quantity identified with P(C→E): ΔP, linear, w_C; power, noisy-OR gate, w_C; support, noisy-OR gate, log[P(h_C = 1)/P(h_C = 0)].]
[Figure 1 caption, continued: The ΔP and power models correspond to maximum likelihood parameter estimates on a fixed graph (Graph_1), while the support model corresponds to a (Bayesian) inference about which graph is the true causal structure.]

[Figure 2: Computational models compared with the performance of human participants from Buehner and Cheng [2], Experiment 1B. Numbers along the top of the figure show stimulus contingencies.]

[Figure 3: Computational models compared with the performance of human participants from Lober and Shanks [6], Experiments 4-6.]

[Figure 4: Computational models compared with the performance of human participants on a set of stimuli designed to elicit the non-monotonic trends shown in the data of Lober and Shanks [6].]
Color Opponency Constitutes a Sparse Representation for the Chromatic Structure of Natural Scenes

Te-Won Lee, Thomas Wachtler, and Terrence Sejnowski
Institute for Neural Computation, University of California, San Diego & Computational Neurobiology Laboratory, The Salk Institute
10010 N. Torrey Pines Road, La Jolla, California 92037, USA
{tewon,thomas,terry}@salk.edu
Electronic version available at www.cnl.salk.edu/~tewon.

Abstract The human visual system encodes the chromatic signals conveyed by the three types of retinal cone photoreceptors in an opponent fashion. This color opponency has been shown to constitute an efficient encoding by spectral decorrelation of the receptor signals. We analyze the spatial and chromatic structure of natural scenes by decomposing the spectral images into a set of linear basis functions such that they constitute a representation with minimal redundancy. Independent component analysis finds the basis functions that transform the spatiochromatic data such that the outputs (activations) are statistically as independent as possible, i.e., least redundant. The resulting basis functions show strong opponency along an achromatic direction (luminance edges), along a blue-yellow direction, and along a red-blue direction. Furthermore, the resulting activations have very sparse distributions, suggesting that the use of color opponency in the human visual system achieves a highly efficient representation of colors. Our findings suggest that color opponency is a result of the properties of natural spectra and not solely a consequence of the overlapping cone spectral sensitivities.

1 Statistical structure of natural scenes

Efficient encoding of visual sensory information is an important task for information processing systems, and its study may provide insights into coding principles of biological visual systems. An important goal of sensory information processing
is to transform the input signals such that the redundancy between the inputs is reduced. In natural scenes, the image intensity is highly predictable from neighboring measurements, and an efficient representation preserves the information while the neuronal output is minimized. Recently, several methods have been proposed for finding efficient codes for achromatic images of natural scenes [1, 2, 3, 4]. While luminance dominates the structure of the visual world, color vision provides important additional information about our environment. Therefore, we are interested in efficient, i.e., redundancy-reducing, representations for the chromatic structure of natural scenes.

2 Learning efficient representations for chromatic images

Our goal was to find efficient representations of the chromatic sensory information such that its spatial and chromatic redundancy is reduced significantly. The method we used for finding statistically efficient representations is independent component analysis (ICA). ICA is a way of finding a linear non-orthogonal co-ordinate system in multivariate data that minimizes mutual information among the axial projections of the data. The directions of the axes of this co-ordinate system (basis functions) are determined by both second- and higher-order statistics of the original data, in contrast to Principal Component Analysis (PCA), which uses only second-order statistics and has orthogonal basis functions. The goal of ICA is to perform a linear transform which makes the resulting source outputs as statistically independent from each other as possible [5]. ICA assumes an unknown source vector s with mutually independent components s_i. A small patch of the observed image is stretched into a vector x that can be represented as a linear combination of source components s_i such that

x = As, (1)

where A is a square matrix whose columns are the basis functions.
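As a toy illustration of this generative model (separate from the spectral-image experiments), the sketch below mixes two independent Laplacian sources with a known matrix and recovers the basis functions with a natural-gradient infomax ICA update; the Laplacian source model, the mixing matrix, and the learning-rate settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.laplace(size=(2, 5000))                  # independent sources s
A_true = np.array([[1.0, 0.6], [0.4, 1.0]])      # mixing matrix (basis functions)
X = A_true @ S                                   # observations x = A s

W = np.eye(2)                                    # unmixing estimate, s_hat = W x
for _ in range(1000):
    s_hat = W @ X
    phi = np.sign(s_hat)                         # phi(s) = -d log p(s)/ds for a
                                                 # Laplacian prior
    dW = (np.eye(2) - phi @ s_hat.T / X.shape[1]) @ W   # natural-gradient infomax
    W += 0.05 * dW                               # update vanishes at convergence

A_est = np.linalg.inv(W)                         # learned basis functions, equal to
                                                 # A_true up to permutation/scaling
```

At the fixed point the average of phi(s) s^T equals the identity, so the update goes to zero and the recovered sources match the true ones up to order and scale.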
Since A and s are unknown, the goal of ICA is to adapt the basis functions by estimating s so that the individual components s_i are statistically independent; this adaptation process minimizes the mutual information between the components s_i. A learning algorithm can be derived using the information maximization principle [5] or the maximum likelihood estimation (MLE) method, which can be shown to be equivalent in this case. In our experiments, we used the infomax learning rule with the natural gradient extension, for which the learning algorithm for the basis functions is

ΔA ∝ A(φ(s)sᵀ − I), (2)

where I is the identity matrix, φ(s) = −∂ log p(s)/∂s, and sᵀ denotes the transpose of s. ΔA is the change of the basis functions that is added to A; it converges to zero once the adaptation process is complete. Note that φ(s) requires a density model for p(s_i). We used a parametric exponential power density p(s_i) ∝ exp(−|s_i|^{q_i}) and simultaneously updated its shape by inferring the value q_i that matches the distribution of the estimated sources [6]. This is accomplished by finding the maximum a posteriori value of q_i given the observed data. The ICA algorithm can thus characterize a wide class of statistical distributions including uniform, Gaussian, Laplacian, and other so-called sub- and super-Gaussian densities. In other words, our experiments do not constrain the coefficients to have a sparse distribution, unlike some previous methods [1, 2]. The algorithm converged to a solution of maximal independence and the distributions of the coefficients were approximated by exponential power densities.

Figure 1: Linear decomposition of an observed spectral image patch into its basis functions.

We investigated samples of spectral images of natural scenes as illustrated in Figure 1. We analyzed a set of hyperspectral images [7] with a size of 256 x 256 pixels.
Each pixel is represented by radiance values for 31 wavebands of 10 nm width, sampled in 10 nm steps between 400 and 700 nm. The pixel size corresponds to 0.056 x 0.056 deg of visual angle. The images were recorded around Bristol, either outdoors or inside the glass houses of Bristol Botanical Gardens. We chose eight of these images which had been obtained outdoors under apparently different illumination conditions. The vector of 31 spectral radiance values of each pixel was converted to a vector of 3 cone excitation values whose components were the inner products of the radiance vector with the vectors of L-, M-, and S-cone sensitivity values [8], respectively. From the entire image data set, 7x7 pixel image patches were chosen randomly, yielding 7x7x3 = 147-dimensional vectors. The learning process was done in 500 steps, each using a set of spectra of 40000 image patches, 5000 chosen randomly from each of the eight images. A set of basis functions for 7x7 pixel patches was obtained, with each pixel containing the logarithms of the excitations of the three human cone photoreceptors that represented the receptor signals in the human retina [8, 9]. To visualize the learned basis functions, we used the method of Ruderman et al. [9] and plotted for each basis function a 7x7 pixel matrix, with the color of each pixel indicating the combination of L, M, and S cone responses as follows. The values for each patch were normalized to values between 0 and 255, with a cone excitation of 0 corresponding to a value of 128. Thus, the R, G, and B components of each pixel represent the relative excitations of L, M, and S cones, respectively. To further illustrate the chromatic properties of the basis functions, we convert the L, M, S vector of each pixel to its projection onto the isoluminant plane of a cone-opponent color space similar to the color spaces of MacLeod and Boynton [10] and Derrington et al. [11].
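The radiance-to-cone conversion described above amounts to three inner products per pixel. The sketch below uses the rectangular non-overlapping sensitivities from the paper's later control analysis and random radiance data as stand-ins for the measured cone sensitivities and the Bristol hyperspectral images.

```python
import numpy as np

wavelengths = np.arange(400, 710, 10)            # 31 bands, 400-700 nm, 10 nm steps
# rectangular stand-ins for the L-, M-, and S-cone sensitivities
L = ((wavelengths >= 560) & (wavelengths <= 620)).astype(float)
M = ((wavelengths >= 490) & (wavelengths <= 550)).astype(float)
S = ((wavelengths >= 420) & (wavelengths <= 480)).astype(float)
sens = np.stack([L, M, S])                       # 3 x 31 sensitivity matrix

rng = np.random.default_rng(0)
radiance = rng.random((31, 256 * 256))           # fake hyperspectral image: one
                                                 # 31-band spectrum per pixel
cones = sens @ radiance                          # 3 x n_pixels cone excitations
```

Random 7x7 patches of the resulting 3-channel image would then be stretched into 7x7x3 = 147-dimensional vectors for the ICA analysis.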
In our plots, the horizontal axis corresponds to the response of an L cone versus M cone opponent mechanism, and the vertical axis corresponds to S cone modulation. For each pixel of the basis functions, a point is plotted at its corresponding location in that color space. The color of the points is the same as used for the pixels in the top part of the figure. Thus, although only the projection onto the isoluminant plane is shown, the third dimension (i.e., luminance) can be inferred from the brightness of the points. Figure 2a shows the learned ICA basis functions in a pseudo-color representation. Figure 2b shows the color space coordinates of the chromaticities of the pixels in each basis function. The PCA basis functions and their corresponding color space coordinates are shown in Figures 2c and 2d, respectively. Both representations are in order of decreasing L2-norm. The PCA results show a global spatial representation, and their opponent basis functions lie mostly along the coordinate axes of the cone-opponent color space. In addition, there are functions that imply mixtures of non-opponent colors. In contrast to the PCA basis functions, the ICA basis functions are localized and oriented. When ordered by decreasing L2-norm, achromatic basis functions tend to appear before chromatic basis functions. This reflects the fact that in the natural environment, luminance variations are generally larger than chromatic variations [7]. The achromatic basis functions are localized and oriented, similar to those found in the analysis of grayscale natural images [1, 2]. Most of the chromatic basis functions, particularly those with strong contributions, are color opponent, i.e., the chromaticities of their pixels lie roughly along a line through the origin of our color space. Most chromatic basis functions with relatively high contributions are modulated between light blue and dark yellow, in the plane defined by luminance and S-cone modulation.
Those with lower L2-norm are highly localized, but still are mostly oriented. There are other chromatic basis functions with tilted orientations, corresponding to blue versus orange colors. The chromaticities of these basis functions occupy mainly the second and fourth quadrants. The basis functions with the lowest contributions are less strictly aligned in color space, but still tend to be color opponent, mostly along a bluish-green/orange direction. There are no basis functions with chromaticities along the horizontal axis, corresponding to pure L versus M cone opponency, unlike the PCA basis functions in Figure 2d [9]. The tilted orientations of the opponency axes most likely reflect the distribution of the chromaticities in our images. In natural images, L-M and S coordinates in our color space are negatively correlated [12]. ICA finds the directions that correspond to maximally decorrelated signals, i.e., it extracts the statistical structure of the inputs. PCA did not yield basis functions in these directions, probably because it is limited by the orthogonality constraint. While it is known that chromatic properties of neurons in the lateral geniculate nucleus (LGN) of primates correspond to variations along the axes of cone opponency ('cardinal axes') [11], cortical neurons show sensitivities for intermediate directions [13]. Since the results of PCA and ICA, respectively, match these differences qualitatively, we suspect that opponent coding along the 'cardinal directions' of cone opponency is used by the visual system to reliably transmit visual information to the cortex, where the information is recoded in order to better reflect the statistical structure of the environment [14].

3 Discussion

This result shows that the independence criterion alone is sufficient to learn efficient image codes. Although no sparseness constraint was used, the obtained coefficients are extremely sparse, i.e.,
the data x are encoded in the sources s in such a way that the coefficients of s are mostly around zero; there is only a small percentage of informative values (non-zero coefficients). From an information coding perspective this means that we can encode and decode the chromatic image patches with only a small percentage of the basis functions. In contrast, Gaussian densities are not sparsely distributed, and a large portion of the basis functions is required to represent the chromatic images. The normalized kurtosis value is one measure of sparseness; the average kurtosis value was 19.7 for ICA and 6.6 for PCA. Interestingly, the basis functions in Figure 2a produced only sparse coefficients, except for basis function 7 (a green basis function), which resulted in a nearly uniform distribution, suggesting that this basis function is active almost all the time. The reason may be that a green color component is present in almost all image patches of the natural scenes. We repeated the experiment with different ICA methods and obtained similar results. The basis functions obtained with the exponential power distributions or the simple Laplacian prior were statistically most efficient. In this sense, the basis functions that produce sparse distributions are statistically efficient codes. To quantitatively measure the encoding difference, we compared the coding efficiency between ICA and PCA using Shannon's theorem to obtain a lower bound on the number of bits required to encode a spatiochromatic pattern [4]. The average number of bits required to encode 40000 patches randomly selected from the 8 images in Figure 1 with a fixed noise coding precision of σ_x = 0.059 was 1.73 bits for ICA and 4.46 bits for PCA. Note that the encoding difference for achromatic image patches using ICA and PCA is about 20% in favor of ICA [4].
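The normalized-kurtosis sparseness measure used above can be estimated as the fourth moment divided by the squared second moment; subtracting 3 makes a Gaussian score zero. A minimal sketch (the comparison distributions are illustrative):

```python
import numpy as np

def normalized_kurtosis(x):
    """Excess kurtosis: near 0 for a Gaussian, large and positive for sparse
    (heavy-tailed, mostly-near-zero) coefficient distributions."""
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2 - 3.0

rng = np.random.default_rng(1)
k_gauss = normalized_kurtosis(rng.normal(size=200_000))    # near 0: not sparse
k_sparse = normalized_kurtosis(rng.laplace(size=200_000))  # near 3: sparse
```

A Laplacian, the prototypical sparse density, has excess kurtosis 3, while the measured ICA coefficients above score far higher (19.7 on average), indicating distributions even more sharply peaked at zero.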
The encoding difference in the chromatic case is significantly higher (> 100%) and suggests that there is a large amount of chromatic redundancy in the natural scenes. To verify our findings, we computed the average pairwise mutual information Ī in the original data (Ī_x = 0.1522), the PCA representation (Ī_PCA = 0.0123) and the ICA representation (Ī_ICA = 0.0093). ICA was able to further reduce the redundancy between its components, and its basis functions therefore represent more efficient codes. In general, the ICA results support the argument that basis functions for efficient coding of chromatic natural images are non-orthogonal. In order to determine whether the color opponency is merely a result of correlation in the receptor signals due to the strong overlap of the photoreceptor sensitivities [15], we repeated the analysis, this time assuming hypothetical receptor sensitivities which do not overlap, but sample roughly in the same regions as the L-, M-, and S-cones. We used rectangular sensitivities with absorptions between 420 and 480 nm ("S"), 490 and 550 nm ("M"), and 560 and 620 nm ("L"), respectively. The resulting basis functions were as strongly color opponent as for the case of overlapping cone sensitivities. This suggests that the correlations of radiance values in natural spectra are
sufficiently high to require a color opponent code in order to represent the chromatic structure efficiently.

Figure 2: (a) All 147 ICA spatiochromatic basis functions (7 by 7 pixels and 3 colors), shown in order of decreasing L2-norm, from top to bottom and left to right. The R, G, and B values of the color of each pixel correspond to the relative excitation of L-, M-, and S-cones, respectively. (b) Chromaticities of the ICA basis functions, plotted in cone-opponent color space coordinates. Each dot represents the coordinate of a pixel of the respective basis function, projected onto the isoluminant plane. Luminance can be inferred from the brightness of the dot. Horizontal axes: L- versus M-cone variation. Vertical axes: S-cone variation. (c) 147 PCA spatiochromatic basis functions and (d) corresponding PCA chromaticities.

In summary, our findings strongly suggest that color opponency is not a mere consequence of the overlapping cone spectral sensitivities but rather an attempt to represent the intrinsic spatiochromatic structure of natural scenes in a statistically efficient manner.

References

[1] B. Olshausen and D. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996.
[2] A. J. Bell and T. J. Sejnowski. The 'independent components' of natural scenes are edge filters. Vision Research, 37(23):3327-3338, 1997.
[3] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. R. Soc. Lond. B, 265:359-366, 1998.
[4] M. S. Lewicki and B. Olshausen.
A probabilistic framework for the adaptation and comparison of image codes. J. Opt. Soc. Am. A: Optics, Image Science and Vision, in press, 1999.
[5] A. J. Bell and T. J. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7:1129-1159, 1995.
[6] M. S. Lewicki. A flexible prior for independent component analysis. Neural Computation, submitted, 2000.
[7] C. A. Párraga, G. Brelstaff, and T. Troscianko. Color and luminance information in natural scenes. Journal of the Optical Society of America A, 15:563-569, 1998. (http://www.crs4.it/~gjb/ftpJOSA.html).
[8] A. Stockman, D. I. A. MacLeod, and N. E. Johnson. Spectral sensitivities of the human cones. Journal of the Optical Society of America A, 10:2491-2521, 1993. (http://www-cvrl.ucsd.edu).
[9] D. L. Ruderman, T. W. Cronin, and C.-C. Chiao. Statistics of cone responses to natural images: Implications for visual coding. Journal of the Optical Society of America A, 15:2036-2045, 1998.
[10] D. I. A. MacLeod and R. M. Boynton. Chromaticity diagram showing cone excitation by stimuli of equal luminance. Journal of the Optical Society of America, 69:1183-1186, 1979.
[11] A. M. Derrington, J. Krauskopf, and P. Lennie. Chromatic mechanisms in lateral geniculate nucleus of macaque. Journal of Physiology, 357:241-265, 1984.
[12] D. I. A. MacLeod and T. von der Twer. The pleistochrome: Optimal opponent codes for natural colors. Preprint, 1998.
[13] P. Lennie, J. Krauskopf, and G. Sclar. Chromatic mechanisms in striate cortex of macaque. Journal of Neuroscience, 10:649-669, 1990.
[14] D. J. Field. What is the goal of sensory coding? Neural Computation, 6:559-601, 1994.
[15] G. Buchsbaum and A. Gottschalk. Trichromacy, opponent colours coding and optimum colour information transmission in the retina. Proceedings of the Royal Society London B, 220:89-113, 1983.
Minimum Bayes Error Feature Selection for Continuous Speech Recognition

George Saon and Mukund Padmanabhan
IBM T. J. Watson Research Center, Yorktown Heights, NY, 10598
E-mail: {saon,mukund}@watson.ibm.com. Phone: (914)-945-2985

Abstract We consider the problem of designing a linear transformation θ ∈ R^{p×n}, of rank p ≤ n, which projects the features of a classifier x ∈ R^n onto y = θx ∈ R^p such as to achieve minimum Bayes error (or probability of misclassification). Two avenues will be explored: the first is to maximize the θ-average divergence between the class densities and the second is to minimize the union Bhattacharyya bound in the range of θ. While both approaches yield similar performance in practice, they outperform standard LDA features and show a 10% relative improvement in the word error rate over state-of-the-art cepstral features on a large vocabulary telephony speech recognition task.

1 Introduction

Modern speech recognition systems use cepstral features characterizing the short-term spectrum of the speech signal for classifying frames into phonetic classes. These features are augmented with dynamic information from the adjacent frames to capture transient spectral events in the signal. What is commonly referred to as MFCC+Δ+ΔΔ features consists of "static" mel-frequency cepstral coefficients (usually 13) plus their first- and second-order derivatives computed over a sliding window of typically 9 consecutive frames, yielding 39-dimensional feature vectors every 10 ms. One major drawback of this front-end scheme is that the same computation is performed regardless of the application, channel conditions, speaker variability, etc. In recent years, an alternative feature extraction procedure based on discriminant techniques has emerged: the consecutive cepstral frames are spliced together forming a supervector which is then projected down to a manageable dimension.
One of the most popular objective functions for designing the feature space projection is linear discriminant analysis. LDA [2, 3] is a standard technique in statistical pattern classification for dimensionality reduction with a minimal loss in discrimination. Its application to speech recognition has shown consistent gains for small vocabulary tasks and mixed results for large vocabulary applications [4, 6]. Recently, there has been an interest in extending LDA to heteroscedastic discriminant analysis (HDA) by incorporating the individual class covariances in the objective function [6, 8]. Indeed, the equal class covariance assumption made by LDA does not always hold in practice, making the LDA solution highly suboptimal in specific cases [8]. However, since both LDA and HDA are heuristics, they do not guarantee an optimal projection in the sense of a minimum Bayes classification error. The aim of this paper is to study feature space projections according to objective functions which are more intimately linked to the probability of misclassification. More specifically, we will define the probability of misclassification in the original space, E, and in the projected space, E_θ, and give conditions under which E_θ = E. Since discrimination information is usually lost after a projection y = θx, the Bayes error in the projected space can only increase, that is, E_θ ≥ E; therefore minimizing E_θ amounts to finding θ for which the equality holds. An alternative approach is to define an upper bound on E_θ and to directly minimize this bound. The paper is organized as follows: in section 2 we recall the definition of the Bayes error rate and its link to the divergence and the Bhattacharyya bound, section 3 deals with the experiments and results, and section 4 provides a final discussion.

2 Bayes error, divergence and Bhattacharyya bound

2.1 Bayes error

Consider the general problem of classifying an n-dimensional vector x into one of C distinct classes.
Let each class i be characterized by its own prior λ_i and probability density function p_i, i = 1, ..., C. Suppose x is classified as belonging to class j through the Bayes assignment j = argmax_{1≤i≤C} λ_i p_i(x). The expected error rate of this classifier is called the Bayes error [3], or probability of misclassification, and is defined as

E = 1 − ∫_{R^n} max_{1≤i≤C} λ_i p_i(x) dx. (1)

Suppose next that we wish to perform the linear transformation f : R^n → R^p, y = f(x) = θx, with θ a p × n matrix of rank p ≤ n. Moreover, let us denote by p_i^θ the transformed density for class i. The Bayes error in the range of θ now becomes

E_θ = 1 − ∫_{R^p} max_{1≤i≤C} λ_i p_i^θ(y) dy. (2)

Since the transformation y = θx produces a vector whose coefficients are linear combinations of the input vector x, it can be shown [1] that, in general, information is lost and E_θ ≥ E. For a fixed p, the feature selection problem can be stated as finding θ̂ such that

θ̂ = argmin_{θ ∈ R^{p×n}, rank(θ)=p} E_θ. (3)

We will take however an indirect approach to (3): by maximizing the average pairwise divergence and relating it to E_θ (subsection 2.2) and by minimizing the union Bhattacharyya bound on E_θ (subsection 2.3).

2.2 Interclass divergence

Following Kullback [5], the symmetric divergence between classes i and j is given by

D(i, j) = ∫_{R^n} [ p_i(x) log(p_i(x)/p_j(x)) + p_j(x) log(p_j(x)/p_i(x)) ] dx. (4)

D(i, j) represents a measure of the degree of difficulty of discriminating between the classes (the larger the divergence, the greater the separability between the classes). Similarly, one can define D_θ(i, j), the pairwise divergence in the range of θ. Kullback [5] showed that D_θ(i, j) ≤ D(i, j). If the equality holds, θ is called a sufficient statistic for discrimination. The average pairwise divergence is defined as D = 2/(C(C−1)) ∑_{1≤i<j≤C} D(i, j) and, respectively, D_θ = 2/(C(C−1)) ∑_{1≤i<j≤C} D_θ(i, j). It follows that D_θ ≤ D. The next theorem, due to Decell [1], provides a link between Bayes error and divergence for classes with uniform priors λ_1 = ... = λ_C = 1/C.
Theorem [Decell '72] If D_θ = D then E_θ = E.

The main idea of the proof is to show that if the divergences are the same then the Bayes assignment is preserved, because the likelihood ratios are preserved almost everywhere: p_i(x)/p_j(x) = p_i^θ(θx)/p_j^θ(θx), i ≠ j. The result follows by noting that for any measurable set A ⊂ R^p,

∫_A p_i^θ(y) dy = ∫_{θ^{-1}(A)} p_i(x) dx, (5)

where θ^{-1}(A) = {x ∈ R^n | θx ∈ A}. The previous theorem provides a basis for selecting θ such as to maximize D_θ. Let us assume next that each class i is normally distributed with mean μ_i and covariance Σ_i, that is, p_i(x) = N(x; μ_i, Σ_i) and p_i^θ(y) = N(y; θμ_i, θΣ_iθᵀ), i = 1, ..., C. It is straightforward to show that in this case the divergence is given by

D(i, j) = ½ trace{Σ_i^{-1}[Σ_j + (μ_i − μ_j)(μ_i − μ_j)ᵀ] + Σ_j^{-1}[Σ_i + (μ_i − μ_j)(μ_i − μ_j)ᵀ]} − n. (6)

Thus, the objective function to be maximized becomes

D_θ = 1/(C(C−1)) ∑_{i=1}^C trace{(θΣ_iθᵀ)^{-1} θS_iθᵀ} − p, where S_i = ∑_{j≠i} [Σ_j + (μ_i − μ_j)(μ_i − μ_j)ᵀ], i = 1, ..., C. (7)

Following matrix differentiation results from [9], the gradient of D_θ with respect to θ has the expression

∂D_θ/∂θ = 2/(C(C−1)) ∑_{i=1}^C (θΣ_iθᵀ)^{-1}[θS_i − θS_iθᵀ(θΣ_iθᵀ)^{-1}θΣ_i]. (8)

Unfortunately, it turns out that ∂D_θ/∂θ = 0 has no analytical solutions for the stationary points. Instead, one has to use numerical optimization routines for the maximization of D_θ.

2.3 Bhattacharyya bound

An alternative way of minimizing the Bayes error is to minimize an upper bound on this quantity. We will first prove the following statement:

E ≤ ∑_{1≤i<j≤C} ∫_{R^n} √(λ_i p_i(x) λ_j p_j(x)) dx. (9)

Indeed, from (1), the Bayes error can be rewritten as

E = ∫_{R^n} min_{1≤i≤C} ∑_{j≠i} λ_j p_j(x) dx, (10)

and for every x there exists a permutation of the indices u_x : {1, ..., C} → {1, ..., C} such that the terms λ_1 p_1(x), ..., λ_C p_C(x) are sorted in increasing order, i.e., λ_{u_x(1)} p_{u_x(1)}(x) ≤ ... ≤ λ_{u_x(C)} p_{u_x(C)}(x).
Moreover, for 1 ≤ k ≤ C − 1,

λ_{u_x(k)} p_{u_x(k)}(x) ≤ √( λ_{u_x(k)} p_{u_x(k)}(x) λ_{u_x(k+1)} p_{u_x(k+1)}(x) ), (11)

from which it follows that

min_{1≤i≤C} ∑_{j≠i} λ_j p_j(x) = ∑_{k=1}^{C−1} λ_{u_x(k)} p_{u_x(k)}(x) ≤ ∑_{k=1}^{C−1} √( λ_{u_x(k)} p_{u_x(k)}(x) λ_{u_x(k+1)} p_{u_x(k+1)}(x) ) ≤ ∑_{1≤i<j≤C} √( λ_i p_i(x) λ_j p_j(x) ), (12)

which, when integrated over R^n, leads to (9). As previously, if we assume that the p_i's are normal distributions with means μ_i and covariances Σ_i, the bound given by the right-hand side of (9) has the closed form expression

∑_{1≤i<j≤C} √(λ_i λ_j) e^{−ρ(i,j)}, (13)

where

ρ(i, j) = (1/8) (μ_i − μ_j)ᵀ [(Σ_i + Σ_j)/2]^{-1} (μ_i − μ_j) + ½ log( |(Σ_i + Σ_j)/2| / √(|Σ_i||Σ_j|) ) (14)

is called the Bhattacharyya distance between the normal distributions p_i and p_j [3]. Similarly, one can define ρ_θ(i, j), the Bhattacharyya distance between the projected densities p_i^θ and p_j^θ. Combining (9) and (13), one obtains the following inequality involving the Bayes error rate in the projected space:

E_θ ≤ ∑_{1≤i<j≤C} √(λ_i λ_j) e^{−ρ_θ(i,j)} (= B_θ). (15)

It is necessary at this point to introduce the following simplifying notations:

• B_ij = ¼ (μ_i − μ_j)(μ_i − μ_j)ᵀ and
• W_ij = ½ (Σ_i + Σ_j), 1 ≤ i < j ≤ C.

From (14), it follows that

ρ_θ(i, j) = ½ trace{(θW_ijθᵀ)^{-1} θB_ijθᵀ} + ½ log( |θW_ijθᵀ| / √(|θΣ_iθᵀ||θΣ_jθᵀ|) ), (16)

and the gradient of B_θ with respect to θ is

∂B_θ/∂θ = − ∑_{1≤i<j≤C} √(λ_i λ_j) e^{−ρ_θ(i,j)} ∂ρ_θ(i,j)/∂θ, (17)

with, again by making use of differentiation results from [9],

∂ρ_θ(i,j)/∂θ = (θW_ijθᵀ)^{-1}[θB_ij − θB_ijθᵀ(θW_ijθᵀ)^{-1}θW_ij] + (θW_ijθᵀ)^{-1}θW_ij − ½[(θΣ_iθᵀ)^{-1}θΣ_i + (θΣ_jθᵀ)^{-1}θΣ_j]. (18)

3 Experiments and results

The speech recognition experiments were conducted on a voicemail transcription task [7]. The baseline system has 2.3K context-dependent HMM states and 134K diagonal gaussian mixture components and was trained on approximately 70 hours of data. The test set consists of 86 messages (approximately 7000 words). The baseline system uses 39-dimensional frames (13 cepstral coefficients plus deltas and double deltas computed from 9 consecutive frames).
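The quantities in (14)-(16) are easy to evaluate directly for simple Gaussian classes. The sketch below computes the Bhattacharyya distance and the union bound B_θ under rank-1 projections of two illustrative 2-D classes; the classes and projections are assumptions for demonstration, not the paper's voicemail models.

```python
import numpy as np

def bhattacharyya(mu_i, cov_i, mu_j, cov_j):
    """rho(i, j) of equation (14) for two Gaussian densities."""
    W = 0.5 * (cov_i + cov_j)
    d = mu_i - mu_j
    term1 = 0.125 * d @ np.linalg.solve(W, d)
    term2 = 0.5 * np.log(np.linalg.det(W) /
                         np.sqrt(np.linalg.det(cov_i) * np.linalg.det(cov_j)))
    return term1 + term2

def union_bound(theta, means, covs, priors):
    """B_theta of equation (15): upper bound on the projected Bayes error."""
    mus = [theta @ m for m in means]
    sigmas = [theta @ S @ theta.T for S in covs]
    C = len(means)
    return sum(np.sqrt(priors[i] * priors[j]) *
               np.exp(-bhattacharyya(mus[i], sigmas[i], mus[j], sigmas[j]))
               for i in range(C) for j in range(i + 1, C))

# two identity-covariance classes separated along the first coordinate axis
means = [np.zeros(2), np.array([3.0, 0.0])]
covs = [np.eye(2), np.eye(2)]
priors = [0.5, 0.5]
b_good = union_bound(np.array([[1.0, 0.0]]), means, covs, priors)  # keeps separation
b_bad = union_bound(np.array([[0.0, 1.0]]), means, covs, priors)   # discards it
```

Projecting onto the discriminative direction leaves the bound at its full-space value (here exp(-9/8)/2 ≈ 0.162), while the orthogonal projection degrades it to the chance value 0.5; minimizing B_θ over θ therefore favors the former.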
For the divergence and Bhattacharyya projections, every 9 consecutive 24-dimensional cepstral vectors were spliced together forming 216-dimensional feature vectors which were then clustered to estimate one full covariance gaussian density per state. Subsequently, a 39x216 transformation θ was computed using the objective functions for the divergence (7) and the Bhattacharyya bound (15), which projected the models and feature space down to 39 dimensions. As mentioned in [4], it is not clear what the most appropriate class definition for the projections should be. The best results were obtained by considering each individual HMM state as a separate class, with the priors of the gaussians summing up to one across states. Both optimizations were initialized with the LDA matrix and carried out using a conjugate gradient descent routine with user-supplied analytic gradient from the NAG Fortran library.¹ The routine performs an iterative update of the inverse of the hessian of the objective function by accumulating curvature information during the optimization. Figure 1 shows the evolution of the objective functions for the divergence and the Bhattacharyya bound.

Figure 1: Evolution of the objective functions (left: divergence; right: Bhattacharyya bound).

The parameters of the baseline system (with 134K gaussians) were then re-estimated in the transformed spaces using the EM algorithm. Table 1 summarizes the improvements in the word error rates for the different systems.

¹ Numerical Algorithms Group

System                  | Word error rate
Baseline (MFCC+Δ+ΔΔ)    | 39.61%
LDA                     | 37.39%
Interclass divergence   | 36.32%
Bhattacharyya bound     | 35.73%

Table 1: Word error rates for the different systems.

4 Summary

Two methods for performing discriminant feature space projections have been presented.
Unlike LDA, they both aim to minimize the probability of misclassification in the projected space, either by maximizing the interclass divergence and relating it to the Bayes error or by directly minimizing an upper bound on the classification error. Both methods lead to smooth objective functions which have projection matrices as their argument and which can be numerically optimized. Experimental results on large vocabulary continuous speech recognition over the telephone show the superiority of the resulting features over their LDA or cepstral counterparts.

References

[1] H. P. Decell and J. A. Quirein. An iterative approach to the feature selection problem. Proc. Purdue Univ. Conf. on Machine Processing of Remotely Sensed Data, 3B1-3B12, 1972.
[2] R. O. Duda and P. E. Hart. Pattern classification and scene analysis. Wiley, New York, 1973.
[3] K. Fukunaga. Introduction to statistical pattern recognition. Academic Press, New York, 1990.
[4] R. Haeb-Umbach and H. Ney. Linear discriminant analysis for improved large vocabulary continuous speech recognition. Proceedings of ICASSP'92, volume 1, pages 13-16, 1992.
[5] S. Kullback. Information theory and statistics. Wiley, New York, 1968.
[6] N. Kumar and A. G. Andreou. Heteroscedastic discriminant analysis and reduced rank HMMs for improved speech recognition. Speech Communication, 26:283-297, 1998.
[7] M. Padmanabhan, G. Saon, S. Basu, J. Huang and G. Zweig. Recent improvements in voicemail transcription. Proceedings of EUROSPEECH'99, Budapest, Hungary, 1999.
[8] G. Saon, M. Padmanabhan, R. Gopinath and S. Chen. Maximum likelihood discriminant feature spaces. Proceedings of ICASSP'2000, Istanbul, Turkey, 2000.
[9] S. R. Searle. Matrix algebra useful for statistics. Wiley Series in Probability and Mathematical Statistics, New York, 1982.
Bayesian video shot segmentation

Nuno Vasconcelos    Andrew Lippman
MIT Media Laboratory, 20 Ames St, E15-354, Cambridge, MA 02139
{nuno,lip}@media.mit.edu, http://www.media.mit.edu/~nuno

Abstract

Prior knowledge about video structure can be used both as a means to improve the performance of content analysis and to extract features that allow semantic classification. We introduce statistical models for two important components of this structure, shot duration and activity, and demonstrate the usefulness of these models by introducing a Bayesian formulation for the shot segmentation problem. The new formulation is shown to extend standard thresholding methods in an adaptive and intuitive way, leading to improved segmentation accuracy.

1 Introduction

Given the recent advances in video coding and streaming technology and the pervasiveness of video as a form of communication, there is currently a strong interest in the development of techniques for browsing, categorizing, retrieving, and automatically summarizing video. In this context, two tasks are of particular relevance: the decomposition of a video stream into its component units, and the extraction of features for the automatic characterization of these units. Unfortunately, current video characterization techniques rely on image representations based on low-level visual primitives (such as color, texture, and motion) that, while practical and computationally efficient, fail to capture most of the structure that is relevant for the perceptual decoding of the video. As a result, it is difficult to design systems that are truly useful for naive users. Significant progress can only be attained by a deeper understanding of the relationship between the message conveyed by the video and the patterns of visual structure that it exhibits. There are various domains where these relationships have been thoroughly studied, albeit not always from a computational standpoint.
For example, it is well known by film theorists that the message strongly constrains the stylistic elements of the video [1, 6], which are usually grouped into two major categories: the elements of montage and the elements of mise-en-scene. Montage refers to the temporal structure, namely the aspects of film editing, while mise-en-scene deals with spatial structure, i.e. the composition of each image, and includes variables such as the type of set in which the scene develops, the placement of the actors, aspects of lighting, focus, camera angles, and so on. Building computational models for these stylistic elements can prove useful in two ways: on one hand, it will allow the extraction of semantic features enabling video characterization and classification much closer to that which people use than current descriptors based on texture properties or optical flow. On the other hand, it will provide constraints for the low-level analysis algorithms required to perform tasks such as video segmentation, keyframing, and so on. The first point is illustrated by Figure 1, where we show how a collection of promotional trailers for commercially released feature films populates a 2-D feature space based on the most elementary characterization of montage and mise-en-scene: average shot duration vs. average shot activity(1). Despite the coarseness of this characterization, it captures aspects that are important for semantic movie classification: close inspection of the genre assigned to each movie by the Motion Picture Association of America reveals that in this space the movies cluster by genre!
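For concreteness, the two features can be computed per trailer roughly as follows (a minimal sketch, not the authors' code; the function name, the 24 fps frame rate, and the toy inputs are illustrative assumptions):

```python
import numpy as np

def shot_features(boundaries, frame_dists, fps=24.0):
    """Average shot duration (seconds) and average activity for one trailer.

    boundaries  -- sorted frame indices of the detected shot boundaries
    frame_dists -- histogram distance between each pair of successive frames
    """
    durations = np.diff(boundaries) / fps   # shot lengths in seconds
    avg_duration = durations.mean()
    avg_activity = np.mean(frame_dists)     # mean histogram distance
    return avg_duration, avg_activity

# toy example: 3 shots and a short activity trace
dur, act = shot_features([0, 48, 120, 300], np.array([0.10, 0.05, 0.80, 0.07]))
```

Each trailer then contributes one (duration, activity) point to the 2-D feature space of Figure 1.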
[Figure 1: Shot activity vs. duration features. The genre of each movie is identified by the symbol used to represent the movie in the plot. The trailers shown are: "Circle of Friends", "French Kiss", "Miami Rhapsody", "The Santa Clause", "Exit to Eden", "A Walk in the Clouds", "While You Were Sleeping", "Bad Boys", "Junior", "Crimson Tide", "The Scout", "The Walking Dead", "Ed Wood", "The Jungle Book", "Puppet Master", "A Little Princess", "Judge Dredd", "The River Wild", "Terminal Velocity", "Blankman", "In the Mouth of Madness", "Street Fighter", and "Die Hard: With a Vengeance".]

In this paper, we concentrate on the second point, i.e. how the structure exhibited by Figure 1 can be exploited to improve the performance of low-level processing tasks such as shot segmentation. Because knowledge about the video structure is a form of prior knowledge, Bayesian procedures provide a natural way to accomplish this goal. We therefore introduce computational models for shot duration and activity and develop a Bayesian framework for segmentation that is shown to significantly outperform current approaches.

2 Modeling shot duration

Because shot boundaries can be seen as arrivals over discrete, non-overlapping temporal intervals, a Poisson process seems an appropriate model for shot duration [3]. However, events generated by Poisson processes have inter-arrival times characterized by the exponential density, which is a monotonically decreasing function of time.
This is clearly not the case for shot duration, as can be seen from the histograms of Figure 2. In this work, we consider two alternative models, the Erlang and Weibull distributions.

2.1 The Erlang model

Letting $\tau$ be the time since the previous boundary, the Erlang distribution [3] is described by

    $\varepsilon_{r,\lambda}(\tau) = \frac{\lambda^r \tau^{r-1} e^{-\lambda\tau}}{(r-1)!}.$    (1)

(1) The activity features are described in section 3.

[Figure 2: Shot duration histogram, and maximum likelihood fit obtained with the Erlang (left) and Weibull (right) distributions.]

It is a generalization of the exponential density, characterized by two parameters: the order $r$, and the expected inter-arrival time ($1/\lambda$) of the underlying Poisson process. When $r = 1$, the Erlang distribution becomes the exponential distribution. For larger values of $r$, it characterizes the time between the $r$th order inter-arrival times of the Poisson process. This leads to an intuitive explanation for the use of the Erlang distribution as a model of shot duration: for a given order $r$, the shot is modeled as a sequence of $r$ events which are themselves the outcomes of Poisson processes. Such events may reflect properties of the shot content, such as "setting the context" through a wide angle view followed by "zooming in on the details" when $r = 2$, or "emotional buildup" followed by "action" and "action outcome" when $r = 3$. Figure 2 presents a shot duration histogram, obtained from the training set to be described in section 5, and its maximum likelihood (ML) Erlang fit.

2.2 The Weibull model

While the Erlang model provides a good fit to the empirical density, it is of limited practical utility due to the constant arrival rate assumption [5] inherent to the underlying Poisson process. Because $\lambda$ is a constant, the expected rate of occurrence of a new shot boundary is the same whether 10 seconds or 1 hour have elapsed since the occurrence of the previous one.
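An ML Erlang fit like the one in Figure 2 can be sketched with SciPy, whose gamma distribution coincides with the Erlang for integer order (for a fixed order r, the ML rate is lambda = r / mean(tau)); this is an illustration under those assumptions, not the paper's code:

```python
import numpy as np
from scipy.stats import gamma  # Erlang = gamma with integer shape r

def fit_erlang(durations, max_order=6):
    """ML Erlang fit: for each candidate integer order r, the ML rate is
    lambda = r / mean(tau); keep the order with the highest likelihood."""
    durations = np.asarray(durations, dtype=float)
    best = None
    for r in range(1, max_order + 1):
        lam = r / durations.mean()
        ll = gamma.logpdf(durations, a=r, scale=1.0 / lam).sum()
        if best is None or ll > best[0]:
            best = (ll, r, lam)
    return best[1], best[2]  # (order r, rate lambda)
```

On synthetic Erlang data the procedure recovers both the order and the rate.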
An alternative model that does not suffer from this problem is the Weibull distribution [5], which generalizes the exponential distribution by considering an expected rate of arrival of new events that is a function of time $\tau$,

    $\lambda(\tau) = \frac{\alpha\tau^{\alpha-1}}{\beta^\alpha},$

and of the parameters $\alpha$ and $\beta$, leading to a probability density of the form

    $w_{\alpha,\beta}(\tau) = \frac{\alpha\tau^{\alpha-1}}{\beta^\alpha} \exp\left[-\left(\frac{\tau}{\beta}\right)^\alpha\right].$    (2)

Figure 2 presents the ML Weibull fit to the shot duration histogram. Once again we obtain a good approximation to the empirical density estimate.

3 Modeling shot activity

The color histogram distance has been widely used as a measure of (dis)similarity between images for the purposes of object recognition [7], content-based retrieval [4], and temporal video segmentation [2]. A histogram is first computed for each image in the sequence and the distance between successive histograms is used as a measure of local activity. A standard metric for video segmentation [2] is the $L_1$ norm of the histogram difference,

    $\mathcal{D}(a, b) = \sum_{i=1}^{B} |a_i - b_i|,$    (3)

where $a$ and $b$ are histograms of successive frames, and $B$ the number of histogram bins. Statistical modeling of the histogram distance features requires the identification of the various states through which the video may progress. For simplicity, in this work we restrict ourselves to a video model composed of two states: "regular frames" ($S = 0$) and "shot transitions" ($S = 1$). The fundamental principles are, however, applicable to more complex models. As illustrated by Figure 3, for "regular frames" the distribution is asymmetric about the mean, always positive and concentrated near zero. This suggests that a mixture of Erlang distributions is an appropriate model for this state, a suggestion that is confirmed by the fit to the empirical density obtained with EM, also depicted in the figure. On the other hand, for "shot transitions" the fit obtained with a simple Gaussian model is sufficient to achieve a reasonable approximation to the empirical density.
In both cases, a uniform mixture component is introduced to account for the tails of the distributions.

[Figure 3: Left: Conditional activity histogram for regular frames, and best fit by a mixture with three Erlang and a uniform component. Right: Conditional activity histogram for shot transitions, and best fit by a mixture with a Gaussian and a uniform component.]

4 A Bayesian framework for shot segmentation

Because shot segmentation is a pre-requisite for virtually any task involving the understanding, parsing, indexing, characterization, or categorization of video, the grouping of video frames into shots has been an active topic of research in the area of multimedia signal processing. Extensive evaluation of various approaches has shown that simple thresholding of histogram distances performs surprisingly well and is difficult to beat [2]. In this work, we consider an alternative formulation that regards the problem as one of statistical inference between two hypotheses:

- $\mathcal{H}_0$: no shot boundary occurs between the two frames under analysis ($S = 0$),
- $\mathcal{H}_1$: a shot boundary occurs between the two frames ($S = 1$),

for which the optimal decision is provided by a likelihood ratio test, where $\mathcal{H}_1$ is chosen if

    $\mathcal{L} = \log \frac{P(V \mid S = 1)}{P(V \mid S = 0)} > 0,$    (4)

and $\mathcal{H}_0$ is chosen otherwise. It is well known that standard thresholding is a particular case of this formulation, in which both conditional densities are assumed to be Gaussians with the same covariance. From the discussion in the previous section, it is clear that this does not hold for real video. One further limitation of the thresholding model is that it does not take into account the fact that the likelihood of a new shot transition depends on how much time has elapsed since the previous one. On the other hand, the statistical formulation can easily incorporate the shot duration models developed in section 2.
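The likelihood ratio test of (4), with activity models of the form used in section 3, can be sketched as follows; all mixture weights and distribution parameters below are made up for illustration, not the paper's fitted values:

```python
import numpy as np
from scipy.stats import gamma, norm, uniform

# Illustrative conditional densities for the activity feature D of eq. (3).
def p_regular(d):
    """'Regular frames': mixture of Erlangs plus a uniform tail component."""
    return (0.50 * gamma.pdf(d, a=2, scale=0.02)
            + 0.45 * gamma.pdf(d, a=3, scale=0.05)
            + 0.05 * uniform.pdf(d, 0, 2))

def p_transition(d):
    """'Shot transitions': Gaussian plus a uniform tail component."""
    return 0.95 * norm.pdf(d, loc=0.9, scale=0.25) + 0.05 * uniform.pdf(d, 0, 2)

def log_likelihood_ratio(d):
    return np.log(p_transition(d)) - np.log(p_regular(d))

# eq. (4): with a fixed threshold, declare a boundary when the ratio exceeds 0
is_boundary = log_likelihood_ratio(0.85) > 0
```

With the duration prior introduced below, the fixed zero threshold is replaced by an adaptive one.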
4.1 Notation

Because video is a discrete process, characterized by a given frame rate, shot boundaries are not instantaneous, but last for one frame period. To account for this, states are defined over time intervals, i.e. instead of $S_t = 0$ or $S_t = 1$, we have $S_{t,t+\delta} = 0$ or $S_{t,t+\delta} = 1$, where $t$ is the start of a time interval and $\delta$ its duration. We designate the features observed during the interval $[t, t+\delta]$ by $V_{t,t+\delta}$. To simplify the notation, we reserve $t$ for the temporal instant at which the last shot boundary occurred and make all temporal indexes relative to this instant. I.e., instead of $S_{t+\tau,t+\tau+\delta}$ we write $S_{\tau,\tau+\delta}$, or simply $S_\delta$ if $\tau = 0$. Furthermore, we reserve the symbol $\delta$ for the duration of the interval between successive frames (inverse of the frame rate), and use the same notation for a single frame interval and a vector of frame intervals (the temporal indexes being themselves enough to avoid ambiguity). I.e., while $S_{\tau,\tau+\delta} = 0$ indicates that no shot boundary is present in the interval $[t+\tau, t+\tau+\delta]$, $S_{\tau+\delta} = 0$ indicates that no shot boundary has occurred in any of the frames between $t$ and $t+\tau+\delta$. Similarly, $V_{\tau+\delta}$ represents the vector of observations in $[t, t+\tau+\delta]$.

4.2 Bayesian formulation

Given that there is a shot boundary at time $t$ and no boundaries occur in the interval $[t, t+\tau]$, the posterior probability that the next shot change happens during the interval $[t+\tau, t+\tau+\delta]$ is, using Bayes rule,

    $P(S_{\tau,\tau+\delta} = 1 \mid S_\tau = 0, V_{\tau+\delta}) = \gamma \, P(V_{\tau+\delta} \mid S_\tau = 0, S_{\tau,\tau+\delta} = 1) \, P(S_{\tau,\tau+\delta} = 1 \mid S_\tau = 0),$

where $\gamma$ is a normalizing constant.
Similarly, the probability of no change in $[t+\tau, t+\tau+\delta]$ is

    $P(S_{\tau,\tau+\delta} = 0 \mid S_\tau = 0, V_{\tau+\delta}) = \gamma \, P(V_{\tau+\delta} \mid S_{\tau+\delta} = 0) \, P(S_{\tau,\tau+\delta} = 0 \mid S_\tau = 0),$

and the posterior odds ratio between the two hypotheses is

    $\frac{P(S_{\tau,\tau+\delta} = 1 \mid S_\tau = 0, V_{\tau+\delta})}{P(S_{\tau,\tau+\delta} = 0 \mid S_\tau = 0, V_{\tau+\delta})} = \frac{P(V_{\tau,\tau+\delta} \mid S_{\tau,\tau+\delta} = 1)}{P(V_{\tau,\tau+\delta} \mid S_{\tau,\tau+\delta} = 0)} \, \frac{P(S_{\tau,\tau+\delta} = 1 \mid S_\tau = 0)}{P(S_{\tau,\tau+\delta} = 0 \mid S_\tau = 0)} = \frac{P(V_{\tau,\tau+\delta} \mid S_{\tau,\tau+\delta} = 1)}{P(V_{\tau,\tau+\delta} \mid S_{\tau,\tau+\delta} = 0)} \, \frac{P(S_{\tau,\tau+\delta} = 1, S_\tau = 0)}{P(S_{\tau+\delta} = 0)}$    (5)

where we have assumed that, given $S_{\tau,\tau+\delta}$, $V_{\tau,\tau+\delta}$ is independent of all other $V$ and $S$. In this expression, while the first term on the right hand side is the ratio of the conditional likelihoods of activity given the state sequence, the second term is simply the ratio of the probabilities that there may (or may not) be a shot transition $\tau$ units of time after the previous one. Hence, the shot duration density becomes a prior for the segmentation process. This is intuitive, since knowledge about the shot duration is a form of prior knowledge about the structure of the video that should be used to favor segmentations that are more plausible. Assuming further that $V$ is stationary, defining $\Delta\tau = [t+\tau, t+\tau+\delta]$, considering the probability density function $p(\tau)$ for the time elapsed until the first scene change after $t$, and taking logarithms, leads to a log posterior odds ratio $\mathcal{L}_{post}$ of the form

    $\mathcal{L}_{post} = \log \frac{P(V_{\Delta\tau} \mid S_{\Delta\tau} = 1)}{P(V_{\Delta\tau} \mid S_{\Delta\tau} = 0)} + \log \frac{\int_{\tau}^{\tau+\delta} p(\alpha)\,d\alpha}{\int_{\tau+\delta}^{\infty} p(\alpha)\,d\alpha}.$    (6)

The optimal answer to the question of whether a shot change occurs in $[t+\tau, t+\tau+\delta]$ is thus to declare that a boundary exists if

    $\log \frac{P(V_{\Delta\tau} \mid S_{\Delta\tau} = 1)}{P(V_{\Delta\tau} \mid S_{\Delta\tau} = 0)} > \log \frac{\int_{\tau+\delta}^{\infty} p(\alpha)\,d\alpha}{\int_{\tau}^{\tau+\delta} p(\alpha)\,d\alpha} = \mathcal{T}(\tau),$    (7)

and that there is no boundary otherwise. Comparing this with (4), it is clear that the inclusion of the shot duration prior transforms the fixed thresholding approach into an adaptive one, where the threshold depends on how much time has elapsed since the previous shot boundary.
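The adaptive threshold of (7) depends on the duration prior only through its survival function, so it can be computed generically for any SciPy distribution; the prior parameters below are illustrative, not fitted values:

```python
import numpy as np
from scipy.stats import gamma, weibull_min

def adaptive_threshold(prior, tau, delta):
    """Eq. (7): T(tau) = log( P(next boundary after tau+delta)
                              / P(next boundary in [tau, tau+delta]) )."""
    s_tau = prior.sf(tau)            # survival: integral from tau to infinity
    s_next = prior.sf(tau + delta)
    return np.log(s_next / (s_tau - s_next))

erlang_prior = gamma(a=3, scale=2.0)           # Erlang: r = 3, 1/lambda = 2 s
weibull_prior = weibull_min(c=1.8, scale=6.0)  # alpha = 1.8, beta = 6 s

# the threshold decreases as time since the last boundary grows
t_early = adaptive_threshold(weibull_prior, tau=1.0, delta=1 / 30)
t_late = adaptive_threshold(weibull_prior, tau=20.0, delta=1 / 30)
```

A boundary is declared whenever the log-likelihood ratio of (4) exceeds this time-varying threshold instead of zero.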
4.2.1 The Erlang model

It can be shown that, under the Erlang assumption,

    $\int_{\tau}^{\infty} \varepsilon_{r,\lambda}(\alpha)\,d\alpha = \frac{1}{\lambda} \sum_{i=1}^{r} \varepsilon_{i,\lambda}(\tau),$    (8)

and the threshold of (7) becomes

    $\mathcal{T}_\varepsilon(\tau) = \log \frac{\sum_{i=1}^{r} \varepsilon_{i,\lambda}(\tau+\delta)}{\sum_{i=1}^{r} \left[\varepsilon_{i,\lambda}(\tau) - \varepsilon_{i,\lambda}(\tau+\delta)\right]}.$    (9)

Its variation over time is presented in Figure 4. While in the initial segment of the shot the threshold is large and shot changes are unlikely to be accepted, the threshold decreases as the scene progresses, increasing the likelihood that shot boundaries will be declared.

[Figure 4: Temporal evolution of the Bayesian threshold for the Erlang (left) and Weibull (center) priors. Right: Total number of errors for all thresholds.]

Even though, qualitatively, this is the behavior one would desire, a closer observation of the figure reveals the major limitation of the Erlang prior: its steady-state behavior. Ideally, in addition to decreasing monotonically over time, the threshold should not be lower bounded by a positive value, as this may lead to situations in which its steady-state value is high enough to miss several consecutive shot boundaries. This limitation is a consequence of the constant arrival rate assumption discussed in section 2 and can be avoided by relying instead on the Weibull prior.

4.2.2 The Weibull model

It can be shown that, under the Weibull assumption,

    $\int_{\tau}^{\infty} w_{\alpha,\beta}(t)\,dt = \exp\left[-\left(\frac{\tau}{\beta}\right)^\alpha\right],$    (10)

from which

    $\mathcal{T}_w(\tau) = -\log\left\{\exp\left[\frac{(\tau+\delta)^\alpha - \tau^\alpha}{\beta^\alpha}\right] - 1\right\}.$    (11)

As illustrated by Figure 4, unlike the threshold associated with the Erlang prior, $\mathcal{T}_w(\tau)$ tends to $-\infty$ as $\tau$ grows without bound. This guarantees that a new shot boundary will always be found if one waits long enough. In summary, both the Erlang and the Weibull priors lead to adaptive thresholds that are more intuitive than the fixed threshold commonly employed for shot segmentation.

5 Segmentation Results

The performance of Bayesian shot segmentation was evaluated on a database containing the promotional trailers of Figure 1.
Each trailer consists of 2 to 5 minutes of video, and the total number of shots in the database is 1959. In all experiments, performance was evaluated by the leave-one-out method. Ground truth was obtained by manual segmentation of all the trailers. We evaluated the performance of Bayesian models with Erlang, Weibull and Poisson shot duration priors and compared them against the best possible performance achievable with a fixed threshold. For the latter, the optimal threshold was obtained by brute force, i.e. testing several values and selecting the one that performed best. Error rates for all priors are shown in Figure 4, where it is visible that, while the Poisson prior leads to worse accuracy than the static threshold, both the Erlang and the Weibull priors lead to significant improvements. The Weibull prior achieves the overall best performance, decreasing the error rate of the static threshold by 20%. The reasons for the improved performance of Bayesian segmentation are illustrated by Figure 5, which presents the evolution of the thresholding process for a segment from one of the trailers in the database ("blankman"). Two thresholding approaches are depicted: Bayesian with the Weibull prior, and standard fixed thresholding. The adaptive behavior of the Bayesian threshold significantly increases the robustness against spurious peaks of the activity metric originated by events such as very fast motion, explosions, camera flashes, etc.

[Figure 5: An example of the thresholding process. Top: Bayesian; the likelihood ratio and the Weibull threshold are shown. Bottom: Fixed; histogram distances and optimal threshold (determined by leave-one-out using the remainder of the database) are presented. Errors are indicated by circles.]

References

[1] D. Bordwell and K. Thompson. Film Art: an Introduction. McGraw-Hill, 1986.
[2] J. Boreczky and L. Rowe. Comparison of video shot boundary detection techniques. In Proc. SPIE Conf. on Visual Communication and Image Processing, 1996.
[3] A. Drake. Fundamentals of Applied Probability Theory. McGraw-Hill, 1987.
[4] W. Niblack et al. The QBIC project: Querying images by content using color, texture, and shape. In Storage and Retrieval for Image and Video Databases, pages 173-181, SPIE, Feb. 1993, San Jose, California.
[5] R. Hogg and E. Tanis. Probability and Statistical Inference. Macmillan, 1993.
[6] K. Reisz and G. Millar. The Technique of Film Editing. Focal Press, 1968.
[7] M. Swain and D. Ballard. Color indexing. International Journal of Computer Vision, 7(1):11-32, 1991.
Kernel expansions with unlabeled examples

Martin Szummer
MIT AI Lab & CBCL
Cambridge, MA
szummer@ai.mit.edu

Tommi Jaakkola
MIT AI Lab
Cambridge, MA
tommi@ai.mit.edu

Abstract

Modern classification applications necessitate supplementing the few available labeled examples with unlabeled examples to improve classification performance. We present a new tractable algorithm for exploiting unlabeled examples in discriminative classification. This is achieved essentially by expanding the input vectors into longer feature vectors via both labeled and unlabeled examples. The resulting classification method can be interpreted as a discriminative kernel density estimate and is readily trained via the EM algorithm, which in this case is both discriminative and achieves the optimal solution. We provide, in addition, a purely discriminative formulation of the estimation problem by appealing to the maximum entropy framework. We demonstrate that the proposed approach requires very few labeled examples for high classification accuracy.

1 Introduction

In many modern classification problems such as text categorization, very few labeled examples are available, but a large number of unlabeled examples can be readily acquired. Various methods have recently been proposed to take advantage of unlabeled examples to improve classification performance. Such methods include the EM algorithm with naive Bayes models for text classification [1], the co-training framework [2], transduction [3, 4], and maximum entropy discrimination [5]. These approaches are divided primarily on the basis of whether they employ generative modeling or are motivated by robust classification. Unfortunately, the computational effort scales exponentially with the number of unlabeled examples for exact solutions in discriminative approaches such as transduction [3, 5]. Various approximations are available [4, 5] but their effect remains unclear.
In this paper, we formulate a complementary discriminative approach to exploiting unlabeled examples, effectively by using them to expand the representation of examples. This approach has several advantages, including the ability to represent the true Bayes optimal decision boundary and making explicit use of the density over the examples. It is also computationally feasible as stated. The paper is organized as follows. We start by discussing the kernel density estimate and providing a smoothness condition, assuming labeled data only. We subsequently introduce unlabeled data, define the expansion, and formulate the EM algorithm for discriminative training. In addition, we provide a purely discriminative version of the parameter estimation problem and formalize it as a maximum entropy discrimination problem. We then demonstrate experimentally that various concerns about the approach are not warranted.

2 Kernel density estimation and classification

We start by assuming a large number of labeled examples $D = \{(x_1, \tilde{y}_1), \ldots, (x_N, \tilde{y}_N)\}$, where $\tilde{y}_i \in \{-1, 1\}$ and $x_i \in \mathbb{R}^d$. A joint kernel density estimate can be written as

    $\hat{P}(x, y) = \frac{1}{N} \sum_{i=1}^{N} \delta(y, \tilde{y}_i) K(x, x_i)$    (1)

where $\int K(x, x_i)\,d\mu(x) = 1$ for each $i$. With an appropriately chosen kernel $K$, a function of $N$, $\hat{P}(x, y)$ will be consistent in the sense of converging to the joint density as $N \to \infty$. Given a fixed number of examples, the kernel functions $K(x, x_i)$ may be viewed as conditional probabilities $P(x|i)$, where $i$ indexes the observed points. For the purposes of this paper, we assume a Gaussian form $K(x, x_i) = N(x; x_i, \sigma^2 I)$. The labels $\tilde{y}_i$ assigned to the sampled points $x_i$ may themselves be noisy, and we incorporate $P(y|i)$, a location-specific probability of labels. The resulting joint density model is

    $P(x, y) = \frac{1}{N} \sum_{i=1}^{N} P(y|i) P(x|i).$

Interpreting $1/N$ as a prior probability of the index variable $i = 1, \ldots, N$, the resulting model conforms to the graph depicted above.
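A minimal sketch of classification with this model, assuming Gaussian kernels and given values for P(y=1|i) (function names and toy data are illustrative, not from the paper):

```python
import numpy as np

def membership(X, x, sigma):
    """P(i|x) for Gaussian kernels K(x, x_i) = N(x; x_i, sigma^2 I)."""
    sq = ((X - x) ** 2).sum(axis=1)
    log_k = -sq / (2 * sigma ** 2)     # log-kernel, up to a common constant
    w = np.exp(log_k - log_k.max())    # numerically stable normalization
    return w / w.sum()

def posterior(X, p_y_given_i, x, sigma):
    """P(y=1|x) = sum_i P(y=1|i) P(i|x)."""
    return membership(X, x, sigma) @ p_y_given_i

X = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 2.0]])
p1 = np.array([0.9, 0.8, 0.1])         # P(y=1|i) for each stored example
prob = posterior(X, p1, np.array([0.05, 0.0]), sigma=0.5)
```

A query near the first two points inherits their label probabilities, since the third membership probability is negligible.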
This is reminiscent of the aspect model for clustering of dyadic data [6]. There are two main differences. First, the number of aspects here equals the number of examples, and the model is not suitable for clustering. Second, we do not search for the probabilities $P(x|i)$ (kernels); instead they are associated with each observed example and are merely adjusted in terms of scale (kernel width). This restriction yields a significant computational advantage in classification, which is the objective in this paper. The posterior probability of the label $y$ given an example $x$ is given by $P(y|x) = \sum_i P(y|i) P(i|x)$, where $P(i|x) \propto P(x|i)$ as $P(i)$ is assumed to be uniform. The quality of the posterior probability depends both on how accurately $P(y|i)$ are known and on the properties of the membership probabilities $P(i|x)$ (always known), which must be relatively smooth. Here we provide a simple condition on the membership probabilities $P(i|x)$ so that any noise in the sampled labels for the available examples would not preclude accurate decisions. In other words, we wish to ensure that the conditional probabilities $P(y|x)$ can be evaluated accurately on the basis of the sampled estimate in Eq. (1). Removing the label noise provides an alternative way of setting the width parameter $\sigma$ of the Gaussian kernels. The simple lemma below, obtained via standard large deviation methods, ties the appropriate choice of the kernel width $\sigma$ to the squared norm of the membership probabilities $P(i|x_j)$.

Lemma 1 Let $I_N = \{1, \ldots, N\}$. Given any $\delta > 0$, $\epsilon > 0$, and any collection of distributions $p_{i|k} \geq 0$, $\sum_{i \in I_N} p_{i|k} = 1$ for $k \in I_N$, such that $\|p_{\cdot|k}\|_2 \leq \epsilon/\sqrt{2\log(2N/\delta)}$, $\forall k \in I_N$, and independent samples $\tilde{y}_i \in \{-1, 1\}$ from some $P(y|i)$, $i \in I_N$, then

    $P\left(\exists k \in I_N : \left|\sum_{i=1}^{N} \tilde{y}_i p_{i|k} - \sum_{i=1}^{N} w_i p_{i|k}\right| > \epsilon\right) \leq \delta$

where $w_i = P(y = 1|i) - P(y = -1|i)$ and the probability is taken over the independent samples.
The lemma applies to our case by setting $p_{i|k} = P(i|x_k)$, letting $\{\tilde{y}_i\}$ represent the sampled labels for the examples, and by noting that the sign of $\sum_i w_i P(i|x)$ is the MAP decision rule from our model, $P(y = 1|x) - P(y = -1|x)$. The lemma states that as long as the membership probabilities have appropriately bounded squared norm, the noise in the labeling is inconsequential for the classification decisions. Note, for example, that a uniform distribution $p_{i|k} = 1/N$ has $\|p_{\cdot|k}\|_2 = 1/\sqrt{N}$, implying that the conditions are achievable for large $N$. The squared norm of $P(i|x)$ is directly controlled by the kernel width $\sigma^2$, and thus the lemma ties the kernel width to the accuracy of estimating the conditional probabilities $P(y|x)$. Algorithms for adjusting the kernel width(s) on this basis will be presented in a longer version of the paper.

3 The expansion and EM estimation

A useful way to view the resulting kernel density estimate is that each example $x$ is represented by a vector of membership probabilities $P(i|x)$, $i = 1, \ldots, N$. Such mixture distance representations have been used extensively; the representation can also be viewed as a Fisher score vector computed with respect to an adjustable weighting $P(i)$. The examples in this new representation are classified by associating $P(y|i)$ with each component and computing $P(y|x) = \sum_i P(y|i) P(i|x)$. An alternative approach to exploiting kernel density estimates in classification is given by [7]. We now assume that we have labels for only a few examples, and our training data is $\{(x_1, \tilde{y}_1), \ldots, (x_L, \tilde{y}_L), x_{L+1}, \ldots, x_N\}$. In this case, we may continue to use the model defined above and estimate the free parameters $P(y|i)$, $i = 1, \ldots, N$, from the few labeled examples. In other words, we can maximize the conditional log-likelihood

    $\sum_{l=1}^{L} \log P(\tilde{y}_l | x_l) = \sum_{l=1}^{L} \log \sum_{i=1}^{N} P(\tilde{y}_l | i) P(i | x_l)$    (2)

where the first summation is only over the labeled examples and $L \ll N$.
Since $P(i|x_l)$ are fixed, this objective function is jointly concave in the free parameters and lends itself to a unique maximum value. The concavity also guarantees that this optimization is easily performed via the EM algorithm [8]. Let $p_{i|l}$ be the soft assignment for component $i$ given $(x_l, \tilde{y}_l)$, i.e., $p_{i|l} = P(i | x_l, \tilde{y}_l) \propto P(\tilde{y}_l | i) P(i | x_l)$. The EM algorithm iterates between the E-step, where $p_{i|l}$ are recomputed from the current estimates of $P(y|i)$, and the M-step, where we update

    $P(y|i) \leftarrow \sum_{l: \tilde{y}_l = y} p_{i|l} \Big/ \sum_{l} p_{i|l}.$

This procedure may have to be adjusted in cases where the overall frequency of different labels in the (labeled) training set deviates significantly from uniform. A simple rescaling $P(y|i) \leftarrow P(y|i)/L_y$ by the frequencies $L_y$ and renormalization after each M-step would probably suffice. The runtime of this algorithm is $O(LN)$. The discriminative formulation suggests that EM will provide reasonable parameter estimates $P(y|i)$ for classification purposes. The quality of the solution, as well as the potential for overfitting, is contingent on the smoothness of the kernels or, equivalently, the smoothness of the membership probabilities $P(i|x)$. Note, however, that whether or not $P(y|i)$ converges to the extreme values 0 or 1 is not an indication of overfitting. Actual classification decisions for unlabeled examples $x_i$ (included in the expansion) need to be made on the basis of $P(y|x_i)$ and not on the basis of $P(y|i)$, which function as parameters.

4 Discriminative estimation

An alternative discriminative formulation is also possible, one that is more sensitive to the decision boundary rather than the probability values associated with the labels. To this end, consider the conditional probability $P(y|x) = \sum_i P(y|i) P(i|x)$. The decisions are made on the basis of the sign of the discriminant function

    $f(x) = P(y = 1|x) - P(y = -1|x) = \sum_{i=1}^{N} w_i P(i|x)$    (3)

where $w_i = P(y = 1|i) - P(y = -1|i)$.
This is similar to a linear classifier, and there are many ways of estimating the weights $w_i$ discriminatively. The weights should remain bounded, however, i.e., $w_i \in [-1, 1]$, so long as we wish to maintain the kernel density interpretation. Estimation algorithms with Euclidean norm regularization such as SVMs would not be appropriate in this sense. Instead, we employ the maximum entropy discrimination (MED) framework [5] and rely on the relation $w_i = E\{y_i\} = \sum_{y_i = \pm 1} y_i P_i(y_i)$ to estimate the distribution $P(y)$ over all the labels $y = [y_1, \ldots, y_N]$. Here $y_i$ is a parameter associated with the $i$th example and should be distinguished from any observed labels. We can show that in this case the maximum entropy solution factors across the examples, $P(y_1, \ldots, y_N) = \prod_i P_i(y_i)$, and we can formulate the estimation problem directly in terms of the marginals $P_i(y_i)$. The maximum entropy formalism encodes the principle that label assignments $P_i(y_i)$ for the examples should remain uninformative to the extent possible given the classification objective. More formally, given a set of $L$ labeled examples $(x_1, \tilde{y}_1), \ldots, (x_L, \tilde{y}_L)$, we maximize $\sum_{i=1}^{N} H(y_i) - C \sum_l \xi_l$ subject to the classification constraints

    $\tilde{y}_l \sum_i w_i P(i|x_l) \geq \gamma - \xi_l, \quad l = 1, \ldots, L$    (4)

where $H(y_i)$ is the entropy of $y_i$ relative to the marginal $P_i(y_i)$. Here $\gamma$ specifies the target separation ($\gamma \in [0, 1]$) and the slack variables $\xi_l \geq 0$ permit deviations from the target to ensure that a solution always exists. The solution is not very sensitive to these parameters, and $\gamma = 0.1$ and $C = 40$ worked well for many problems. The advantage of this formulation is that effort is spent only on those training examples whose classification is uncertain. Examples already classified correctly with a margin larger than $\gamma$ are effectively ignored. The optimization problem and algorithms are explained in the appendix.
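The EM updates of section 3 for the parameters P(y=1|i) can be sketched as follows (a minimal illustration, not the authors' code; the function name and the toy data are assumptions):

```python
import numpy as np

def em_label_params(P_i_given_x, y_labeled, n_iter=50):
    """Estimate P(y=1|i) from a few labeled points (section 3).

    P_i_given_x -- (L, N) membership probabilities P(i|x_l), fixed
    y_labeled   -- (L,) labels in {-1, +1}
    """
    L, N = P_i_given_x.shape
    q = np.full(N, 0.5)                    # P(y=1|i), uninformative start
    for _ in range(n_iter):
        # E-step: responsibilities p_{i|l} proportional to P(y_l|i) P(i|x_l)
        lik = np.where(y_labeled[:, None] == 1, q[None, :], 1 - q[None, :])
        p = lik * P_i_given_x
        p = p / np.maximum(p.sum(axis=1, keepdims=True), 1e-12)
        # M-step: P(y=1|i) = sum_{l: y_l = +1} p_{i|l} / sum_l p_{i|l}
        num = p[y_labeled == 1].sum(axis=0)
        den = p.sum(axis=0)
        q = np.where(den > 0, num / np.maximum(den, 1e-12), 0.5)
    return q
```

On a toy problem where the two labeled points sit on disjoint sets of components, the components near the positive example get P(y=1|i) close to one.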
5 Discussion of the expanded representation

The kernel expansion enables us to represent the Bayes optimal decision boundary provided that the kernel density estimate is sufficiently accurate. With this representation, the EM and MED algorithms actually estimate decision boundaries that are sensitive to the density $P(x)$. For example, labeled points in high-density regions will influence the boundary more than those in low-density regions. The boundary will partly follow the density but, unlike in unsupervised methods, will adhere strongly to the labeled points. Moreover, our estimation techniques limit the effect of outliers, as all points have a bounded weight $w_i \in [-1, 1]$ (spurious unlabeled points do not adversely affect the boundary). As we impose smoothness constraints on the membership probabilities $P(i|x)$, we also guarantee that the capacity of the resulting classifier need not increase with the number of unlabeled examples (in the fat shattering sense). Also, in the context of the maximum entropy formulation, if a point is not helpful for the classification constraints, then entropy is maximized for $P_i(y = \pm 1) = 0.5$, implying $w_i = 0$, and the point has no effect on the boundary. If we dispense with the conditional probability interpretation of the kernels $K$, we are free to choose them from a more general class of functions. For example, the kernels no longer have to integrate to 1. An expansion of $x$ in terms of these kernels can still be meaningful; as a special case, when linear kernels are chosen, the expansion reduces to weighting distances between points by the covariance of the data. Distinctions along high-variance directions then become easier to make, which is helpful when between-class scatter is greater than within-class scatter.
Thus, even though the probabilistic interpretation is missing, a simple preprocessing step can still help, e.g., support vector machines to take advantage of unlabeled data: we can expand the inputs x in terms of kernels G from labeled and unlabeled points as in φ(x) = (1/Z)[G(x, x_1), ..., G(x, x_N)], where Z optionally normalizes the feature vector.

6 Results

We first address the potential concern that the expanded representation may involve too many degrees of freedom and result in poor generalization. Figure 1a) demonstrates that this is not the case and, instead, the test classification error approaches the limiting asymptotic rate exponentially fast. The problem considered was a DNA splice site classification problem with 500 examples for which d = 100. Varying sizes of random subsets were labeled and all the examples were used in the expansion as unlabeled examples. The error rate was computed on the basis of the remaining 500 - L examples without labels, where L denotes the number of labeled examples. The results in the figure were averaged across 20 independent runs. The exponential rate of convergence towards the limiting rate is evidenced by the linear trend in the semilog figure 1a). The mean test errors shown in figure 1b) indicate that the purely discriminative training (MED) can contribute substantially to the accuracy. The kernel width in these experiments was simply fixed to the median distance to the 5th nearest neighbor from the opposite class. Results from other methods of choosing the kernel width (the squared norm, adaptive) will be discussed in the longer version of the paper. Another concern is perhaps that the formulation is valid only in cases where we have a large number of unlabeled examples. In principle, the method could deteriorate rapidly after the kernel density estimate no longer can be assumed to give reasonable estimates. Figure 2a) illustrates that this is not a valid interpretation.
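The preprocessing step described above, expanding inputs in terms of kernels over labeled and unlabeled points, can be sketched as a small feature map; the Gaussian kernel choice and the names here are assumptions for illustration, not the paper's code:

```python
import numpy as np

def kernel_expand(X_query, X_basis, sigma=1.0, normalize=True):
    """phi(x) = (1/Z)[G(x, x_1), ..., G(x, x_N)] with Gaussian kernels G,
    using every labeled and unlabeled point in X_basis as a basis point."""
    d2 = ((X_query[:, None, :] - X_basis[None, :, :]) ** 2).sum(axis=-1)
    Phi = np.exp(-d2 / (2.0 * sigma ** 2))
    if normalize:
        Phi = Phi / Phi.sum(axis=1, keepdims=True)  # the optional 1/Z factor
    return Phi
```

The rows of the returned matrix can then be fed to any off-the-shelf linear classifier (e.g. a linear SVM), which is exactly the "simple preprocessing" use suggested in the text.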
The problem here is to classify DNA microarray experiments on the basis of the leukemia types that the tissues used in the array experiments corresponded to. Each input vector for the classifier consists of the expression levels of over 7000 genes that were included as probes in the arrays. The number of examples available was 38 for training and 34 for testing. We included all examples as unlabeled points in the expansion and randomly selected subsets of labeled training examples, and measured the performance only on the test examples (which were of slightly different type and hence more appropriate for assessing generalization). Figure 2 shows rapid convergence for EM and the discriminative MED formulation. The "asymptotic" level here corresponds to about one classification error among the 34 test examples. The results were averaged over 20 independent runs.

Figure 1: a) A semilog plot of the test error rate for the EM formulation less the asymptotic rate as a function of labeled examples. The linear trend in the figure implies that the error rate approaches the asymptotic error exponentially fast. b) The mean test errors for EM, MED and SVM as a function of the number of labeled examples. SVM does not use unlabeled examples.

Figure 2: The mean test errors for the leukemia classification problem as a function of the number of randomly chosen labeled examples. Results are given for both EM (lower line) and MED (upper line) formulations.

7 Conclusion

We have provided a complementary framework for exploiting unlabeled examples in discriminative classification problems. The framework involves a combination of the ideas of kernel density estimation and representational expansion of input vectors. A simple EM
algorithm is sufficient for finding globally optimal parameter estimates, but we have shown that a purely discriminative formulation can yield substantially better results within the framework. Possible extensions include using the kernel expansions with transductive algorithms that enforce margin constraints also for the unlabeled examples [5]. Such a combination can be particularly helpful in terms of capturing the lower dimensional structure of the data. Other extensions include analysis of the framework similarly to [9].

Acknowledgments

The authors gratefully acknowledge support from NTT and NSF. Szummer would also like to thank Thomas Minka for many helpful discussions and insights.

References

[1] Nigam K., McCallum A., Thrun S., and Mitchell T. (2000) Text classification from labeled and unlabeled examples. Machine Learning 39(2):103-134.
[2] Blum A., Mitchell T. (1998) Combining Labeled and Unlabeled Data with Co-Training. In Proc. 11th Annual Conf. Computational Learning Theory, pp. 92-100.
[3] Vapnik V. (1998) Statistical learning theory. John Wiley & Sons.
[4] Joachims T. (1999) Transductive inference for text classification using support vector machines. International Conference on Machine Learning.
[5] Jaakkola T., Meila M., and Jebara T. (1999) Maximum entropy discrimination. In Advances in Neural Information Processing Systems 12.
[6] Hofmann T., Puzicha J. (1998) Unsupervised Learning from Dyadic Data. International Computer Science Institute, TR-98-042.
[7] Tong S., Koller D. (2000) Restricted Bayes Optimal Classifiers. Proceedings AAAI.
[8] Miller D., Uyar T. (1996) A Mixture of Experts Classifier with Learning Based on Both Labelled and Unlabelled Data. In Advances in Neural Information Processing Systems 9, pp. 571-577.
[9] Castelli V., Cover T. (1996) The relative value of labeled and unlabeled samples in pattern recognition with an unknown mixing parameter. IEEE Transactions on Information Theory 42(6):2102-2117.
A Maximum entropy solution

The unique solution to the maximum entropy estimation problem is found by introducing Lagrange multipliers {λ_l} for the classification constraints. The multipliers satisfy λ_l ∈ [0, C], where the lower bound comes from the inequality constraints and the upper bound from the linear margin penalties being minimized. To represent the solution and find the optimal setting of λ_l we must evaluate the partition function

Z(λ) = e^{-γ Σ_l λ_l} Σ_y Π_{i=1}^N e^{Σ_l ỹ_l λ_l y_i P(i|x_l)}   (5)
     = e^{-γ Σ_l λ_l} Π_{i=1}^N ( e^{Σ_l ỹ_l λ_l P(i|x_l)} + e^{-Σ_l ỹ_l λ_l P(i|x_l)} )   (6)

that normalizes the maximum entropy distribution. Here ỹ_l denote the observed labels. Minimizing the jointly convex log-partition function log Z(λ) with respect to the Lagrange multipliers leads to the optimal setting {λ_l*}. This optimization is readily done via an axis parallel line search (e.g. the bisection method). The required gradients are given by

∂ log Z(λ)/∂λ_k = -γ + Σ_{i=1}^N tanh( Σ_l ỹ_l λ_l P(i|x_l) ) ỹ_k P(i|x_k)   (7)
                = -γ + ỹ_k Σ_{i=1}^N E_{P_i*}{y_i} P(i|x_k)   (8)

(this is essentially the classification constraint). The expectation is taken with respect to the maximum entropy distribution P*(y_1, ..., y_N) = P_1*(y_1) ··· P_N*(y_N), where the components are P_i*(y_i) ∝ exp{ Σ_l ỹ_l λ_l y_i P(i|x_l) }. The label averages w_i* = E_{P*}{y_i} = Σ_{y_i} y_i P_i*(y_i) are needed for the decision rule as well as in the optimization. We can identify these from above as w_i* = tanh( Σ_l ỹ_l λ_l* P(i|x_l) ), and they are readily evaluated. Finding the solution involves O(L²N) operations. Often the numbers of positive and negative training labels are imbalanced. The MED formulation (analogously to SVMs) can be adjusted by defining the margin penalties as C⁺ Σ_{l: ỹ_l=1} ξ_l + C⁻ Σ_{l: ỹ_l=-1} ξ_l, where, for example, L⁺C⁺ = L⁻C⁻ equalizes the mean penalties. The coefficients C⁺ and C⁻ can also be modified adaptively during the estimation process to balance the rate of misclassification errors across the two classes.
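The log-partition function and its gradient translate directly into code. The following sketch is mine, not the paper's (function names and the finite-difference check are assumptions); it evaluates log Z(λ) of eqs. (5)-(6), the gradient of eq. (7), and the label averages w_i:

```python
import numpy as np

def log_partition(lam, P, y_obs, gamma):
    """log Z(lambda), eqs. (5)-(6). P[i, l] = P(i|x_l) for labeled example l;
    y_obs holds the observed labels +/-1; lam are multipliers in [0, C]."""
    s = P @ (y_obs * lam)              # s_i = sum_l y_l lambda_l P(i|x_l)
    return -gamma * lam.sum() + np.sum(np.log(2.0 * np.cosh(s)))

def grad_log_partition(lam, P, y_obs, gamma):
    """d log Z / d lambda_k, eqs. (7)-(8)."""
    s = P @ (y_obs * lam)
    return -gamma + y_obs * (np.tanh(s) @ P)

def label_averages(lam, P, y_obs):
    """w_i = tanh(sum_l y_l lambda_l P(i|x_l)), used in the decision rule."""
    return np.tanh(P @ (y_obs * lam))
```

Minimizing log Z coordinate-wise, with each λ_k found by bisection on the one-dimensional gradient and clipped to [0, C], recovers the axis-parallel line search described above.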
2000
52
1,853
Universality and individuality in a neural code

Elad Schneidman,1,2 Naama Brenner,3 Naftali Tishby,1,3 Rob R. de Ruyter van Steveninck,3 William Bialek3

1School of Computer Science and Engineering, Center for Neural Computation and 2Department of Neurobiology, Hebrew University, Jerusalem 91904, Israel; 3NEC Research Institute, 4 Independence Way, Princeton, New Jersey 08540, USA

{elads,tishby}@cs.huji.ac.il, {bialek,ruyter,naama}@research.nj.nec.com

Abstract

The problem of neural coding is to understand how sequences of action potentials (spikes) are related to sensory stimuli, motor outputs, or (ultimately) thoughts and intentions. One clear question is whether the same coding rules are used by different neurons, or by corresponding neurons in different individuals. We present a quantitative formulation of this problem using ideas from information theory, and apply this approach to the analysis of experiments in the fly visual system. We find significant individual differences in the structure of the code, particularly in the way that temporal patterns of spikes are used to convey information beyond that available from variations in spike rate. On the other hand, all the flies in our ensemble exhibit a high coding efficiency, so that every spike carries the same amount of information in all the individuals. Thus the neural code has a quantifiable mixture of individuality and universality.

1 Introduction

When two people look at the same scene, do they see the same things? This basic question in the theory of knowledge seems to be beyond the scope of experimental investigation. An accessible version of this question is whether different observers of the same sense data have the same neural representation of these data: how much of the neural code is universal, and how much is individual? Differences in the neural codes of different individuals may arise from various sources: First, different individuals may use different 'vocabularies' of coding symbols.
Second, they may use the same symbols to encode different stimulus features. Third, they may have different latencies, so they 'say' the same things at slightly different times. Finally, perhaps the most interesting possibility is that different individuals might encode different features of the stimulus, so that they 'talk about different things'. If we are to compare neural codes we must give a quantitative definition of similarity or divergence among neural responses. We shall use ideas from information theory [1, 2] to quantify the notions of distinguishability, functional equivalence and content in the neural code. This approach does not require a metric either on the space of stimuli or on the space of neural responses (but see [3]); all notions of similarity emerge from the statistical structure of the neural responses. We apply these methods to analyze experiments on an identified motion sensitive neuron in the fly's visual system, the cell H1 [4]. Many invertebrate nervous systems have cells that can be named and numbered [5]; in many cases, including the motion sensitive cells in the fly's lobula plate, a small number of neurons is involved in representing a similarly identifiable portion of the sensory world. It might seem that in these cases the question of whether different individuals share the same neural representation of the visual world would have a trivial answer. Far from trivial, we shall see that the neural code even for identified neurons in flies has components which are common among flies and significant components which are individual to each fly.

2 Distinguishing flies according to their spike patterns

Nine different flies are shown precisely the same movie, which is repeated many times for each fly (Figure 1a). As we show the movie we record the action potentials from the H1 neuron.
1 The details of the stimulus movie should not have a qualitative impact on the results, provided that the movie is sufficiently long and rich to drive the system through a reasonable and natural range of responses. Figure 1b shows a portion of the responses of the different flies to the visual stimulus - the qualitative features of the neural response on long time scales (~100 ms) are common to almost all the flies, and some aspects of the response are reproducible on a (few) millisecond time scale across multiple presentations of the movie to each fly. Nonetheless the responses are not identical in the different flies, nor are they perfectly reproduced from trial to trial in the same fly. To analyze similarities and differences among the neural codes, we begin by discretizing the neural response into time bins of size Δt = 2 ms. At this resolution there are almost never two spikes in a single bin, so we can think of the neural response as a binary string, as in Fig. 1c-d. We examine the response in blocks or windows of time having length T, so that an individual neural response becomes a binary 'word' W with T/Δt 'letters'. Clearly, any fixed choice of T and Δt is arbitrary, and so we explore a range of these parameters. Figure 1f shows that different flies 'speak' with similar but distinct vocabularies. We quantify the divergence among vocabularies by asking how much information the observation of a single word W provides about the identity of its source, that is about the identity of the fly which generates this word:

I(W → identity; T) = Σ_{i=1}^N P_i Σ_W P^i(W) log2 [ P^i(W) / P^ens(W) ] bits,   (1)

1 The stimulus presented to the flies is a rigidly moving pattern of vertical bars, randomly dark or bright, with average intensity Ī ≈ 100 mW/(m² · sr). The pattern position was defined by a pseudorandom sequence, simulating a diffusive motion or random walk. Recordings were made from the H1 neuron of immobilized flies, using standard methods.
We draw attention to three points relevant for the present analysis: (1) The flies are freshly caught female Calliphora, so that our 'ensemble of flies' approaches a natural ensemble. (2) In each fly we identify the H1 cell as the unique spiking neuron in the lobula plate that has a combination of wide field sensitivity, inward directional selectivity for horizontal motion, and contralateral projection. (3) Recordings are rejected only if raw electrode signals are excessively noisy or unstable.

Figure 1: Different flies' spike trains and word statistics. (a) All flies view the same random vertical bar pattern moving across their visual field with a time dependent velocity, part of which is shown. In the experiment, a 40 sec waveform is presented repeatedly, 90 times. (b) A set of 45 response traces to the part of the stimulus shown in (a) from each of the 9 flies. The traces are taken from the segment of the experiment where the transient responses have decayed. (c) Example of construction of the local word distributions. Zooming in on a segment of the repeated responses of fly 1 to the visual stimuli, the fly's spike trains are divided into contiguous 2 ms bins, and the spikes in each of the bins are counted. For example, we get the 6 letter words that the fly used at time 3306 ms into the input trace. (d) Similar as (c) for fly 6. (e) The distributions of words that flies 1 and 6 used at time t = 3306 ms from the beginning of the stimulus.
The time dependent distributions, P^1(W|t = 3306 ms) and P^6(W|t = 3306 ms), are presented as a function of the binary value of the actual 'word', e.g., binary word value '17' stands for the word '010001'. (f) Collecting the words that each of the flies used through all of the visual stimulus presentations, we get the total word distributions for flies 1 and 6, P^1(W) and P^6(W).

Here P_i = 1/N is the a priori probability that we are recording from fly i, P^i(W) is the probability that fly i will generate (at some time) the word W in response to the stimulus movie, and P^ens(W) is the probability that any fly in the whole ensemble of flies would generate this word,

P^ens(W) = Σ_{i=1}^N P_i P^i(W).   (2)

The measure I(W → identity; T) has been discussed by Lin [11] as the 'Jensen-Shannon divergence' D_JS among the distributions P^i(W).²

² Unlike the Kullback-Leibler divergence [2] (the 'standard' choice for measuring dissimilarity among distributions), the Jensen-Shannon divergence is symmetric and bounded (see also [12]). Moreover, D_JS can be used to bound other measures of similarity, such as the optimal or Bayesian probability of identifying correctly the origin of a sample.

We find that information about identity is accumulating at a more or less constant rate well before the undersampling limits of the experiment are reached (Fig. 2a). Thus I(W → identity; T) ≈ R(W → identity) · T, with R(W → identity) ≈ 5 bits/s and a very weak dependence on the time resolution Δt. Since the mean spike rate can be measured by counting the number of 1s in each word W, this information includes the differences in firing rate among the different flies. Even if flies use very similar vocabularies, they may differ substantially in the way that they associate words with particular stimulus features. Since we present the stimulus repeatedly to each fly, we can specify the stimulus precisely by noting the time relative to the beginning of the stimulus.
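The word construction and the identity information of eqs. (1)-(2) can be sketched in a few lines; the toy numbers below are illustrative, not the experimental data, and the integer word encoding follows the 'binary word value' convention of Figure 1:

```python
import numpy as np

def words(binary_bins, letters):
    """Cut a binary spike string (e.g. 2 ms bins) into non-overlapping
    words of T/dt letters, encoded as integers: '010001' -> 17."""
    n = len(binary_bins) // letters
    return [int("".join(str(b) for b in binary_bins[k * letters:(k + 1) * letters]), 2)
            for k in range(n)]

def identity_information(P_flies, priors=None):
    """I(W -> identity; T) of eq. (1): the Jensen-Shannon divergence among
    the per-fly word distributions (rows of P_flies), in bits."""
    P = np.asarray(P_flies, dtype=float)
    N = len(P)
    pi = np.full(N, 1.0 / N) if priors is None else np.asarray(priors, float)
    P_ens = pi @ P                                  # eq. (2)
    total = 0.0
    for i in range(N):
        nz = P[i] > 0
        total += pi[i] * np.sum(P[i, nz] * np.log2(P[i, nz] / P_ens[nz]))
    return total
```

Identical vocabularies give zero identity information, while completely disjoint deterministic vocabularies with uniform priors over two flies give exactly 1 bit.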
We can therefore consider the word W that the ith fly will generate at time t. This word is drawn from the distribution P^i(W|t), which we can sample, as in Fig. 1c-e, by looking across multiple presentations of the same stimulus movie. In parallel with the discussion above, we can measure the information that the word W observed at known t gives us about the identity of the fly,

I(W → identity | t; T) = Σ_{i=1}^N P_i Σ_W P^i(W|t) log2 [ P^i(W|t) / P^ens(W|t) ],   (3)

where the distribution of words used at time t by the whole ensemble of flies is

P^ens(W|t) = Σ_{i=1}^N P_i P^i(W|t).   (4)

The natural quantity is an average over all times t,

I({W, t} → identity; T) = ⟨ I(W → identity | t; T) ⟩_t bits,   (5)

where ⟨···⟩_t denotes an average over t. Figure 2b shows a plot of I({W, t} → identity; T)/T as a function of the observation time window of size T. Observing both the spike train and the stimulus together provides 32 ± 1 bits/s about the identity of the fly. This is more than six times as much information as we can gain by observing the spike train alone, and corresponds to gaining one bit in ~30 ms; correspondingly, a typical pair of flies in our ensemble can be distinguished reliably in ~30 ms. This is the time scale on which flies actually use their estimates of visual motion to guide their flight during chasing behavior [6], so that the neural codes of different individuals are distinguishable on the time scales relevant to behavior.

3 Different flies encode different amounts of information about the same stimulus

Having seen that we can distinguish reliably among individual flies using relatively short samples of the neural response, we turn to ask whether these substantial differences among codes have an impact on the ability of these cells to convey information about the visual stimulus. As discussed in Refs.
[7, 8], the information which the neural response of the ith fly provides about the stimulus, I^i(W → s(t); T), is determined by the same probability distributions defined above:³

I^i(W → s(t); T) = ⟨ Σ_W P^i(W|t) log2 [ P^i(W|t) / P^i(W) ] ⟩_t   (6)

³ Again we note that our estimate of the information rate itself is independent of any metric in the space of stimuli, nor does it depend on assumptions about which stimulus features are most important in the code.

Figure 2: Distinguishing one fly from others based on spike trains. (a) The average rate of information gained about the identity of a fly from its word distribution, as a function of the word size used (middle curve). The information rate is saturated even before we reach the maximal word length used. Also shown is the average rate of information that the word distribution of fly 1 (and 6) gives about its identity, compared with the word distribution mixture of all of the flies. The connecting line is used for clarification only. (b) Similar to (a), we compute the average amount of information that the distribution of words the fly used at a specific point in time gives about its identity. Averaging over all times, we show the amount of information gained about the identity of fly 1 (and 6) based on its time dependent word distributions, and the average over the 9 flies (middle curve). Error bars were calculated as in [8]. A "baseline calculation", where we subdivided the spike trains of one fly into artificial new individuals, and compared their spike trains, gave significantly smaller values (not shown).
Figure 3a shows that the flies in our ensemble span a range of information rates from ~50 to ~150 bits/s. This threefold range of information rates is correlated with the range of spike rates, so that each of the cells transmits nearly a constant amount of information per spike, 2.39 ± 0.24 bits/spike. This universal efficiency (10% variance over the population, despite threefold variations in total spike rate) reflects that cells with higher firing rates are not generating extra spikes at random, but rather each extra spike is equally informative about the stimulus. Although information rates are correlated with spike rates, this does not mean that information is carried by a "rate code" alone. To address the rate/timing distinction we compare the total information rate in Fig. 3a, which includes the detailed structure of the spike train, with the information carried in the temporal modulations of the spike rate. As explained in Ref. [10], the information carried by the arrival time of a single spike can be written as an integral over the time variations of the spike rate, and multiplying by the number of spikes gives us the expected information rate if spikes contribute independently; information rates larger than this represent synergy among spikes, or extra information in the temporal patterns of spikes. For all the flies in our ensemble, the total rate at which the spike train carries information is substantially larger than the 'single spike' information - 2.39 vs. 1.64 bits/spike, on average. This extra information is carried in the temporal patterns of spikes (Fig. 3b).

4 A universal codebook?

Even though flies differ in the structures of their neural responses, distinguishable responses could be functionally equivalent. Thus it might be that all flies could be
endowed (genetically?) with a universal or consensus codebook that allows each individual to make sense of her own spike trains, despite the differences from her conspecifics. Thus we want to ask how much information we lose if the identity of the flies is hidden from us, or equivalently how much each fly can gain by knowing its own individual code.

Figure 3: The information about the stimulus that a fly's spike train carries is correlated with firing rate, and yet a significant part is in the temporal structure. (a) The rate at which the H1 spike train provides information about the visual stimulus is shown as a function of the average spike rate, with each fly providing a single data point. The linear fit of the data points for the 9 flies corresponds to a universal rate of 2.39 ± 0.24 bits/spike, as noted in the text. (b) The extra amount of information that the temporal structure of the spike train of each of the flies carries about the stimulus, as a function of the average firing rate of the fly (see [10]). The average amount of additional information that is carried by the temporal structure of the spike trains, over the population, is 45 ± 17%. Error bars were calculated as in [8].

If we observe the response of a neuron but don't know the identity of the individual generating this response, then we are observing responses drawn from the ensemble distributions defined above, P^ens(W|t) and P^ens(W). The information that words provide about the visual stimulus then is

I^mix(W → s(t); T) = ⟨ Σ_W P^ens(W|t) log2 [ P^ens(W|t) / P^ens(W) ] ⟩_t bits.   (7)

On the other hand, if we know the identity of the fly to be i, we gain the information that its spike train conveys about the stimulus, I^i(W → s(t); T), Eq. (6). The average information loss is then

ΔI_loss(W → s(t); T) = Σ_{i=1}^N P_i I^i(W → s(t); T) - I^mix(W → s(t); T).   (8)

After some algebra it can be shown that this average information loss is related to the information that the neural responses give about the identity of the individuals, as defined above:

ΔI_loss(W → s(t); T) = I({W, t} → identity; T) - I(W → identity; T).   (9)

The result is that, on average, not knowing the identity of the fly limits us to extracting only 64 bits/s of information about the visual stimulus. This should be compared with the average information rate of 92.3 bits/s in our ensemble of flies: knowing her own identity allows the average fly to extract 44% more information from H1. Further analysis shows that each individual fly gains approximately the same relative amount of information from knowing its personal codebook.

5 Discussion

We have found that the flies use similar yet distinct sets of 'words' to encode information about the stimulus. The main source of this difference is not in the total set of words (or spike rates) but rather in how (i.e. when) these words are used to encode the stimulus; taking this into account, the flies are discriminable on time scales of relevance to behavior. Using their different codes, the flies' H1 spike trains convey very different amounts of information from the same visual inputs. Nonetheless, all the flies achieve a high and constant efficiency in their encoding of this information, and the temporal structure of their spike trains adds nearly 50% more information than that carried by the rate. So how much is universal and how much is individual? We find that each individual fly would lose ~30% of the visual information carried by this neuron if it 'knew' only the codebook appropriate to the whole ensemble of flies. We leave the judgment of whether this is high individuality or not to the reader, but recall that this is the individuality in an identified neuron. Hence, we should expect that all neural circuits, both vertebrate and invertebrate, express a degree of universality and a degree of individuality.
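The relation of eq. (9) between the average information loss of eq. (8) and the two identity informations can be checked numerically on synthetic word distributions; this is a toy consistency check with made-up numbers, not the fly data:

```python
import numpy as np

def info_gain(P):
    """<sum_W P(W|t) log2 [P(W|t)/P(W)]>_t with uniform weight on t;
    rows of P are the time-dependent word distributions P(W|t)."""
    P_marg = P.mean(axis=0)
    out = 0.0
    for row in P:
        nz = row > 0
        out += np.sum(row[nz] * np.log2(row[nz] / P_marg[nz])) / len(P)
    return out

def js(dists, weights):
    """Jensen-Shannon divergence: sum_i pi_i sum_W P^i(W) log2 [P^i(W)/P^ens(W)]."""
    mix = weights @ dists
    out = 0.0
    for w, d in zip(weights, dists):
        nz = d > 0
        out += w * np.sum(d[nz] * np.log2(d[nz] / mix[nz]))
    return out

rng = np.random.default_rng(1)
N, T, W = 4, 5, 8                        # toy numbers of 'flies', times, words
P = rng.random((N, T, W))
P /= P.sum(axis=2, keepdims=True)        # P[i, t, w] = P^i(W=w | t)
pi = np.full(N, 1.0 / N)
P_ens = np.tensordot(pi, P, axes=1)      # P^ens(W|t), eq. (4)

# left-hand side of eq. (8): average information loss
loss = sum(pi[i] * info_gain(P[i]) for i in range(N)) - info_gain(P_ens)
# right-hand side of eq. (9): difference of identity informations
rhs = np.mean([js(P[:, t, :], pi) for t in range(T)]) - js(P.mean(axis=1), pi)
```

The two quantities agree to machine precision, and the loss is non-negative by the joint convexity of the Jensen-Shannon divergence.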
We hope that the methods introduced here will help to explore this issue of individuality more generally. This research was supported by a grant from the Ministry of Science, Israel.

References

[1] Shannon, C. E. A mathematical theory of communication, Bell Sys. Tech. J. 27, 379-423, 623-656 (1948).
[2] Cover, T. & Thomas, J. Elements of information theory (Wiley, 1991).
[3] Victor, J. D. & Purpura, K. Nature and precision of temporal coding in visual cortex: a metric-space analysis, J. Neurophysiol. 76, 1310-1326 (1996).
[4] Hausen, K. The lobular complex of the fly, in Photoreception and vision in invertebrates (ed Ali, M.) pp. 523-559 (Plenum, 1984).
[5] Bullock, T. Structure and Function in the Nervous Systems of Invertebrates (W. H. Freeman, San Francisco, 1965).
[6] Land, M. F. & Collett, T. S. Chasing behavior of houseflies (Fannia canicularis). A description and analysis, J. Comp. Physiol. 89, 331-357 (1974).
[7] de Ruyter van Steveninck, R. R., Lewen, G. D., Strong, S. P., Koberle, R. & Bialek, W. Reproducibility and variability in neural spike trains, Science 275, 1805-1808 (1997).
[8] Strong, S. P., Koberle, R., de Ruyter van Steveninck, R. & Bialek, W. Entropy and information in neural spike trains, Phys. Rev. Lett. 80, 197-200 (1998).
[9] Rieke, F., Warland, D., de Ruyter van Steveninck, R. & Bialek, W. Spikes: Exploring the Neural Code (MIT Press, 1997).
[10] Brenner, N., Strong, S. P., Koberle, R., de Ruyter van Steveninck, R. & Bialek, W. Synergy in a neural code, Neural Comp. 12, 1531-1552 (2000).
[11] Lin, J. Divergence measures based on the Shannon entropy, IEEE Trans. Inf. Theory 37, 145-151 (1991).
[12] El-Yaniv, R., Fine, S. & Tishby, N. Agnostic classification of Markovian sequences, NIPS 10, pp. 465-471 (MIT Press, 1997).
2000
53
1,854
Generalized Belief Propagation

Jonathan S. Yedidia, MERL, 201 Broadway, Cambridge, MA 02139, Phone: 617-621-7544, yedidia@merl.com
William T. Freeman, MERL, 201 Broadway, Cambridge, MA 02139, Phone: 617-621-7527, freeman@merl.com
Yair Weiss, Computer Science Division, UC Berkeley, 485 Soda Hall, Berkeley, CA 94720-1776, Phone: 510-642-5029, yweiss@cs.berkeley.edu

Abstract

Belief propagation (BP) was only supposed to work for tree-like networks but works surprisingly well in many applications involving networks with loops, including turbo codes. However, there has been little understanding of the algorithm or the nature of the solutions it finds for general graphs. We show that BP can only converge to a stationary point of an approximate free energy, known as the Bethe free energy in statistical physics. This result characterizes BP fixed-points and makes connections with variational approaches to approximate inference. More importantly, our analysis lets us build on the progress made in statistical physics since Bethe's approximation was introduced in 1935. Kikuchi and others have shown how to construct more accurate free energy approximations, of which Bethe's approximation is the simplest. Exploiting the insights from our analysis, we derive generalized belief propagation (GBP) versions of these Kikuchi approximations. These new message passing algorithms can be significantly more accurate than ordinary BP, at an adjustable increase in complexity. We illustrate such a new GBP algorithm on a grid Markov network and show that it gives much more accurate marginal probabilities than those found using ordinary BP.

1 Introduction

Local "belief propagation" (BP) algorithms such as those introduced by Pearl are guaranteed to converge to the correct marginal posterior probabilities in tree-like graphical models. For general networks with loops, the situation is much less clear.
On the one hand, a number of researchers have empirically demonstrated good performance for BP algorithms applied to networks with loops. One dramatic case is the near Shannon-limit performance of "Turbo codes", whose decoding algorithm is equivalent to BP on a loopy network [2, 6]. For some problems in computer vision involving networks with loops, BP has also been shown to be accurate and to converge very quickly [2, 1, 7]. On the other hand, for other networks with loops, BP may give poor results or fail to converge [7]. For a general graph, little has been understood about what approximation BP represents, and how it might be improved. This paper's goal is to provide that understanding and introduce a set of new algorithms resulting from that understanding. We show that BP is the first in a progression of local message-passing algorithms, each giving equivalent results to a corresponding approximation from statistical physics known as the "Kikuchi" approximation to the Gibbs free energy. These algorithms have the attractive property of being user-adjustable: by paying some additional computational cost, one can obtain considerable improvement in the accuracy of one's approximation, and can sometimes obtain a convergent message-passing algorithm when ordinary BP does not converge.

2 Belief propagation fixed-points are zero gradient points of the Bethe free energy

We assume that we are given an undirected graphical model of N nodes with pairwise potentials (a Markov network). Such a model is very general, as essentially any graphical model can be converted into this form. The state of each node i is denoted by x_i, and the joint probability distribution function is given by

P(x_1, x_2, ..., x_N) = (1/Z) Π_{ij} ψ_ij(x_i, x_j) Π_i ψ_i(x_i)   (1)

where ψ_i(x_i) is the local "evidence" for node i, ψ_ij(x_i, x_j) is the compatibility matrix between nodes i and j, and Z is a normalization constant.
Note that we are subsuming any fixed evidence nodes into our definition of \psi_i(x_i). The standard BP update rules are:

m_{ij}(x_j) \leftarrow \alpha \sum_{x_i} \psi_{ij}(x_i, x_j) \psi_i(x_i) \prod_{k \in N(i) \setminus j} m_{ki}(x_i)   (2)

b_i(x_i) \leftarrow \alpha \psi_i(x_i) \prod_{k \in N(i)} m_{ki}(x_i)   (3)

where \alpha denotes a normalization constant and N(i) \setminus j means all nodes neighboring node i, except j. Here m_{ij} refers to the message that node i sends to node j and b_i is the belief (approximate marginal posterior probability) at node i, obtained by multiplying all incoming messages to that node by the local evidence. Similarly, we can define the belief b_{ij}(x_i, x_j) at the pair of nodes (x_i, x_j) as the product of the local potentials and all messages incoming to the pair of nodes: b_{ij}(x_i, x_j) = \alpha \phi_{ij}(x_i, x_j) \prod_{k \in N(i) \setminus j} m_{ki}(x_i) \prod_{l \in N(j) \setminus i} m_{lj}(x_j), where \phi_{ij}(x_i, x_j) = \psi_{ij}(x_i, x_j) \psi_i(x_i) \psi_j(x_j).

Claim 1: Let \{m_{ij}\} be a set of BP messages and let \{b_{ij}, b_i\} be the beliefs calculated from those messages. Then the beliefs are fixed-points of the BP algorithm if and only if they are zero gradient points of the Bethe free energy F_\beta:

F_\beta = \sum_{ij} \sum_{x_i, x_j} b_{ij}(x_i, x_j) \left[ \ln b_{ij}(x_i, x_j) - \ln \phi_{ij}(x_i, x_j) \right] - \sum_i (q_i - 1) \sum_{x_i} b_i(x_i) \left[ \ln b_i(x_i) - \ln \psi_i(x_i) \right]

subject to the normalization and marginalization constraints: \sum_{x_i} b_i(x_i) = 1 and \sum_{x_i} b_{ij}(x_i, x_j) = b_j(x_j). (q_i is the number of neighbors of node i.)

To prove this claim we add Lagrange multipliers to form a Lagrangian L: \lambda_{ij}(x_j) is the multiplier corresponding to the constraint that b_{ij}(x_i, x_j) marginalizes down to b_j(x_j), and \gamma_{ij}, \gamma_i are multipliers corresponding to the normalization constraints. The equation \partial L / \partial b_{ij}(x_i, x_j) = 0 gives: \ln b_{ij}(x_i, x_j) = \ln \phi_{ij}(x_i, x_j) + \lambda_{ij}(x_j) + \lambda_{ji}(x_i) + \gamma_{ij} - 1. The equation \partial L / \partial b_i(x_i) = 0 gives: (q_i - 1)(\ln b_i(x_i) + 1) = \ln \psi_i(x_i) + \sum_{j \in N(i)} \lambda_{ji}(x_i) + \gamma_i. Setting \lambda_{ij}(x_j) = \ln \prod_{k \in N(j) \setminus i} m_{kj}(x_j) and using the marginalization constraints, we find that the stationary conditions on the Lagrangian are equivalent to the BP fixed-point conditions.
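As a concrete illustration, the update rules of Eqs. (2)-(3) fit in a few lines of code. This is not the authors' implementation; it is a minimal parallel-update BP sketch in Python, with node and pair potentials held in plain dictionaries (all function and variable names are our own):

```python
import numpy as np

def loopy_bp(psi_pair, psi_node, n_states, n_iter=100):
    """Parallel BP updates, Eqs. (2)-(3), on a pairwise Markov network.

    psi_node: dict node -> local evidence vector psi_i(x_i)
    psi_pair: dict (i, j) -> compatibility matrix psi_ij(x_i, x_j)
    """
    nodes = sorted(psi_node)
    nbrs = {i: [] for i in nodes}
    for (i, j) in psi_pair:
        nbrs[i].append(j)
        nbrs[j].append(i)
    # one message per directed edge, initialised uniform
    msg = {}
    for (i, j) in psi_pair:
        msg[(i, j)] = np.ones(n_states) / n_states
        msg[(j, i)] = np.ones(n_states) / n_states

    def pot(i, j):  # psi_ij with rows indexed by x_i, columns by x_j
        return psi_pair[(i, j)] if (i, j) in psi_pair else psi_pair[(j, i)].T

    for _ in range(n_iter):
        new = {}
        for (i, j) in msg:
            prod = psi_node[i].copy()
            for k in nbrs[i]:
                if k != j:
                    prod = prod * msg[(k, i)]   # messages from N(i)\j
            m = pot(i, j).T @ prod              # Eq. (2): sum over x_i
            new[(i, j)] = m / m.sum()           # alpha normalises the message
        msg = new

    beliefs = {}
    for i in nodes:
        b = psi_node[i].copy()
        for k in nbrs[i]:
            b = b * msg[(k, i)]                 # Eq. (3)
        beliefs[i] = b / b.sum()
    return beliefs
```

On any tree-structured network the beliefs returned this way coincide with the exact marginals of Eq. (1), which provides a simple sanity check of the implementation.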
(Empirically, we find that stable BP fixed-points correspond to local minima of the Bethe free energy, rather than maxima or saddle-points.)

2.1 Implications

The fact that F_\beta(\{b_{ij}, b_i\}) is bounded below implies that the BP equations always possess a fixed-point (obtained at the global minimum of F_\beta). To our knowledge, this is the first proof of existence of fixed-points for a general graph with arbitrary potentials (see [9] for a complicated proof for a special case). The free energy formulation clarifies the relationship to variational approaches which also minimize an approximate free energy [3]. For example, the mean field approximation finds a set of \{b_i\} that minimize:

F_{MF}(\{b_i\}) = - \sum_{ij} \sum_{x_i, x_j} b_i(x_i) b_j(x_j) \ln \psi_{ij}(x_i, x_j) + \sum_i \sum_{x_i} b_i(x_i) \left[ \ln b_i(x_i) - \ln \psi_i(x_i) \right]   (5)

subject to the constraints \sum_{x_i} b_i(x_i) = 1. The BP free energy includes first-order terms b_i(x_i) as well as second-order terms b_{ij}(x_i, x_j), while the mean field free energy uses only the first-order ones. It is easy to show that the BP free energy is exact for trees while the mean field one is not. Furthermore the optimization methods are different: typically F_{MF} is minimized directly in the primal variables \{b_i\}, while F_\beta is minimized using the messages, which are a combination of the dual variables \{\lambda_{ij}(x_j)\}. Kabashima and Saad [4] have previously pointed out the correspondence between BP and the Bethe approximation (expressed using the TAP formalism) for some specific graphical models with random disorder. Our proof answers in the affirmative their question about whether there is a "deep general link between the two methods." [4]

3 Kikuchi Approximations to the Gibbs Free Energy

The Bethe approximation, for which the energy and entropy are approximated by terms that involve at most pairs of nodes, is the simplest version of the Kikuchi "cluster variational method."
[5, 10] In a general Kikuchi approximation, the free energy is approximated as a sum of the free energies of basic clusters of nodes, minus the free energy of over-counted cluster intersections, minus the free energy of the over-counted intersections of intersections, and so on. Let R be a set of regions that includes some chosen basic clusters of nodes, their intersections, the intersections of the intersections, and so on. The choice of basic clusters determines the Kikuchi approximation: for the Bethe approximation, the basic clusters consist of all linked pairs of nodes. Let x_r be the state of the nodes in region r and b_r(x_r) be the "belief" in x_r. We define the energy of a region by E_r(x_r) \equiv -\ln \prod_{ij} \psi_{ij}(x_i, x_j) - \ln \prod_i \psi_i(x_i) \equiv -\ln \psi_r(x_r), where the products are over all interactions contained within the region r. For models with higher than pair-wise interactions, the region energy is generalized to include those interactions as well. The Kikuchi free energy is

F_K = \sum_{r \in R} c_r \left( \sum_{x_r} b_r(x_r) E_r(x_r) + \sum_{x_r} b_r(x_r) \ln b_r(x_r) \right)   (6)

where c_r is the over-counting number of region r, defined by c_r = 1 - \sum_{s \in \mathrm{super}(r)} c_s, where super(r) is the set of all super-regions of r. For the largest regions in R, c_r = 1. The belief b_r(x_r) in region r has several constraints: it must sum to one and be consistent with the beliefs in regions which intersect with r. In general, increasing the size of the basic clusters improves the approximation one obtains by minimizing the Kikuchi free energy.

4 Generalized belief propagation (GBP)

Minimizing the Kikuchi free energy subject to the constraints on the beliefs is not simple. Nearly all applications of the Kikuchi approximation in the physics literature exploit symmetries in the underlying physical system and the choice of clusters to reduce the number of equations that need to be solved from O(N) to O(1).
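The over-counting numbers c_r can be computed mechanically from the region set. A small sketch (our own helper, not the authors' code; it assumes regions are given as sets of node labels and that the set R is closed under intersection):

```python
def overcounting_numbers(regions):
    """Over-counting numbers c_r = 1 - sum_{s in super(r)} c_s (cf. Eq. 6).

    `regions` is an iterable of frozensets of node labels, assumed to
    contain the basic clusters, their intersections, the intersections
    of intersections, and so on.
    """
    regions = sorted(set(regions), key=len, reverse=True)
    c = {}
    for r in regions:  # largest regions first, so super-regions are done
        supers = [s for s in regions if r < s]  # strict super-regions of r
        c[r] = 1 - sum(c[s] for s in supers)
    return c
```

For the Bethe case, where the basic clusters are the linked pairs and their intersections are the single nodes, this recursion gives c_r = 1 for each pair and c_i = 1 - q_i for a node with q_i neighbors, matching the (q_i - 1) factors in the Bethe free energy of Section 2.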
But just as the Bethe free energy can be minimized by the BP algorithm, we introduce a class of analogous generalized belief propagation (GBP) algorithms that minimize an arbitrary Kikuchi free energy. These algorithms represent an advance in physics, in that they open the way to the exploitation of Kikuchi approximations for inhomogeneous physical systems. There are in fact many possible GBP algorithms which all correspond to the same Kikuchi approximation. We present a "canonical" GBP algorithm which has the nice property of reducing to ordinary BP at the Bethe level. We introduce messages m_{rs}(x_s) between all regions r and their "direct sub-regions" s. (Define the set subd(r) of direct sub-regions of r to be those regions that are sub-regions of r but have no super-regions that are also sub-regions of r, and similarly for the set superd(r) of "direct super-regions.") It is helpful to think of this as a message from those nodes in r but not in s (which we denote by r\s) to the nodes in s. Intuitively, we want messages to propagate information that lies outside of a region into it. Thus, for a given region r, we want the belief b_r(x_r) to depend on exactly those messages m_{r's'} that start outside of the region r and go into the region r. We define this set of messages M(r) to be those messages m_{r's'}(x_{s'}) such that region r'\s' has no nodes in common with region r, and such that region s' is a sub-region of r or the same as region r. We also define the set M(r, s) of messages to be all those messages that start in a sub-region of r and also belong to M(s), and we define M(r)\M(s) to be those messages that are in M(r) but not in M(s). The canonical generalized belief propagation update rules are:

m_{rs} \leftarrow \alpha \left[ \sum_{x_{r \setminus s}} \psi_{r \setminus s}(x_{r \setminus s}) \prod_{m_{r''s''} \in M(r) \setminus M(s)} m_{r''s''} \right] \Big/ \prod_{m_{r's'} \in M(r, s)} m_{r's'}   (7)

b_r \leftarrow \alpha \psi_r(x_r) \prod_{m_{r's'} \in M(r)} m_{r's'}   (8)

where for brevity we have suppressed the functional dependences of the beliefs and messages.
The messages are updated starting with the messages into the smallest regions first. One can then use the newly computed messages in the product over M(r, s) of the message-update rule. Empirically, this helps convergence.

Claim 2: Let \{m_{rs}(x_s)\} be a set of canonical GBP messages and let \{b_r(x_r)\} be the beliefs calculated from those messages. Then the beliefs are fixed-points of the canonical GBP algorithm if and only if they are zero gradient points of the constrained Kikuchi free energy F_K.

We prove this claim by adding Lagrange multipliers: \gamma_r to enforce the normalization of b_r, and \lambda_{rs}(x_s) to enforce the consistency of each region r with all of its direct sub-regions s. This set of consistency constraints is actually more than sufficient, but there is no harm in adding extra constraints. We then rotate to another set of Lagrange multipliers \mu_{rs}(x_s) of equal dimensionality which enforce a linear combination of the original constraints: \mu_{rs}(x_s) enforces all those constraints involving marginalizations by all direct super-regions r' of s into s, except that of region r itself. The rotation matrix is in a block form which can be guaranteed to be full rank. We can then show that the \mu_{rs}(x_s) constraint terms can be written in the form \mu_{rs}(x_s) \sum_{r' \in R(\mu_{rs})} c_{r'} \sum_{x_{r'}} b_{r'}(x_{r'}), where R(\mu_{rs}) is the set of all regions which receive the message \mu_{rs} in the belief update rule of the canonical algorithm. We then re-arrange the sum over all \mu's into a sum over all regions, which has the form \sum_{r \in R} c_r \sum_{x_r} b_r(x_r) \sum_{\mu_{rs} \in M(r)} \mu_{rs}(x_s). (M(r) is a set of \mu_{r's'} in one-to-one correspondence with the m_{r's'} in M(r).) Finally, we differentiate the Kikuchi free energy with respect to b_r(x_r), and identify \mu_{rs}(x_s) = \ln m_{rs}(x_s) to obtain the canonical GBP belief update rule, Eq. 8. Using the belief update rules in the marginalization constraints, we obtain the canonical GBP message update rules, Eq. 7.
It is clear from this proof outline that other GBP message-passing algorithms which are equivalent to the Kikuchi approximation exist. If one writes any set of constraints which are sufficient to ensure the consistency of all Kikuchi regions, one can associate the exponentiated Lagrange multipliers of those constraints with a set of messages. The GBP algorithms we have described solve exactly those networks which have the topology of a tree of basic clusters. This is reminiscent of Pearl's method of clustering [8], wherein one groups clusters of nodes into "super-nodes," and then applies a belief propagation method to the equivalent super-node lattice. We can show that the clustering method, using Kikuchi clusters as super-nodes, also gives results equivalent to the Kikuchi approximation for those lattices and cluster choices where there are no intersections between the intersections of the Kikuchi basic clusters. For those networks and cluster choices which do not obey this condition (a simple example that we discuss below is the square lattice with clusters that consist of all square plaquettes of four nodes), Pearl's clustering method must be modified by adding additional update conditions to agree with GBP algorithms and the Kikuchi approximation.

5 Application to Specific Lattices

We illustrate the canonical GBP algorithm for the Kikuchi approximation of overlapping 4-node clusters on a square lattice of nodes. Figure 1 (a), (b), (c) illustrates the beliefs at a node, pair of nodes, and at a cluster of 4 nodes, in terms of messages propagated in the network. Vectors are the single-index messages also used in ordinary BP. Vectors with line segments indicate the double-indexed messages arising from the Kikuchi approximation used here. These can be thought of as correction terms accounting for correlations between messages that ordinary BP treats as independent. (For comparison, Fig.
1 (d), (e), (f) shows the corresponding marginal computations for the triangular lattice with all triangles chosen as the basic Kikuchi clusters.) We find the message update rules by equating marginalizations of Fig. 1 (b) and (c) with the beliefs in Fig. 1 (a) and (b), respectively. Figure 2 (a) and (b) show (graphically) the resulting fixed-point equations. The update rule (a) is like that for ordinary BP, with the addition of two double-indexed messages. The update rule for the double-indexed messages involves division by the newly-computed single-indexed messages. Fixed points of these message update equations give beliefs that are stationary points (empirically minima) of the corresponding Kikuchi approximation to the free energy.

Figure 1: Marginal probabilities in terms of the node links and GBP messages. For (a) node, (b) line, (c) square cluster, using a Kikuchi approximation with 4-node clusters on a square lattice. E.g., (b) depicts (a special case of Eq. 8, written here using node labels) b_{ab}(x_a, x_b) = \alpha \psi_{ab}(x_a, x_b) \psi_a(x_a) \psi_b(x_b) times a product of the single- and double-indexed messages into the pair, where superscripts and subscripts indicate which nodes a message M goes from and to. (d), (e), (f): Marginal probabilities for the triangular lattice with 3-node Kikuchi clusters.

Figure 2: Graphical depiction of message update equations (Eq. 7; marginalize over nodes shown unfilled) for GBP using overlapping 4-node Kikuchi clusters. (a) Update equation for the single-index messages: M_a^b(x_a) = \alpha \sum_{x_b} \psi_b(x_b) \psi_{ab}(x_a, x_b) times a product of messages into node b, with the specific message indices shown in the figure. (b) Update equation for the double-indexed messages (involves a division by the single-index messages on the left-hand side).

6 Experimental Results

Ordinary BP is expected to perform relatively poorly for networks with many tight loops, conflicting interactions, and weak evidence.
We constructed such a network, known in the physics literature as the square lattice Ising spin glass in a random magnetic field. The nodes are on a square lattice, with nearest-neighbor nodes connected by a compatibility matrix of the form

\psi_{ij} = \begin{pmatrix} \exp(J_{ij}) & \exp(-J_{ij}) \\ \exp(-J_{ij}) & \exp(J_{ij}) \end{pmatrix}

and local evidence vectors of the form \psi_i = (\exp(h_i), \exp(-h_i)). To instantiate a particular network, the J_{ij} and h_i parameters are chosen randomly and independently from zero-mean Gaussian probability distributions with standard deviations J and h respectively. The following results are for n by n lattices with toroidal boundary conditions and with J = 1 and h = 0.1. This model is designed to show off the weaknesses of ordinary BP, which performs well for many other networks. Ordinary BP is a special case of canonical GBP, so we exploited this to use the same general-purpose GBP code for both ordinary BP and canonical GBP using overlapping square four-node clusters, thus making computational cost comparisons reasonable. We started with randomized messages and only stepped half-way towards the computed values of the messages at each iteration in order to help convergence. We found that canonical GBP took about twice as long as ordinary BP per iteration, but would typically reach a given level of convergence in many fewer iterations. In fact, for the majority of the dozens of samples that we looked at, BP did not converge at all, while canonical GBP always converged for this model and always to accurate answers. (We found that for the zero-field 3-dimensional spin glass with toroidal boundary conditions, which is an even more difficult model, canonical GBP with 2x2x2 cubic clusters would also fail to converge.) For n = 20 or larger, it was difficult to make comparisons with any other algorithm, because ordinary BP did not converge and Monte Carlo simulations suffered from extremely slow equilibration.
However, generalized belief propagation converged reasonably rapidly to plausible-looking beliefs. For small n, we could compare with exact results, by using Pearl's clustering method on a chain of n by 1 super-nodes. To give a qualitative feel for the results, we compare ordinary BP, canonical GBP, and the exact results for an n = 10 lattice where ordinary BP did converge. Listing the values of the one-node marginal probabilities in one of the rows, we find that ordinary BP gives (.0043807, .74502, .32866, .62190, .37745, .41243, .57842, .74555, .85315, .99632), canonical GBP gives (.40255, .54115, .49184, .54232, .44812, .48014, .51501, .57693, .57710, .59757), and the exact results were (.40131, .54038, .48923, .54506, .44537, .47856, .51686, .58108, .57791, .59881).

References

[1] W. T. Freeman and E. Pasztor. Learning low-level vision. In 7th Intl. Conf. Computer Vision, pages 1182-1189, 1999.
[2] B. J. Frey. Graphical Models for Machine Learning and Digital Communication. MIT Press, 1998.
[3] M. Jordan, Z. Ghahramani, T. Jaakkola, and L. Saul. An introduction to variational methods for graphical models. In M. Jordan, editor, Learning in Graphical Models. MIT Press, 1998.
[4] Y. Kabashima and D. Saad. Belief propagation vs. TAP for decoding corrupted messages. Euro. Phys. Lett., 44:668, 1998.
[5] R. Kikuchi. Phys. Rev., 81:988, 1951.
[6] R. McEliece, D. MacKay, and J. Cheng. Turbo decoding as an instance of Pearl's 'belief propagation' algorithm. IEEE J. on Sel. Areas in Comm., 16(2):140-152, 1998.
[7] K. Murphy, Y. Weiss, and M. Jordan. Loopy belief propagation for approximate inference: an empirical study. In Proc. Uncertainty in AI, 1999.
[8] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[9] T. J. Richardson. The geometry of turbo-decoding dynamics. IEEE Trans. Info. Theory, 46(1):9-23, Jan. 2000.
[10] Special issue on Kikuchi methods. Progr. Theor. Phys. Suppl., vol. 115, 1994.
2000
Multiple time scales of adaptation in a neural code

Adrienne L. Fairhall, Geoffrey D. Lewen, William Bialek, and Robert R. de Ruyter van Steveninck
NEC Research Institute, 4 Independence Way, Princeton, New Jersey 08540
adrienne|geoff|bialek|ruyter@research.nj.nec.com

Abstract

Many neural systems extend their dynamic range by adaptation. We examine the timescales of adaptation in the context of dynamically modulated rapidly-varying stimuli, and demonstrate in the fly visual system that adaptation to the statistical ensemble of the stimulus dynamically maximizes information transmission about the time-dependent stimulus. Further, while the rate response has long transients, the adaptation takes place on timescales consistent with optimal variance estimation.

1 Introduction

Adaptation was one of the first phenomena discovered when Adrian recorded the responses of single sensory neurons [1, 2]. Since that time, many different forms of adaptation have been found in almost all sensory systems. The simplest forms of adaptation, such as light and dark adaptation in the visual system, seem to involve just discarding a large constant background signal so that the system can maintain sensitivity to small changes. The idea of Attneave [3] and Barlow [4] that the nervous system tries to find an efficient representation of its sensory inputs implies that neural coding strategies should be adapted not just to constant parameters such as the mean light intensity, but to the entire distribution of input signals [5]; more generally, efficient strategies for processing (not just coding) of sensory signals must also be matched to the statistics of these signals [6]. Adaptation to statistics might happen on evolutionary time scales, or, at the opposite extreme, it might happen in real time as an animal moves through the world.
There is now evidence from several systems for real time adaptation to statistics [7, 8, 9], and at least in one case it has been shown that the form of this adaptation indeed does serve to optimize the efficiency of representation, maximizing the information that a single neuron transmits about its sensory inputs [10]. Perhaps the simplest of statistical adaptation experiments, as in Ref [7] and Fig. 1, is to switch between stimuli that are drawn from different probability distributions and ask how the neuron responds to the switch. When we 'repeat' the experiment we repeat the time dependence of the parameters describing the distribution, but we choose new signals from the same distributions; thus we probe the response or adaptation to the distributions and not to the particular signals. These switching experiments typically reveal transient responses to the switch that have rather long time scales, and it is tempting to identify these long time scales as the time scales of adaptation. On the other hand, one can also view the process of adapting to a distribution as one of learning the parameters of that distribution, or of accumulating evidence that the distribution has changed. Some features of the dynamics in the switching experiments match the dynamics of an optimal statistical estimator [11], but the overall time scale does not: for all the experiments we have seen, the apparent time scales of adaptation in a switching experiment are much longer than would be required to make reliable estimates of the relevant statistical parameters. In this work we re-examine the phenomena of statistical adaptation in the motion sensitive neurons of the fly visual system. Specifically, we are interested in adaptation to the variance or dynamic range of the velocity distribution [10]. 
It has been shown that, in steady state, this adaptation includes a rescaling of the neuron's input/output relation, so that the system seems to encode dynamic velocity signals in relative units; this allows the system, presumably, to deal both with the ~50°/s motions that occur in straight flight and with the ~2000°/s motions that occur during acrobatic flight (see Ref. [12]). Further, the precise form of rescaling chosen by the fly's visual system is that which maximizes information transmission. There are several natural questions: (1) How long does it take the system to accomplish the rescaling of its input/output relation? (2) Are the transients seen in switching experiments an indication of gradual rescaling? (3) If the system adapts to the variance of its inputs, is the neural signal ambiguous about the absolute scale of velocity? (4) Can we see the optimization of information transmission occurring in real time?

2 Stimulus structure and experimental setup

A fly (Calliphora vicina) is immobilized in wax and views a computer-controlled oscilloscope display while we record action potentials from the identified neuron H1 using standard methods. The stimulus movie is a random pattern of dark and light vertical bars, and the entire pattern moves along a random trajectory with velocity S(t); since the neuron is motion (and not position) sensitive we refer to this signal as the stimulus. We construct the stimulus S(t) as the product of a normalized white noise s(t), constructed from a random number sequence refreshed every T_s = 2 ms, and an amplitude or standard deviation σ(t) which varies on a characteristic timescale T_a ≫ T_s. Frames of the movie are drawn every 2 ms. For analysis all spike times are discretized at the 2 ms resolution of the movie.
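The stimulus construction just described is straightforward to reproduce in simulation. A sketch (our own code, not part of the experimental setup; the function name and interface are assumptions):

```python
import numpy as np

def make_stimulus(duration_s, sigma_fn, tau_s=0.002, seed=0):
    """Stimulus S(t) = sigma(t) * s(t): unit-variance white noise s(t),
    refreshed every tau_s = 2 ms, multiplied by a slowly varying
    amplitude sigma(t) supplied as a function of time (in seconds).

    Returns the frame times and S(t) sampled on the 2 ms frame grid.
    """
    rng = np.random.default_rng(seed)
    n = int(round(duration_s / tau_s))
    t = np.arange(n) * tau_s
    s = rng.standard_normal(n)                  # refreshed every frame
    sigma = np.array([sigma_fn(u) for u in t])  # slow modulation envelope
    return t, sigma * s
```

For the switching experiments, sigma_fn would be a square wave alternating between two amplitudes in a 5:1 ratio; the empirical standard deviation of S(t) in the two half-periods then recovers that ratio.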
~ "0 Q) 0 _!!1 til E o -1 Z a) T = 20s --- T= lOs ---T=4s -2+-------~------~------~------~ 0.00 0.25 0.50 Normalised time 1fT 0.75 1.00 10 20 30 40 Period T (sec) Figure 1: (a) The spike rate measured in response to a square-wave modulated white noise stimulus set), averaged over many presentations of set), and normalized by the mean and standard deviation. (b) Decay time of the rate following an upward switch as a function of switching period T. 3 Spike rate dynamics Switching experiments as described above correspond to a stimulus such that the amplitude (J'(t) is a square wave, alternating between two values (J'l and (J'2, (J'l > (J'2. Experiments were performed over a range of switching periods (T = 40, 20, 10, 4 s), with the amplitudes (J'l and (J'2 in a ratio of 5:1. Remarkably, the timescales of the response depend strongly on those of the experiment; in fact, the response times rescale by T, as is seen in Fig. lea). The decay of the rate in the first half of the experiment is fitted by an exponential, and in Fig. l(b), the resulting decay time T(T) is plotted as a function of T; we use an exponential not to insist that this is the correct form, only to extract a timescale. As suggested by the rescaling of Fig. lea), the fitted decay times are well described as a linear function of the stimulus period. This demonstrates that the timescale of adaptation of the rate is not absolute, but is a function of the timescale established in the experiment. Large sudden changes in stimulus variance might trigger special mechanisms, so we turn to a signal that changes variance continuously: the amplitude (J'(t) is taken to be the exponential of a sinusoid, (J'(t) = exp(asin(27l'kt)), where the period T = 11k was varied between 2 s and 240 s, and the constant a is fixed such that the amplitude varies by a factor of 10 over a cycle. A typical averaged rate response to the exponential-sinusoidal stimulus is shown in Fig. 2(a). 
The rate is close to sinusoidal over this parameter regime, indicating a logarithmic encoding of the stimulus variance. Significantly, the rate response shows a phase lead Δφ with respect to the stimulus. This may be interpreted as the effect of adaptation: at every point on the cycle, the gain of the response is set to a value defined by the stimulus a short time before.

Figure 2: (a) The spike rate measured in response to an exponential-sinusoidal modulation of a white noise stimulus s(t), averaged over presentations of s(t), and normalised by the mean and standard deviation, for several periods (T = 30, 60, 90, 120 s). (b) The time shift δ between response and stimulus, for a range of periods T.

As before, the response of the system was measured over a range of periods T. Fig. 2(b) shows the measured relation of the timeshift δ(T) = T Δφ of the response as a function of T. One observes that the relation is nearly linear over more than one order of magnitude in T; that is, the phase shift is approximately constant. Once again there is a strong and simple dependence of the apparent timescale of adaptation on the stimulus parameters. Responses to stimulus sequences composed of many frequencies also exhibit a phase shift, consistent with that observed for the single-frequency experiments.

4 The dynamic input-output relation

Both the switching and sinusoidally modulated experiments indicate that responses to changing the variance of input signals have multiple time scales, ranging from a few seconds to several minutes. Does it really take the system this long to adjust its input/output relation to the new input distribution?
In the range of velocities used, and at the contrast level used in the laboratory, spiking in H1 depends on features of the velocity waveform that occur within a window of ~100 ms. After a few seconds, then, the system has had access to several tens of independent samples of the motion signal, and should be able to estimate its variance to within ~20%; after a minute the precision would be better than a few percent. In practice, we are changing the input variance not by a few percent but by a factor of two or ten; if the system were really efficient, these changes would be detected and compensated by adaptation on much shorter time scales. To address this, we look directly at the input/output relation as the standard deviation σ(t) varies in time. For simplicity we analyze (as in Ref. [10]) features of the stimulus that modulate the probability of occurrence of individual spikes, P(spike|stimulus); we will not consider patterns of spikes, although the same methods can be easily generalised. The space of stimulus histories of length ~100 ms, discretised at 2 ms, leading up to a spike has a dimensionality of ~50, too large to allow adequate sampling of P(spike|stimulus) from the data, so we must begin by reducing the dimensionality of the stimulus description. The simplest way to do so is to find a subset of directions in stimulus space determined to be relevant for the system, and to project the stimulus onto that set of directions. These directions correspond to linear filters. Such a set of directions can be obtained from the moments of the spike-conditional stimulus; the first such moment is the spike-triggered average, or reverse correlation function [2]. It has been shown [10] that for H1, under these conditions, there are two relevant dimensions: a smoothed version of the velocity, and also its derivative.
The rescaling observed in steady state experiments was seen to occur independently in both dimensions, so without loss of generality we will use as our filter the single dimension given by the spike-triggered average. The stimulus projected onto this filter will be denoted by s_0. The filtered stimulus is then passed through a nonlinear decision process akin to a threshold. To calculate the input/output relation P(spike|s_0) [10], we use Bayes' rule:

P(spike|s_0) = P(spike) P(s_0|spike) / P(s_0).   (1)

The spike rate r(s_0) is proportional to the probability of spiking, r(s_0) ∝ P(spike|s_0), leading to the relation

r(s_0)/r̄ = P(s_0|spike) / P(s_0),   (2)

where r̄ is the mean spike rate. P(s_0) is the prior distribution of the projected stimulus, which we know. The distribution P(s_0|spike) is estimated from the projected stimulus evaluated at the spike times, and the ratio of the two is the nonlinear input/output relation. A number of experiments have shown that the filter characteristics of H1 are adaptive, and we see this in the present experiments as well: as the amplitude σ(t) is decreased, the filter changes both in overall amplitude and shape. The filter becomes increasingly extended: the system integrates over longer periods of time under conditions of low velocities. Thus the filter depends on the input variance, and we expect that there should be an observable relaxation of the filter to its new steady state form after a switch in variance. We find, however, that within 200 ms following the switch, the amplitude of the filter has already adjusted to the new variance, and further that the detailed shape of the filter has attained its steady state form in less than 1 s. The precise timescale of the establishment of the new filter shape depends on the value of σ: for the change to σ_1, the steady state form is achieved within 200 ms. The long tail of the low variance filter for σ_2 (≪ σ_1) is established more slowly.
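Eq. (2) suggests a simple histogram estimator for the input/output relation: bin the projected stimulus s_0 and divide the spike-conditional density by the prior density. A sketch of such an estimator (our own code, not the authors'; quantile-based bin edges keep P(s_0) well sampled in every bin):

```python
import numpy as np

def io_relation(s0, spike, n_bins=20):
    """Histogram estimate of r(s0)/rbar = P(s0|spike)/P(s0), cf. Eq. (2).

    s0: projected stimulus, one value per time bin
    spike: binary spike train on the same time grid
    """
    edges = np.quantile(s0, np.linspace(0.0, 1.0, n_bins + 1))
    p_prior, _ = np.histogram(s0, edges, density=True)            # P(s0)
    p_spike, _ = np.histogram(s0[spike.astype(bool)], edges,
                              density=True)                       # P(s0|spike)
    centers = 0.5 * (edges[:-1] + edges[1:])
    gain = np.where(p_prior > 0, p_spike / p_prior, 0.0)
    return centers, gain
```

Feeding the estimator simulated spikes generated from a known nonlinearity recovers that nonlinearity up to binning, which is a convenient check before applying it to data.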
Nonetheless, these time scales which characterize adaptation of the filter are much shorter than the rate transients seen in the switching experiments, and are closer to what we might expect for an efficient estimator. We construct time-dependent input/output relations by forming conditional distributions using spikes from particular time slices in a periodic experiment. In Figs. 3.1(b) and 3.1(c), we show the input/output relation calculated in 1 s bins throughout the switching experiment. Within the first second the input/output relation is almost indistinguishable from its steady state form. Further, it takes the same form for the two halves of the experiment: it is rescaled by the standard deviation, as was seen for the steady state experiments. The close collapse or rescaling of the input/output relations depends not only on the normalisation by the standard deviation, but also on the use of the "local" adapted filter (i.e. measured in the same time bin). Returning to the sinusoidal experiments, the input/output relations were constructed for T = 45 s in 20 non-overlapping bins of width 2.25 s. Once again the functions show a remarkable rescaling which is sharpened by the use of the appropriate local filter: see Fig. 3.2(b) and (c).

Figure 3: Input/output relations for (a) switching, (b) sinusoidal and (c) randomly modulated experiments. Figs. 3.1 show the modulation envelope σ(t), in log for (b) and (c) (solid), and the measured rate (dotted), normalised by mean and standard deviation. Figs. 3.2 show input/output relations calculated in non-overlapping bins throughout the stimulus cycle, with the input s_0 in units of the standard deviation of the whole stimulus. Figs. 3.3 show the input/output relations with the input rescaled to units of the local standard deviation.
Finally, we consider an amplitude which varies randomly with correlation time τ_σ ≈ 3 s: σ(t) is a repeated segment of the exponential of a Gaussian random process, pictured in Fig. 3.1(c), with periodicity T = 90 s ≫ τ_σ. Dividing the stimulus into sequential bins of 2 s in width, we obtain the filters for each timeslice, and calculate the local prior distributions, which are not Gaussian in this case as they are distorted by the local variations of σ(t). Nonetheless, the ratio P(s_0|spike)/P(s_0) conspires such that the form of the input/output relation is preserved. In all three cases, our results show that the system rapidly and continuously adjusts its coding strategy, rescaling the input/output relation with respect to the local variance of the input as for steady state stimuli. Variance normalisation occurs as rapidly as is measurable, and the system chooses a similar form for the input/output relation in each case.

5 Information transmission

What does this mean for the coding efficiency of the neuron? An experiment was designed to track the information transmission as a function of time. We use a small set of N 2 s long random noise sequences {s_i(t)}, i = 1, ..., N, presented in random order at two different amplitudes, σ_1 and σ_2. We then ask how much information the spike train conveys about (a) which of the random segments s_i(t) and (b) which of the amplitudes σ_j was used. Specifically, the experiment consists of a series of trials of length 2 s where the fast component is one of the sequences {s_i}, and after 1 s, the amplitude switches from σ_1 to σ_2 or vice versa. N was taken to be 40, so that a 2 hour experiment provides approximately 80 samples for each (s_i, σ_j). This allows us to measure the mutual information between the response and either the fast or the slow component of the stimulus as a function of time across the 2 s repeated segment.
We use only a very restricted subspace of σ and s: the maximum available information about σ is 1 bit, and about s is log_2 N. The spike response is represented by "words" [13], generated from the spike times discretised to timebins of 2 ms, where no spike is represented by 0, and a spike by 1. A word is defined as the binary number formed from 10 consecutive bins, so there are 2^10 possible words. The information about the fast component s in trials of a given σ is

I_σ(w(t); s) = H[P_σ(w(t))] − Σ_{j=1}^{N} P(s_j) H[P_σ(w(t) | s_j)],    (3)

where H is the entropy of the word distribution:

H[P(w(t))] = − Σ_k P(w_k(t)) log_2 P(w_k(t)).    (4)

One can compare this information for different values of σ. Similarly, one can calculate the information about the amplitude using a given probe s:

I_s(w(t); σ) = H[P_s(w(t))] − Σ_{j=1}^{2} P(σ_j) H[P_s(w(t) | σ_j)].    (5)

The amount of information for each s_j varies rapidly depending on the presence or absence of spikes, so we average these contributions over the {s_j} to give I(w; σ).

Figure 4: Information per spike as a function of time where σ is switched every 2 s. (Curves: I(w;s) for upward switches, I(w;s) for downward switches, and I(w;σ); horizontal axis: time relative to switch, in seconds.)

The mutual information as a function of time is shown in Fig. 4, presented as bits/spike. As one would expect, the amount of information transmitted per second about the stimulus details, or s, depends on the ensemble parameter σ: larger velocities allow a higher SNR for velocity estimation, and the system is able to transmit more information. However, when we convert the information rate to bits/spike, we find that the system is transmitting at a constant efficiency of around 1.4 bits/spike. Any change in information rate during a switch from σ_1 to σ_2 is undetectable. For a switch from σ_2 to σ_1, the time to recovery is of order 100 ms. This demonstrates explicitly that the system is indeed rapidly maximising its information transmission.
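Equations (3)-(5) amount to differences of word-distribution entropies. A minimal sketch of this computation (our own naming; it assumes equiprobable stimuli and ignores finite-sampling bias corrections, which a careful analysis would include):

```python
import numpy as np
from collections import Counter

def entropy(counts):
    """Entropy in bits of a distribution given by occurrence counts."""
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def words(spike_binary, L=10):
    """Binary spike vector (0/1 per 2 ms bin) -> integer L-bin words."""
    return [int("".join(map(str, spike_binary[i:i + L])), 2)
            for i in range(len(spike_binary) - L + 1)]

def info_about_stimulus(responses_by_stim):
    """I(w;s) = H[P(w)] - sum_j P(s_j) H[P(w|s_j)], where responses_by_stim
    maps each stimulus id s_j to the list of words observed on its trials."""
    all_words = [w for ws in responses_by_stim.values() for w in ws]
    h_total = entropy(Counter(all_words))
    h_cond = np.mean([entropy(Counter(ws)) for ws in responses_by_stim.values()])
    return h_total - h_cond
```

If two stimuli produce perfectly distinct, reliable words, this returns 1 bit; if the responses are identical, it returns 0, matching the intuition behind Eq. (3).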
Further, the transient "excess" of spikes following an upward switch provides information at a constant rate per spike. The information about the amplitude, similarly, remains at a constant level throughout the trial. Thus, information about the ensemble variable is retained at all times: the response is not ambiguous with respect to the absolute scale of velocity. Despite the rescaling of input/output curves, responses within different ensembles are distinguishable.

6 Discussion

We find that the neural response to a stimulus with well-separated timescales S(t) = σ(t)s(t) takes the form of a rate⊗timing code, where the response r(t) may be approximately modelled as

r(t) = R[σ(t)] g(s(t)).    (6)

Here R modulates the overall rate and depends on the slow dynamics of the variance envelope, while the precise timing of a given spike in response to fast events in the stimulus is determined by the nonlinear input/output relation g, which depends only on the normalised quantity s(t). Through this apparent normalisation by the local standard deviation, g, as for steady-state experiments, maximises information transmission about the fast components of the stimulus. The function R modulating the rate varies on much slower timescales so cannot be taken as an indicator of the extent of the system's adaptation to a new ensemble. Rather, R appears to function as an independent degree of freedom, capable of transmitting information, at a slower rate, about the slow stimulus modulations. The presence of many timescales in R may itself be an adaptation to the many timescales of variation in natural signals. At the same time, the rapid readjustment of the input/output relation - and the consequent recovery of information after a sudden change in σ - indicates that the adaptive mechanisms approach the limiting speed set by the need to gather statistics.

Acknowledgments

We thank B. Agüera y Arcas, N. Brenner and T. Adelman for helpful discussions.

References

[1] E.
Adrian (1928) The Basis of Sensation (London: Christophers).
[2] F. Rieke, D. Warland, R. de Ruyter van Steveninck and W. Bialek (1997) Spikes: Exploring the Neural Code (Cambridge, MA: MIT Press).
[3] F. Attneave (1954) Psych. Rev. 61, 183-193.
[4] H. B. Barlow (1961) in Sensory Communication, W. A. Rosenbluth, ed. (Cambridge, MA: MIT Press), pp. 217-234.
[5] S. B. Laughlin (1981) Z. Naturforsch. 36c, 910-912.
[6] M. Potters and W. Bialek (1994) J. Phys. I France 4, 1755-1775.
[7] S. Smirnakis, M. Berry, D. Warland, W. Bialek and M. Meister (1997) Nature 386, 69-73.
[8] J. H. van Hateren (1997) Vision Research 37, 3407-3416.
[9] R. R. de Ruyter van Steveninck, W. Bialek, M. Potters, and R. H. Carlson (1994) Proc. of the IEEE International Conference on Systems, Man and Cybernetics, 302-307.
[10] N. Brenner, W. Bialek and R. de Ruyter van Steveninck (2000) Neuron 26, 695-702.
[11] M. deWeese and A. Zador (1998) Neural Comp. 10, 1179-1202.
[12] C. Schilstra and J. H. van Hateren (1999) J. Exp. Biol. 202, 1481-1490.
[13] S. Strong, R. Koberle, R. de Ruyter van Steveninck and W. Bialek (1998) Phys. Rev. Lett. 80, 197-200.
2000
Speech Denoising and Dereverberation Using Probabilistic Models

Hagai Attias, John C. Platt, Alex Acero, Li Deng
Microsoft Research, 1 Microsoft Way, Redmond, WA 98052
{hagaia,jplatt,alexac,deng}@microsoft.com

Abstract

This paper presents a unified probabilistic framework for denoising and dereverberation of speech signals. The framework transforms the denoising and dereverberation problems into Bayes-optimal signal estimation. The key idea is to use a strong speech model that is pre-trained on a large data set of clean speech. Computational efficiency is achieved by using variational EM, working in the frequency domain, and employing conjugate priors. The framework covers both single and multiple microphones. We apply this approach to noisy reverberant speech signals and get results substantially better than standard methods.

1 Introduction

This paper presents a statistical-model-based algorithm for reconstructing a speech source from microphone signals recorded in a stationary noisy reverberant environment. Speech enhancement in a realistic environment is a challenging problem, which remains largely unsolved in spite of more than three decades of research. Speech enhancement has many applications and is particularly useful for robust speech recognition [7] and for telecommunication. The difficulty of speech enhancement depends strongly on environmental conditions. If a speaker is close to a microphone, reverberation effects are minimal and traditional methods can handle typical moderate noise levels. However, if the speaker is far away from a microphone, there are more severe distortions, including large amounts of noise and noticeable reverberation. Denoising and dereverberation of speech in this condition has proven to be a very difficult problem [4]. Current speech enhancement methods can be placed into two categories: single-microphone methods and multiple-microphone methods. A large body of literature exists on single-microphone speech enhancement methods.
These methods often use a probabilistic framework with statistical models of a single speech signal corrupted by Gaussian noise [6, 8]. These models have not been extended to dereverberation or multiple microphones. Multiple-microphone methods start with microphone array processing, where an array of microphones with a known geometry is deployed to make both spatial and temporal measurements of sounds. A microphone array offers significant advantages compared to single microphone methods. Non-adaptive algorithms can denoise a signal reasonably well, as long as it originates from a limited range of azimuth. These algorithms do not handle reverberation, however. Adaptive algorithms can handle reverberation to some extent [4], but existing methods are not derived from a principled probabilistic framework and hence may be sub-optimal. Work on blind source separation has attempted to remove the need for fixed array geometries and pre-specified room models. Blind separation attempts the full multi-source, multi-microphone case. In practice, the most successful algorithms concentrate on instantaneous noise-free mixing with the same number of sources as sensors and with very weak probabilistic models for the source [5]. Some algorithms for noisy non-square instantaneous mixing have been developed [1], as well as algorithms for convolutive, square, noise-free mixing [9]. However, the full problem including noise and convolution has so far remained open. In this paper, we present a new method for speech denoising and dereverberation. We use the framework of probabilistic models, which allows us to integrate the different aspects of the whole problem, including strong speech models, environmental noise and reverberation, and microphone arrays. This integration is performed in a principled manner facilitating a coherent unified treatment. The framework allows us to produce a Bayes-optimal estimation algorithm.
Using a strong speech model leads to computational intractability, which we overcome using a variational approach. The computational efficiency is further enhanced by working in the frequency domain and by employing conjugate priors. The resulting algorithm has complexity O(N log N). Results on noisy speech show significant improvement over standard methods. Due to space limitations, the full derivation and mathematical details for this method are provided in the technical report [3].

Notation and conventions. We work with time series data using a frame-by-frame analysis with N-point frames. Thus, all signals and systems, e.g. y_n^i, have a time point subscript extending over n = 0, ..., N − 1. With the superscript i omitted, y_n denotes all microphone signals. When n is also omitted, y denotes all signals at all time points. Superscripts may become subscripts and vice versa when no confusion arises. The discrete Fourier transform (DFT) of x_n is X_k = Σ_n exp(−iω_k n) x_n. We define the primed quantity

ã_k = 1 − Σ_{n=1}^{p} e^{−iω_k n} a_n    (1)

for variables a_n with n = 1, ..., p. The Gaussian distribution for a random vector a with mean μ and precision matrix V (defined as the inverse covariance matrix) is denoted N(a | μ, V). The Gamma distribution for a non-negative random variable ν with α degrees of freedom and inverse scale β is denoted G(ν | α, β) ∝ ν^{α/2−1} exp(−βν/2). Their product, the Normal-Gamma distribution

NG(a, ν | μ, V, α, β) = N(a | μ, νV) G(ν | α, β),    (2)

turns out to be particularly useful. Notice that it relates the precision of a to ν.

Problem Formulation. We consider the case where a single speech source is present and M microphones are available. The treatment of the single-microphone case is a special case of M = 1, but is not qualitatively different. Let x_n be the signal emitted by the source at time n, and let y_n^i be the signal received at microphone i at the same time.
Then

y_n^i = h_n^i * x_n + u_n^i = Σ_m h_m^i x_{n−m} + u_n^i,    (3)

where h_m^i is the impulse response of the filter (of length K^i ≤ N) operating on the source as it propagates toward microphone i, * is the convolution operator, and u_n^i denotes the noise recorded at that microphone. Noise may originate from both microphone responses and from environmental sources. In a given environment, the task is to provide an optimal estimate of the clean speech signal x from the noisy microphone signals y^i. This requires the estimation of the convolving filters h^i and characteristics of the noise u^i. This estimation is accomplished by Bayesian inference on probabilistic models for x and u^i.

2 Probabilistic Signal Models

We now turn to our model for the speech source. Much of the work on speech denoising in the past has usually employed very simple source models: AR or ARMA descriptions [6]. One exception is [8], which uses an HMM whose observations are Gaussian AR models. These simple denoising models incorporate very little information on the structure of speech. Such an approach a priori allows any value for the model coefficients, including values that are unlikely to occur in a speech signal. Without a strong prior, it is difficult to estimate the convolving filters accurately due to identifiability problems. A source prior is especially important in the single microphone case, which estimates N clean samples plus model coefficients from N noisy samples. Thus, the absence of a strong speech model degrades reconstruction quality. The most detailed statistical speech models available are those employed by state-of-the-art speech recognition engines. These systems are generally based on mixtures of diagonal Gaussian models in the mel-cepstral domain. These models are endowed with temporal Markov dynamics and have a very large (≈ 100,000) number of states corresponding to individual atoms of speech.
However, in the mel-cepstral domain, the noisy reverberant speech has a strong non-linear relationship to the clean speech.

Physical speech production model. In this paper, we work in the linear time/frequency domain using a statistical model and take an intermediate approach regarding the model size. We model speech production with an AR(p) model:

x_n = Σ_{m=1}^{p} a_m x_{n−m} + v_n,    (4)

where the coefficients a_m are related to the physical shape of a "lossless tube" model of the vocal tract. To turn this physical model into a probabilistic model, we assume that v_n are independent zero-mean Gaussian variables with scalar precision ν. Each speech frame x = (x_0, ..., x_{N−1}) has its own parameters θ = (a_1, ..., a_p, ν). Given θ, the joint distribution of x is generally a zero-mean Gaussian, p(x | θ) = N(x | 0, A), where A is the N × N precision matrix. Specifically, the joint distribution is given by the product

p(x | θ) = Π_n N(x_n | Σ_m a_m x_{n−m}, ν).    (5)

Probabilistic model in the frequency domain. However, rather than employing this product form directly, we work in the frequency domain and use the DFT to write

p(x | θ) ∝ exp(−(ν/2N) Σ_{k=0}^{N−1} |ã_k|² |X_k|²),    (6)

where ã_k is defined in (1). The precision matrix A is now given by an inverse DFT, A_{nm} = (ν/N) Σ_k e^{iω_k(n−m)} |ã_k|². This matrix belongs to a sub-class of Toeplitz matrices called circulant Toeplitz. It follows from (6) that the mean power spectrum of x is related to θ via S_k = ⟨|X_k|²⟩ = N/(ν |ã_k|²).

Conjugate priors. To complete our speech model, we must specify a distribution over the speech production parameters θ. We use an S-state mixture model with a Normal-Gamma distribution (2) for each component s = 1, ..., S: p(θ | s) = N(a_1, ..., a_p | μ_s, νV_s) G(ν | α_s, β_s). This form is chosen by invoking the idea of a conjugate prior, which is defined as follows.
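The spectrum relation S_k = N/(ν|ã_k|²) can be computed directly from the AR coefficients via Eq. (1). A small sketch of this computation (our own naming: `a` holds a_1, ..., a_p and `v` is the excitation precision ν):

```python
import numpy as np

def ar_power_spectrum(a, v, N):
    """Mean power spectrum S_k = N / (v * |a_tilde_k|^2) of the AR(p)
    model x_n = sum_m a_m x_{n-m} + v_n, using a_tilde_k from Eq. (1)."""
    p = len(a)
    k = np.arange(N)
    omega = 2 * np.pi * k / N                      # DFT frequencies
    n = np.arange(1, p + 1)
    # a_tilde_k = 1 - sum_{n=1}^p exp(-i omega_k n) a_n
    a_tilde = 1 - np.exp(-1j * np.outer(omega, n)) @ np.asarray(a)
    return N / (v * np.abs(a_tilde) ** 2)
```

For an AR(1) model with a_1 close to 1, the spectrum is strongly lowpass, consistent with the "lossless tube" interpretation of the coefficients shaping the spectral envelope.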
Given the model p(x | θ) p(θ | s), the prior p(θ | s) is conjugate to p(x | θ) iff the posterior p(θ | x, s), computed by Bayes' rule, has the same functional form as the prior. This choice has the advantage of being quite general while keeping the clean speech model analytically tractable. It turns out, as discussed below, that significant computational savings result if we restrict the p × p precision matrices V_s to have a circulant Toeplitz structure. To do this without having to impose an explicit constraint, we reparametrize p(θ | s) in terms of ξ_k^s, η_k^s instead of μ_n^s, V_{nm}^s, and work in the frequency domain:

p(θ | s) ∝ exp(−(ν/2p) Σ_{k=0}^{p−1} |ξ_k^s ã_k − η_k^s|²) ν^{α_s/2−1} exp(−β_s ν/2).    (7)

Note that we use a p- rather than N-point DFT. The precisions are now given by the inverse DFT V_{nm}^s = (1/p) Σ_k e^{iω_k(n−m)} |ξ_k^s|² and are manifestly circulant. It is easy to show that conjugacy still holds. Finally, the mixing fractions are given by p(s) = π_s. This completes the specification of our clean speech model p(x) in terms of the latent variable model p(x, θ, s) = p(x | θ) p(θ | s) p(s). The model is parametrized by W = (ξ_k^s, η_k^s, α_s, β_s, π_s).

Speech model training. We pre-train the speech model parameters W using 10,000 sentences of the Wall Street Journal corpus, recorded with a close-talking microphone for 150 male and female speakers of North American English. We used 16 ms overlapping frames with N = 256 time points at a 16 kHz sampling rate. Training was performed using an EM algorithm derived specifically for this model [3]. We used S = 256 clusters and p = 12. W was initialized by extracting the AR(p) coefficients from each frame using the autocorrelation method. These coefficients were converted into cepstral coefficients, and clustered into S classes by k-means clustering. We then considered the corresponding hard clusters of the AR(p) coefficients, and separately fit a model p(θ | s) (7) to each.
The resulting parameters were used as initial values for the full EM algorithm.

Noise model. In this paper, we use an AR(q) description for the noise recorded by microphone i, u_n^i = Σ_m b_m^i u_{n−m}^i + w_n^i. The noise parameters are φ^i = (b_m^i, λ^i), where λ^i are the precisions of the zero-mean Gaussian excitations w_n^i. In the frequency domain we have the joint distribution

p(u^i | φ^i) ∝ exp(−(λ^i/2N) Σ_{k=0}^{N−1} |b̃_k^i|² |U_k^i|²).    (8)

As in (6), the parameters φ^i determine the spectra of the noise. But unlike the speech model, the AR(q) noise model is chosen for mathematical convenience rather than for its relation to an underlying physical model.

Noisy speech model. The form (8) now implies that given the clean speech x, the distribution of the data y^i is

p(y^i | x) ∝ exp(−(λ^i/2N) Σ_{k=0}^{N−1} |b̃_k^i|² |Y_k^i − h̃_k^i X_k|²).    (9)

This completes the specification of our noisy speech model p(y) in terms of the joint distribution Π_i p(y^i | x) p(x | θ) p(θ | s) p(s).

3 Variational Speech Enhancement (VSE) Algorithm

The denoising and dereverberation task is accomplished by estimating the clean speech x, which requires estimating the speech parameters θ, the filter coefficients h^i, and the noise parameters φ^i. These tasks can be performed by the EM algorithm. This algorithm receives the data y^i from an utterance (a long sequence of frames) as input and proceeds iteratively. In the E-step, the algorithm computes the sufficient statistics of the clean speech x and the production parameters θ for each frame. In the M-step, the algorithm uses the sufficient statistics to update the values of h^i and φ^i, which are assumed unchanged throughout the utterance. This assumption limits the current VSE algorithm to stationary noise and reverberation. Source reconstruction is performed as a by-product of the E-step.

Intractability and variational EM. In the clean speech model p(x) above, inference (i.e. computing p(s, θ | x) for the observed clean speech x) is tractable.
However, in the noisy case, x is hidden and consequently inference becomes intractable. The posterior p(s, θ, x | y) includes a quartic term exp(x²θ²), originating from the product of two Gaussian variables, which causes the intractability. To overcome this problem, we employ a variational approach [10]. We replace the exact posterior distribution over the hidden variables by an approximate one, q(s, θ, x | y), and select the optimal q by maximizing

F[q] = Σ_s ∫ dθ dx q(s, θ, x | y) log [p(s, θ, x, y) / q(s, θ, x | y)]    (10)

w.r.t. q. To achieve tractability, we must restrict the space of possible q. We use the partially factorized form

q = q(s) q(θ | s) q(x | s),    (11)

where the y-dependence of q is omitted. Given y, this distribution defines a mixture model for x and a mixture model for θ, while maintaining correlations between x and θ (i.e., q(x, θ) ≠ q(x) q(θ)). Maximizing F is equivalent to minimizing the KL distance between q and the exact conditional p(s, θ, x | y) under the restriction (11). With no further restriction, the functional form of q falls out of free-form optimization, as shown in [2]. For the production parameters, q(θ | s) turns out to have the form q(θ | s) = N(a_1, ..., a_p | μ̂_s, νV̂_s) G(ν | α̂_s, β̂_s). This form is functionally identical to that of the prior p(θ | s), consistent with the conjugate prior idea. The parameters of q are distinguished from the prior's by the ^ symbol. Similarly, the state responsibilities are q(s) = π̂_s. For the clean speech, we obtain Gaussians, q(x | s) = N(x | ρ_s, Λ_s), with state-dependent means and precisions.

E-step and Wiener filtering. To derive the E-step, we first ignore reverberation by setting h_n = δ_{n,0} and assuming a single microphone signal y_n, thus omitting i. The extension to multiple microphones and reverberation is straightforward. The parameters of q are estimated at the E-step from the noisy speech in each frame, using an iterative algorithm.
First, the parameters of q(θ | s) are updated via

V̂_s = R_s + V_s,   μ̂_s = V̂_s^{−1}(r_s + V_s μ_s),    (12)

where R_{nm} = (1/N) Σ_k e^{iω_k(n−m)} E_s(|X_k|²), r_n = R_{n0}, and E_s denotes averaging w.r.t. q(x | s), which is easily done analytically. The update rules for α̂_s, β̂_s, π̂_s are shown in [3]. Next, the parameters of q(x | s) are obtained by inverse DFT via

ρ_{s,k} = f_k^s Y_k,    (13)

where f_k^s = λ|b̃_k|²/g_k^s, and g_k^s = λ|b̃_k|² + E_s(ν|ã_k|²). Here E_s denotes averaging w.r.t. q(θ | s). These steps are iterated to convergence, upon which the estimated speech signal for this frame is given by the weighted sum x̂ = Σ_s π̂_s ρ_s. We point out that the correspondence between AR parameters and spectra implies the Wiener filter form f_k^s = S_k^s/(S_k^s + N_k), where S_k^s is the estimated clean speech spectrum associated with state s, and N_k is the noise spectrum, both at frequency ω_k. Hence, the updated ρ_s in (13) is obtained via a state-dependent Wiener filter, and the clean speech is estimated by a sum of Wiener filters weighted by the state responsibilities. The same Wiener structure holds in the presence of reverberation. Notice that, whereas the conventional Wiener filter is linear and obtained directly from the known speech spectrum, our filters depend nonlinearly on the data, since the unknown speech spectra and state responsibilities are estimated iteratively by the above algorithm.

M-step. After computing the sufficient statistics of θ, x for each frame, φ^i and h^i are updated using the whole utterance. The update rules are shown in [3]. Alternatively, the φ^i can be estimated directly by maximum likelihood if a non-speech portion of the input signal can be found.

Computational savings. The complexity of the updates for q(x | s) and q(θ | s) is N log N and S p log p, respectively. This is due to working in the frequency domain, using the FFT algorithm to perform the DFT, and to using conjugate priors and circulant precisions.
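The final combination step, x̂ = Σ_s π̂_s ρ_s with per-state Wiener gains f_k^s = S_k^s/(S_k^s + N_k), can be sketched as follows. This is a simplification: here the per-state clean-speech spectra and the responsibilities are taken as given, whereas the full algorithm estimates them iteratively in the E-step (names are ours):

```python
import numpy as np

def wiener_denoise(Y, S_by_state, N_noise, weights):
    """Apply a state-dependent Wiener filter f_k^s = S_k^s/(S_k^s + N_k)
    to the noisy spectrum Y_k for each state s, then combine the
    estimates with the state responsibilities and return the
    time-domain signal."""
    Y = np.asarray(Y, dtype=complex)
    est = np.zeros_like(Y)
    for S_s, w in zip(S_by_state, weights):
        f = S_s / (S_s + N_noise)      # per-frequency Wiener gain for state s
        est += w * f * Y               # responsibility-weighted estimate
    return np.fft.ifft(est).real       # back to the time domain
```

When the assumed speech spectrum dominates the noise spectrum the gain approaches 1 and the frame passes through unchanged; when the noise dominates, the gain suppresses that frequency, which is exactly the behaviour the Wiener form dictates.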
Working in the time domain and using priors with general precisions would result in the considerably higher complexity of N² and Sp³, respectively.

4 Experiments

Denoising. We tested this algorithm on 150 speech sentences by male and female speakers from the Wall Street Journal (WSJ) database, which were not included in the training set. These sentences were distorted by adding either synthetic noise (white or pink), or noise recorded in an office environment with a PC and air conditioning. The distortions were applied at different SNRs. All of these noises were stationary. We then applied the algorithm to estimate the noise parameters and reconstruct the original speech signal. The result was compared with a sophisticated, subband-based implementation of the spectral subtraction (SS) technique.

Denoising & Dereverberation. We tested this algorithm on 100 WSJ sentences, which were distorted by convolving them with a 10-tap artificial filter and adding synthetic white Gaussian noise at different SNRs. We then applied the algorithm to estimate both the noise level and the filter. Here we used a simpler speech model with p(θ | s) = δ(θ − θ_s).

Speech Recognition. We also examined the potential contribution of this algorithm to robust speech recognition, by feeding the denoised signals as inputs to a recognition system. The system used a version of the Microsoft continuous-density HMM (Whisper), with 6000 tied HMM states (senones), 20 Gaussians per state, and the speech represented via Mel-cepstrum, delta cepstrum, and delta-delta cepstrum. A fixed bigram language model was used in all the experiments. The system had been trained on a total of 16,000 female clean speech sentences. The test set consisted of 167 female WSJ sentences, which were distorted by adding synthetic white non-Gaussian noise. The word error rate was 55.06% under the training-test mismatched condition of no preprocessing on the test set and decoding by HMMs trained with clean speech.
This condition is the baseline for the relative performance improvement listed in the last row of Table 1. For these experiments, we compared VSE to the SS algorithm described in [7]. Table 1 shows that the Variational Speech Enhancement (VSE) algorithm is superior to SS at removing stationary noise, whether measured via SNR improvement or via relative reduction in speech recognition error rate (compared to baseline).

                                         dB noise  reverb  SS          SS      VSE         VSE
                                         added     added   synthetic   real    synthetic   real
                                                           noise       noise   noise       noise
SNR improvement                          5         No      4.3         4.3     6.0         5.5
SNR improvement                          10        No      4.1         4.1     5.8         5.1
SNR improvement                          5         Yes     6.7                 10.2
SNR improvement                          10        Yes     8.3                 13.2
Speech recognition relative improvement  10        No      38.6%               65.1%

Table 1: Experimental Results.

5 Conclusion

We have presented a probabilistic framework for denoising and dereverberation. The framework uses a strong speech model to perform Bayes-optimal signal estimation. The parameter estimation and the reconstruction of the signal are performed using a variational EM algorithm. Working in the frequency domain and using conjugate priors leads to great computational savings. The framework applies equally well to one-microphone and multiple-microphone cases. Experiments show that the optimal estimation can outperform standard methods such as spectral subtraction. Future directions include adding temporal dynamics to the speech model via an HMM structure, using a richer adaptive noise model (e.g. a mixture), and handling non-stationary noise and filters.

References

[1] H. Attias. Independent factor analysis. Neural Computation, 11(4):803-851, 1999.
[2] H. Attias. A variational Bayesian framework for graphical models. In T. Leen, editor, Advances in Neural Information Processing Systems, volume 12, pages 209-215. MIT Press, 2000.
[3] H. Attias, J. C. Platt, A. Acero, and L. Deng. Speech denoising and dereverberation using probabilistic models: Mathematical details. Technical Report MSR-TR-200102, Microsoft Research, 2001.
http://research.microsoft.com/~hagaia.
[4] M. S. Brandstein. On the use of explicit speech modeling in microphone array applications. In Proc. ICASSP, pages 3613-3616, 1998.
[5] J.-F. Cardoso. Infomax and maximum likelihood for source separation. IEEE Signal Processing Letters, 4(4):112-114, 1997.
[6] A. Dembo and O. Zeitouni. Maximum a posteriori estimation of time-varying ARMA processes from noisy observations. IEEE Trans. Acoustics, Speech, and Signal Processing, 36(4):471-476, 1988.
[7] L. Deng, A. Acero, M. Plumpe, and X. D. Huang. Large-vocabulary speech recognition under adverse acoustic environments. In Proceedings of the International Conference on Spoken Language Processing, volume 3, pages 806-809, 2000.
[8] Y. Ephraim. Statistical-model-based speech enhancement systems. Proc. IEEE, 80(10):1526-1555, 1992.
[9] J. C. Platt and F. Faggin. Networks for the separation of sources that are superimposed and delayed. In J. E. Moody, editor, Advances in Neural Information Processing Systems, volume 4, pages 730-737, 1992.
[10] L. K. Saul, T. Jaakkola, and M. I. Jordan. Mean field theory of sigmoid belief networks. J. Artificial Intelligence Research, 4:61-76, 1996.
Modelling spatial recall, mental imagery and neglect

Suzanna Becker, Department of Psychology, McMaster University, 1280 Main Street West, Hamilton, Ont., Canada L8S 4K1, becker@mcmaster.ca
Neil Burgess, Department of Anatomy and Institute of Cognitive Neuroscience, UCL, 17 Queen Square, London, UK WC1N 3AR, n.burgess@ucl.ac.uk

Abstract

We present a computational model of the neural mechanisms in the parietal and temporal lobes that support spatial navigation, recall of scenes and imagery of the products of recall. Long term representations are stored in the hippocampus, and are associated with local spatial and object-related features in the parahippocampal region. Viewer-centered representations are dynamically generated from long term memory in the parietal part of the model. The model thereby simulates recall and imagery of locations and objects in complex environments. After parietal damage, the model exhibits hemispatial neglect in mental imagery that rotates with the imagined perspective of the observer, as in the famous Milan Square experiment [1]. Our model makes novel predictions for the neural representations in the parahippocampal and parietal regions and for behavior in healthy volunteers and neuropsychological patients.

1 Introduction

We perform spatial computations every day. Tasks such as reaching and navigating around visible obstacles are predominantly sensory-driven rather than memory-based, and presumably rely upon egocentric, or viewer-centered, representations of space. These representations, and the ability to translate between them, have been accounted for in several computational models of the parietal cortex, e.g. [2, 3]. In other situations, such as route planning and recall and imagery of scenes or events, one must also rely upon representations of spatial layouts from long-term memory.
Neuropsychological and neuroimaging studies implicate both the parietal and hippocampal regions in such tasks [4, 5], with the long-term memory component associated with the hippocampus. The discovery of "place cells" in the hippocampus [6] provides evidence that hippocampal representations are allocentric, in that absolute locations in open spaces are encoded irrespective of viewing direction. This paper addresses the nature and source of the spatial representations in the hippocampal and parietal regions, and how they interact during recall and navigation. We assume that in the hippocampus proper, long-term spatial memories are stored allocentrically, whereas in the parietal cortex view-based images are created on the fly during perception or recall. Intuitively it makes sense to use an allocentric representation for long-term storage, as the position of the body will have changed before recall. Alternatively, to act on a spatial location (e.g. reach with the hand) or to imagine a scene, an egocentric representation (e.g. relative to the hand or retina) is more useful [7, 8]. A study of hemispatial neglect patients throws some light on the interaction of long-term memory with mental imagery. Bisiach and Luzzatti [1] asked two patients to recall the buildings from the familiar Cathedral Square in Milan, after being asked to imagine (i) facing the cathedral, and (ii) facing in the opposite direction. Both patients, in both (i) and (ii), predominantly recalled buildings that would have appeared on their right from the specified viewpoint. Since the buildings recalled in (i) were located physically on the opposite side of the square to those recalled in (ii), the patients' long-term memory for all of the buildings in the square was apparently intact. Further, the area neglected rotated according to the patient's imagined viewpoint, suggesting that their impairment relates to the generation of egocentric mental images from a non-egocentric long-term store.
The model also addresses how information about object identity is bound to locations in space in long-term memory, i.e. how the "what" and the "where" pathways interact. Object information from the ventral visual processing stream enters the hippocampal formation (medial entorhinal cortex) via the perirhinal cortex, while visuospatial information from the dorsal pathways enters lateral entorhinal cortex primarily via the parahippocampal cortex [9]. We extend the O'Keefe & Burgess [10] hippocampal model to include object-place associations by encoding object features in perirhinal cortex (we refer to these features as texture, but they could also be attributes such as colour, shape or size). Reciprocal connections to the parahippocampus allow object features to cue the hippocampus to activate a remembered location in an environment, and conversely, a remembered location can be used to reactivate the feature information of objects at that location. The connections from parietal to parahippocampal areas allow the remembered location to be specified in egocentric imagery. Figure 1: The model architecture. Note the allocentric encoding of direction (NSEW) in parahippocampus, and the egocentric encoding of directions (LR) in medial parietal cortex. 2 The model The model may be thought of in simple terms as follows. An allocentric representation of object location is extracted from the ventral visual stream in the parahippocampus, and feeds into the hippocampus. 
The dorsal visual stream provides an egocentric representation of object location in medial parietal areas and makes bi-directional contact with the parahippocampus via posterior parietal area 7a. Inputs carrying allocentric heading direction information [11] project to both parietal and parahippocampal regions, allowing bidirectional translation from allocentric to egocentric directions. Recurrent connections in the hippocampus allow recall from long-term memory via the parahippocampus, and egocentric imagery in the medial parietal areas. We now describe the model in more detail. 2.1 Hippocampal system The architecture of the model is shown in Figure 1. The hippocampal formation (HF) consists of several regions - the entorhinal cortex, dentate gyrus, CA3, and CA1 - each of which appears to code for space with varying degrees of sparseness. To simplify, in our model the HF is represented by a single layer of "place cells", each tuned to random, fixed configurations of spatial features as in [10, 12]. Additionally, it learns to represent objects' textural features associated with a particular location in the environment. It receives these inputs from the parahippocampal cortex (PH) and perirhinal cortex (PR), respectively. The parahippocampal representation of object locations is simulated as a layer of neurons, each of which is tuned to respond whenever there is a landmark at a given distance and allocentric direction from the subject. Projections from this representation into the hippocampus drive the firing of place cells. This representation has been shown to account for the properties of place cells recorded across environments of varying shape and size [10, 12]. Recurrent connections between place cells allow subsequent pattern completion in the place cell layer. Return projections from the place cells to the parahippocampus allow reactivation of all landmark location information consistent with the current location. 
The perirhinal representation in our model consists of a layer of neurons, each tuned to a particular textural feature. This region is reciprocally connected with the hippocampal formation [13]. Thus, in our model, object features can be used to cue the hippocampal system to activate a remembered location in an environment, and conversely, a remembered location can activate all associated object textures. Further, each allocentric spatial feature unit in the parahippocampus projects to the perirhinal object feature units so that attention to one location can activate a particular object's features. 2.2 Parietal cortex Neurons responding to specific egocentric stimulus locations (e.g. relative to the eye, head or hand) have been recorded in several parietal areas. Tasks involving imagery of the products of retrieval tend to activate medial parietal areas (precuneus, posterior cingulate, retrosplenial cortex) in neuroimaging studies [14]. We hypothesize that there is a medial parietal egocentric map of space, coding for the locations of objects organised by distance and angle from the body midline. In this representation cells are tuned to respond to the presence of an object at a specific distance in a specific egocentric direction. Cells have also been reported in posterior parietal areas with egocentrically tuned responses that are modulated by variables such as eye position [15] or body orientation (in area 7a [16]). Such coding can allow translation of locations between reference frames [17, 2]. We hypothesize that area 7a performs the translation between allocentric and egocentric representations so that, as well as being driven directly by perception, the medial parietal egocentric map can be driven by recalled allocentric parahippocampal representations. We consider simply translation between allocentric and view-dependent representations, requiring a modulatory input from the head direction system. 
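The allocentric-egocentric translation described above is functionally a rotation by the heading angle. As an illustration only (not the authors' code), here is a minimal Python sketch, assuming bearings are measured in radians with 0 = north for allocentric directions and 0 = straight ahead for egocentric ones; the function names are our own:

```python
import numpy as np

def allo_to_ego(allo_dirs, heading):
    """Convert allocentric bearings to egocentric bearings by
    subtracting the heading angle, the translation the model
    attributes to area 7a (hypothetical helper, not from the paper)."""
    return np.mod(np.asarray(allo_dirs) - heading, 2 * np.pi)

def ego_to_allo(ego_dirs, heading):
    """Inverse mapping: add the heading angle back."""
    return np.mod(np.asarray(ego_dirs) + heading, 2 * np.pi)

# A landmark due north (0 rad) seen while heading east (pi/2) lies to the
# egocentric left (3*pi/2, i.e. -90 degrees).
```

In the model this rotation is not computed arithmetically but by an expanded set of representations gated by head-direction units, as described in the following section.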
A more detailed model would include translations between allocentric and body, head and eye centered representations, and possibly use of retrosplenial areas to buffer these intermediate representations [18]. The translation between parahippocampal and parietal representations occurs via a hardwired mapping of each to an expanded set of egocentric representations, each modulated by head direction so that one is fully activated for each (coarse coded) head direction (see Figure 1). With activation from the appropriate head direction unit, activation from the parahippocampal or parietal representation can activate the appropriate cell in the other representation via this expanded representation. 2.3 Simulation details The hippocampal component of the model was trained on the spatial environment shown in the top-left panel of Figure 2, representing the buildings of the Milan square. We generated a series of views of the square, as would be seen from the locations in the central filled rectangular region of this figure panel. The weights were determined as follows, in order to form a continuous attractor (after [19, 20]). From each training location, each visible edge point contributed the following to the activation of each parahippocampal (PH) cell:
$$\sum_j \frac{1}{\sqrt{2\pi\sigma_{ang}^2}}\, e^{-\frac{(\theta_i-\theta_j)^2}{2\sigma_{ang}^2}} \times \frac{1}{\sqrt{2\pi\sigma_{dir}(r_j)^2}}\, e^{-\frac{(r_i-r_j)^2}{2\sigma_{dir}(r_j)^2}} \qquad (1)$$
where $\theta_i$ and $r_i$ are the preferred object direction and distance of the $i$th PH cell, $\theta_j$ and $r_j$ represent the location of the $j$th edge point relative to the observer, and $\sigma_{ang}$ and $\sigma_{dir}(r)$ are the corresponding standard deviations (as in [10]). Here, we used $\sigma_{ang} = \pi/48$ and $\sigma_{dir}(r) = 2(r/10)^2$. The HF place cells were preassigned to cover a grid of locations in the environment, with each cell's activation falling off as a Gaussian of the distance to its preferred location. 
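The PH activation rule (Eq. 1) can be sketched directly in code. The following Python fragment is a hypothetical illustration (the function name and the list-of-edges input format are our own), using the parameter values quoted in the text:

```python
import numpy as np

def ph_activation(theta_i, r_i, edges, sigma_ang=np.pi / 48):
    """Activation of one PH cell with preferred direction theta_i and
    distance r_i, summed over visible edge points, per Eq. 1.
    `edges` is a list of (theta_j, r_j) pairs relative to the observer.
    A sketch of the published formula, not the authors' code."""
    act = 0.0
    for theta_j, r_j in edges:
        sigma_dir = 2 * (r_j / 10.0) ** 2  # sigma_dir(r) = 2(r/10)^2
        act += (np.exp(-(theta_i - theta_j) ** 2 / (2 * sigma_ang ** 2))
                / np.sqrt(2 * np.pi * sigma_ang ** 2)
                * np.exp(-(r_i - r_j) ** 2 / (2 * sigma_dir ** 2))
                / np.sqrt(2 * np.pi * sigma_dir ** 2))
    return act
```

A cell whose preferred direction and distance match a visible edge point responds strongly; activation falls off as a product of Gaussians in angle and distance, with the distance tuning broadening for far-away edges.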
The PH-HF and HF-PH connection strengths were set equal to the correlations between activations in the parahippocampal and hippocampal regions across all training locations, and similarly, the HF-HF weights were set to values proportional to a Gaussian of the distance between their preferred locations. The weights to the perirhinal (PR) object feature units - on the HF-to-PR and PH-to-PR connections - were trained by simulating sequential attention to each visible object, from each training location. Thus, a single object's textural features in the PR layer were associated with the corresponding PH location features and HF place cell activations via Hebbian learning. The PR-to-HF weights were trained to associate each training location with the single predominant texture - either that of a nearby object or that of the background. The connections to and within the parietal component of the model were hard-wired to implement the bidirectional allocentric-egocentric mappings (these are functionally equivalent to a rotation by adding or subtracting the heading angle). The 2-layer parietal circuit in Figure 1 essentially encodes separate transformation matrices for each of a discrete set of head directions in the first layer. A right parietal lesion causing left neglect was simulated with graded, random knockout to units in the egocentric map of the left side of space. This could have equally been made to the translation units projecting to them (i.e. those in the top rows of the PP in Figure 1). After pretraining the model, we performed two sets of simulations. In simulation 1, the model was required to recall the allocentric representation of the Milan square after being cued with the texture and direction ($\theta_j$) of each of the visible buildings in turn, at a short distance $r_j$. 
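The weight-setting rules just described can be summarized in a short sketch. This is a minimal illustration under our own conventions (activation matrices with one row per training location; function names are hypothetical; a covariance-based estimate stands in for the correlation rule), not the authors' code:

```python
import numpy as np

def hebbian_weights(ph_acts, hf_acts):
    """PH->HF weights (transpose gives HF->PH) set from the
    co-variation of unit activations across training locations.
    Rows of each matrix index training locations."""
    ph = ph_acts - ph_acts.mean(axis=0)
    hf = hf_acts - hf_acts.mean(axis=0)
    return ph.T @ hf / len(ph_acts)

def recurrent_weights(places, sigma=1.0):
    """HF-HF weights proportional to a Gaussian of the distance
    between the cells' preferred places (an N x 2 coordinate array)."""
    d2 = ((places[:, None, :] - places[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))
```

The Gaussian recurrent weights are what make the place-cell layer behave as a continuous attractor: nearby cells excite one another, so activity settles into a localized bump.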
The initial input to the HF, $I^{HF}(t=0)$, was the sum of an externally provided texture cue from the PR cell layer, and a distance and direction cue from the PH cell layer obtained by initializing the PH states using equation 1, with $r_j = 2$. A place was then recalled by repeatedly updating the HF cells' states until convergence according to:
$$I^{HF}(t) = 0.25\, I^{HF}(t-1) + 0.75\left(W^{HF\text{-}HF} A^{HF}(t-1) + I^{HF}(0)\right) \qquad (2)$$
$$A^{HF}_i(t) = \exp(I^{HF}_i(t)) \Big/ \sum_k \exp(I^{HF}_k(t)) \qquad (3)$$
$$I^{PH}(t) = 0.9\, I^{PH}(t-1) + 0.1\, W^{HF\text{-}PH} A^{HF}(t) \qquad (4)$$
Finally, the HF place cell activity was used to perform pattern completion in the PH layer (using the $W^{HF\text{-}PH}$ weights), to recall the other visible building locations. In simulation 2 the model was then required to generate view-based mental images of the Milan square from various viewpoints according to a specified heading direction. First, the PH cells and HF place cells were initialized to the states of the retrieved spatial location (obtained after settling in simulation 1). The model was then asked what it "saw" in various directions by simulating focused attention on the egocentric map, and requiring the model to retrieve the object texture at that location via activation of the PR region. The egocentric medial parietal (MP) activation was calculated from the PH-to-MP mapping, as described above. Attention to a queried egocentric direction was simulated by modulating the pattern of activation across the MP layer with a Gaussian filter centered on that location. This activation was then mapped back to the PH layer, and in turn projected to the PR layer via the PH-to-PR connections:
$$I^{PR} = W^{HF\text{-}PR} A^{HF} + W^{PH\text{-}PR} A^{PH} \qquad (5)$$
$$A^{PR}_i = \exp(I^{PR}_i) \Big/ \sum_k \exp(I^{PR}_k) \qquad (6)$$
2.4 Results and discussion In simulation 1, when cued with the textures of each of the 5 buildings around the training region, the model settled on an appropriate place cell activation. One such example is shown in Figure 2, upper panel. 
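The settling dynamics of Eqs. 2-3 amount to leaky integration of recurrent input plus a fixed external cue, with a softmax nonlinearity. A minimal sketch, assuming a fixed number of update steps stands in for the convergence test (not the authors' code):

```python
import numpy as np

def settle_hf(I0, W_hfhf, steps=50):
    """Iterate the HF update of Eqs. 2-3: leaky integration of the
    recurrent input plus the fixed external cue I0, with a softmax
    giving the place-cell activations A."""
    I = I0.copy()
    A = np.exp(I) / np.exp(I).sum()
    for _ in range(steps):
        I = 0.25 * I + 0.75 * (W_hfhf @ A + I0)  # Eq. 2
        A = np.exp(I) / np.exp(I).sum()          # Eq. 3 (softmax)
    return A
```

Because of the softmax, the settled activation is a normalized distribution over place cells peaked at the location most consistent with the cue; Eq. 4 would then smoothly drive the PH layer from this settled state.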
The model was cued with the texture of the cathedral front, and settled to a place representation near to its southwest corner. The resulting PH layer activations show correct recall of the locations of the other landmarks around the square. In simulation 2, shown in the lower panel, the model rotated the PH map according to the cued heading direction, and was able to retrieve correctly the texture of each building when queried with its egocentric direction. In the lesioned model, buildings to the egocentric left were usually not identified correctly. One such example is shown in Figure 2. The heading direction is to the south, so building 6 is represented at the top (egocentric forward) of the map. The building to the left has texture 5, and the building to the right has texture 7. After a simulated parietal lesion, the model neglects building 5. 3 Predictions and future directions We have demonstrated how egocentric spatial representations may be formed from allocentric ones and vice versa. How might these representations and the mapping between them be learned? The entorhinal cortex (EC) is the major cortical input zone to the hippocampus, and both the parahippocampal and perirhinal regions project to it [13]. Single cell recordings in EC indicate tuning curves that are broadly similar to those of place cells, but are much more coarsely tuned and less specific to individual episodes [21, 9]. Additionally, EC cells can hold state information, such as a spatial location or object identity, over long time delays and even across intervening items [9]. An allocentric representation could emerge if the EC is under pressure to use a more compressed, temporally stable code to reconstruct the rapidly changing visuospatial input. An egocentric map is altered dramatically after changes in viewpoint, whereas an allocentric map is not. 
Thus, the PH and hippocampal representations could evolve via an unsupervised learning procedure that discovers a temporally stable, generative model of the parietal input. The inverse mapping from allocentric PH features to egocentric parietal features could be learned by training the back-projections similarly. But how could the egocentric map in the parietal region be learned in the first place? In a manner analogous to that suggested by Abbott [22], a "hidden layer" trained by Hebbian learning could develop egocentric features in learning a mapping from a sensory layer representing retinally located targets and arbitrary heading directions to a motor layer representing randomly explored (whole-body) movement directions. We note that our parietal imagery system might also support the short-term visuospatial working memory required in more perceptual tasks (e.g. line cancellation) [2]. Thus lesions here would produce the commonly observed pattern of combined perceptual and representational neglect. However, the difference in the routes by which perceptual and reconstructed information would enter this system, and possibly in how they are manipulated, allow for patients showing only one form of neglect [23]. So far our simulations have involved a single spatial environment. Place cells recorded from the same rat placed in two similar novel environments show highly similar firing fields [10, 24], whereas after further exposure, distinctive responses emerge (e.g., [25, 26, 24] and unpublished data). In our model, sparse random connections from the object layer to the place layer ensure a high degree of initial place-tuning that should generalize across similar environments. Plasticity in the HF-PR connections will allow unique textures of walls, buildings etc to be associated with particular places; thus after extensive exposure, environment-specific place firing patterns should emerge. 
A selective lesion to the parahippocampus should abolish the ability to make allocentric object-place associations altogether, thereby severely disrupting both landmark-based and memory-based navigation. In contrast, a pure hippocampal lesion would spare the ability to represent a single object's distance and allocentric directions from a location, so navigation based on a single landmark should be spared. If an arrangement of objects is viewed in a 3-D environment, the recall or recognition of the arrangement from a new viewpoint will be facilitated by having formed an allocentric representation of their locations. Thus we would predict that damage to the hippocampus would impair performance on this aspect of the task, while memory for the individual objects would be unimpaired. Similarly, we would expect a viewpoint-dependent effect in hemispatial neglect patients. [Figure 2 panels: Schematized Milan Square; HF act given texture=1; PH act + head dir; MP act + query dir; PR activations - Control; MP activations with neglect; PR activations - Lesioned.] Figure 2: I. Top panel. Left: training locations in the Milan square are plotted in the black rectangle. Middle: HF place cell activations, after being cued that building #1 is nearby and to the north. Place cells are arranged in a polar coordinate grid according to the distance and direction of their preferred locations relative to the centre of the environment (bright white spot). The white blurry spot below and at the left end of building #1 is the maximally activated location. Edge points of buildings used during training are also shown here. Right: PH inputs to place cell layer are plotted in polar coordinates, representing the recalled distances and directions of visible edges associated with the maximally activated location. The externally cued heading direction is also shown here. II. Bottom panel. 
Left: An imagined view in the egocentric map layer (MP), given that the heading direction is south; the visible edges shown above have been rotated by 180 degrees. Mid-left: the recalled texture features in the PR layer are plotted in two different conditions, simulating attention to the right (circles) and left (stars). Mid-right and right: Similarly, the MP and PR activations are shown after damage to the left side of the egocentric map. One of the many curiosities of the hemispatial neglect syndrome is the temporary amelioration of spatial neglect after left-sided vestibular stimulation (placement of cold water into the ear) and transcutaneous mechanical vibration (for a review, see [27]), which presumably affects the perceived head orientation. If the stimulus is evoking erroneous vestibular or somatosensory inputs to shift the perceived head direction system leftward, then all objects will now be mapped further rightward in egocentric space and into the 'good side' of the parietal map in a lesioned model. The model predicts that this effect will also be observed in imagery, as is consistent with a recent result [28]. Acknowledgments We thank Allen Cheung for extensive pilot simulations and John O'Keefe for useful discussions. NB is a Royal Society University Research Fellow. This work was supported by research grants from NSERC, Canada to S.B. and from the MRC, GB to N.B. References [1] E. Bisiach and C. Luzzatti. Cortex, 14:129-133, 1978. [2] A. Pouget and T.J. Sejnowski. J. Cog. Neuro., 9(2):222-237, 1997. [3] E. Salinas and L.F. Abbott. J. Neurosci., 15:6461-6474, 1995. [4] E.A. Maguire, N. Burgess, J.G. Donnett, R.S.J. Frackowiak, C.D. Frith, and J. O'Keefe. Science, 280:921-924, May 8 1998. [5] N. Burgess, H. Spiers, E. Maguire, S. Baxendale, F. Vargha-Khadem, and J. O'Keefe. Submitted. [6] J. O'Keefe. Exp. Neurol., 51:78-109, 1976. [7] N. Burgess, K. Jeffery, and J. O'Keefe. In K.J. Jeffery, N. Burgess, and J. 
O'Keefe, editors, The hippocampal and parietal foundations of spatial cognition. Oxford U. Press, 1999. [8] A.D. Milner, H.C. Dijkerman, and D.P. Carey. In K.J. Jeffery, N. Burgess, and J. O'Keefe, editors, The hippocampal and parietal foundations of spatial cognition. Oxford U. Press, 1999. [9] W.A. Suzuki, E.K. Miller, and R. Desimone. J. Neurosci., 78:1062-1081, 1997. [10] J. O'Keefe and N. Burgess. Nature, 381:425-428, 1996. [11] J.S. Taube. Prog. Neurobiol., 55:225-256, 1998. [12] T. Hartley, N. Burgess, C. Lever, F. Cacucci, and J. O'Keefe. Hippocampus, 10:369-379, 2000. [13] W.A. Suzuki and D.G. Amaral. J. Neurosci., 14:1856-1877, 1994. [14] P.C. Fletcher, C.D. Frith, S.C. Baker, T. Shallice, R.S.J. Frackowiak, and R.J. Dolan. Neuroimage, 2(3):195-200, 1995. [15] R.A. Andersen, G.K. Essick, and R.M. Siegel. Science, 230(4724):456-458, 1985. [16] L.H. Snyder, A.P. Batista, and R.A. Andersen. Nature, 386:167-170, 1997. [17] D. Zipser and R.A. Andersen. Nature, 331:679-684, 1988. [18] N. Burgess, E. Maguire, H. Spiers, and J. O'Keefe. Submitted. [19] A. Samsonovich and B.L. McNaughton. J. Neurosci., 17:5900-5920, 1997. [20] S. Deneve, P.E. Latham, and A. Pouget. Nature Neuroscience, 2(8):740-745, 1999. [21] G.J. Quirk, R.U. Muller, J.L. Kubie, and J.B. Ranck. J. Neurosci., 12:1945-1963, 1992. [22] L.F. Abbott. Int. J. of Neur. Sys., 6:115-122, 1995. [23] C. Guariglia, A. Padovani, P. Pantano, and L. Pizzamiglio. Nature, 364:235-7, 1993. [24] C. Lever, F. Cacucci, N. Burgess, and J. O'Keefe. In Soc. Neurosci. Abs., vol. 24, 1999. [25] E. Bostock, R.U. Muller, and J.L. Kubie. Hippocampus, 1:193-205, 1991. [26] R.U. Muller and J.L. Kubie. J. Neurosci., 7:1951-1968, 1987. [27] G. Vallar. In K.J. Jeffery, N. Burgess, and J. O'Keefe, editors, The hippocampal and parietal foundations of spatial cognition. Oxford U. Press, 1999. [28] C. Guariglia, G. Lippolis, and L. Pizzamiglio. Cortex, 34(2):233-241, 1998.
2000
Adaptive Object Representation with Hierarchically-Distributed Memory Sites Bosco S. Tjan Department of Psychology University of Southern California btjan@usc.edu Abstract Theories of object recognition often assume that only one representation scheme is used within one visual-processing pathway. Versatility of the visual system comes from having multiple visual-processing pathways, each specialized in a different category of objects. We propose a theoretically simpler alternative, capable of explaining the same set of data and more. A single primary visual-processing pathway, loosely modular, is assumed. Memory modules are attached to sites along this pathway. Object-identity decision is made independently at each site. A site's response time is a monotonic-decreasing function of its confidence regarding its decision. An observer's response is the first-arriving response from any site. The effective representation(s) of such a system, determined empirically, can appear to be specialized for different tasks and stimuli, consistent with recent clinical and functional-imaging findings. This, however, merely reflects a decision being made at its appropriate level of abstraction. The system itself is intrinsically flexible and adaptive. 1 Introduction How does the visual system represent its knowledge about objects so as to identify them? A largely unquestioned assumption in the study of object recognition has been that the visual system builds up a representation for an object by sequentially transforming an input image into progressively more abstract representations. The final representation is taken to be the representation of an object and is entered into memory. Recognition of an object occurs when the representation of the object currently in view matches an item in memory. 
Highly influential proposals for a common representation of objects [1, 2] have failed to show promise of either producing a working artificial system or explaining a gamut of behavioral data. This insistence on having a common representation for all objects is also a major cause of the debate on whether the perceptual representation of objects is 2-D appearance-based or 3-D structure-based [3, 4]. Recently, a convergence of data [5-9], including those from the viewpoint debate itself [10, 11], have been used to suggest that the brain may use multiple mechanisms or processing pathways to recognize a multitude of objects. While insisting on a common representation for all objects seems too restrictive in light of the varying complexity across objects [12], asserting a new pathway for every idiosyncratic data cluster seems unnecessary. We propose a parsimonious alternative, which is consistent with existing data but explains them with novel insights. Our framework relies on a single processing pathway. Flexibility and self-adaptivity are achieved by having multiple memory and decision sites distributed along the pathway. 2 Theory and Methods If the visual system needs to construct an abstract representation of objects for a certain task (e.g. object categorization), it will have to do so via multiple stages. The intermediate result at each stage is itself a representation. The entire processing pathway thus provides a hierarchy of representations, ranging from the most image-specific at the earliest stage to the most abstract at the latest stage. The central idea of our proposal is that the visual system can tap this hierarchical collection of representations by attaching memory modules along the processing pathway. We further speculate that each memory site makes independent decisions about the identity of an incoming image. 
Each announces its decision after a delay, determined by an amount related to the site's confidence about its own decision and the amount of memory it needs to consult before reaching the decision. The homunculus does nothing but takes the first-arriving response as the system's response. Figure 1a depicts this framework, which we shall call the Hierarchically Distributed Decision Theory for object recognition. Figure 1: An illustration of the Hierarchically Distributed Decision Theory of object recognition (a) and its implementation in a toy visual system (b). 2.1 A toy visual system We constructed a toy visual system to illustrate various properties of the Hierarchically Distributed Decision Theory. The task for this toy system is to identify letters presented at arbitrary position and orientation and corrupted by Gaussian luminance noise. This system is not meant to be a model of human vision, but rather a demonstration of the theory. Given a stimulus (letter+noise), the position of the target letter is first estimated and centered in the image (position normalization) by computing the centroid of the stimulus' luminance profile. Once centered, the principal axis of the luminance profile is determined and the entire image is rotated so that this axis is vertical (orientation normalization). The representation at this final stage is both position- and orientation-invariant. Traditionally, one would commit only this final representation to memory. In contrast, the Hierarchically Distributed Decision Theory states that the intermediate results are also committed to some form of sensory memory (Figure 1b). A memory item is a feature vector. For this toy system, a feature vector is a sub-sampled image at the output of each stage. 
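The two normalization stages (centroid centering and principal-axis alignment) can be computed from image moments. The paper gives no code, so the following Python sketch is one plausible implementation under our own conventions; it returns the centroid and the principal-axis angle rather than a resampled image:

```python
import numpy as np

def normalize_image(img):
    """Estimate the normalization parameters for a luminance image:
    the centroid (for position normalization) and the angle of the
    principal axis of the luminance profile (for orientation
    normalization). Hypothetical helper, not the authors' code."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    cy, cx = (ys * img).sum() / total, (xs * img).sum() / total
    # Second moments of the luminance profile about the centroid.
    dy, dx = ys - cy, xs - cx
    cov = np.array([[(dx * dx * img).sum(), (dx * dy * img).sum()],
                    [(dx * dy * img).sum(), (dy * dy * img).sum()]]) / total
    evals, evecs = np.linalg.eigh(cov)
    principal = evecs[:, np.argmax(evals)]  # largest-variance axis (x, y)
    angle = np.arctan2(principal[1], principal[0])
    return (cy, cx), angle
```

Shifting the image by the centroid and rotating it so this axis is vertical would then yield the position- and orientation-normalized representations kept at Sites 2 and 3.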
To recognize a letter, each site $s$ independently decides the letter's identity $L_s$ based on the intermediate representation $I_s$ available to the site. It does so by maximizing the posterior probability $\Pr(L_s \mid I_s)$, assuming 1) independent feature noise of known distribution (in this case, independent Gaussian luminance noise of zero mean and standard deviation $\sigma$) and 2) that its memory content completely captures all other sources of correlated noise and signal uncertainties (deviation from which is assessed by Eq. 3). Specifically,
$$L_s = \arg\max_{\Gamma \in Letters} \Pr(\Gamma \mid I_s) \qquad (1)$$
where $Letters$ is the set of letter identities. A letter identity $\Gamma$ is in turn a set of letter images $V$ at a given luminance, which may be shifted or rotated. So we have,
$$\Pr(\Gamma \mid I_s) = \sum_{V \in \Gamma} \Pr(V \mid I_s) = \sum_{V \in \Gamma} \Pr(I_s \mid V)\Pr(V)\big/\Pr(I_s)$$
$$= \sum_{V \in \Gamma} \exp\!\left(-\frac{\|I_s - V\|^2}{2\sigma^2}\right)\Pr(V) \Bigg/ \sum_{\Gamma' \in Letters} \sum_{V \in \Gamma'} \exp\!\left(-\frac{\|I_s - V\|^2}{2\sigma^2}\right)\Pr(V) \qquad (2)$$
In addition to choosing a response, each site delays sending out its response by an amount $T_s$. $T_s$ is related to each site's own assessment of its confidence about its decision and the size of memory it needed to consult to make the decision. $T_s$ is a monotonically decreasing function of confidence (one minus the maximum posterior probability) and a monotonically increasing function of memory size:
$$T_s = h_1\left(1 - \max_{\Gamma \in Letters} \Pr(\Gamma \mid I_s)\right) + h_2 \log(M_s) + h_0 \qquad (3)$$
$h_0$, $h_1$, and $h_2$ are constants common to all sites. $M_s$ is the effective number of items in memory at site $s$, equal to the number of distinct training views the site saw (or the limit of its memory size, whichever is less). In our toy system, $M_1$ is the number of distinct training views presented to the system. $M_2$ is approximately the number of training views with distinct orientations (because $I_2$ is normalized by position), and $M_3$ is effectively one view per letter. In general, $M_1 > M_2 > M_3$. 
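The per-site decision rule of Eqs. 1-2 reduces to summing Gaussian likelihoods over stored views. A hypothetical Python sketch (the dictionary-based memory layout and function names are our own), assuming a uniform prior over stored views so that Pr(V) cancels:

```python
import numpy as np

def site_posterior(I_s, memory, sigma):
    """Posterior over letter identities at one site, per Eq. 2:
    each letter's score sums Gaussian likelihoods of the input
    against that letter's stored views. `memory` maps
    letter -> list of stored feature vectors."""
    scores = {}
    for letter, views in memory.items():
        scores[letter] = sum(
            np.exp(-np.sum((I_s - v) ** 2) / (2 * sigma ** 2))
            for v in views)
    total = sum(scores.values())
    return {k: s / total for k, s in scores.items()}

def site_decision(I_s, memory, sigma):
    """Eq. 1: pick the maximum-posterior letter."""
    post = site_posterior(I_s, memory, sigma)
    return max(post, key=post.get)
```

Summing over views within a letter is what lets a site whose memory contains many shifted and rotated copies still answer with a single letter identity.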
Relative to the decision time $T_s$, the processing time required to perform normalizations is assumed to be negligible. (This assumption can be removed by letting $h_0$ depend on site $s$.) 2.2 Learning and testing The learning component of the theory has yet to be determined. For our toy system, we assumed that the items kept in memory are free of luminance noise but subjected to normalization errors caused by the luminance noise (e.g. the position of a letter may not be perfectly determined). We measured performance of the toy system by first exposing it to 5 orientations and 20 positions of each letter at high signal-to-noise ratio (SNR). Ten letters from the Times Roman font were used in the simulation (bcdeghnopw). The system keeps in memory those studied views (Site 1) and their normalized versions (Sites 2 & 3). Therefore, $M_1 = 5 \times 20 \times 10 = 1000$. Since the normalization processes are reliable at high SNR, $M_2 \approx 50$, and $M_3 \approx 10$. We tested the system by presenting it with letters from either the studied views, or views it had not seen before. In the latter case, a novel view could be either with novel position alone, or with both novel position and orientation. The test stimuli were presented at SNR ranging from 210 to 1800 (Weber contrast of 10-30% at mean luminance of 48 cd/m² and a noise standard deviation of 10 cd/m²). 3 Results and Discussions Figure 2a shows the performance of our toy visual system under different stimulus conditions. The numbered thin curves indicate recognition accuracy achieved by each site. As expected, Site 1, which kept raw images in memory, achieved the best accuracy when tested with studied views, but it could not generalize to novel views. In contrast, Site 3 maintained essentially the same level of performance regardless of view condition; its representation was invariant to position and orientation. 
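The delay rule of Eq. 3 and the first-arriving-response readout can be sketched as follows. The constants h0, h1, h2 below are arbitrary illustrative values, not the fitted ones used in the simulations, and the function names are our own:

```python
import math

def site_delay(max_posterior, M_s, h0=0.1, h1=1.0, h2=0.05):
    """Eq. 3: the delay grows with uncertainty (1 - max posterior)
    and with the log of the site's effective memory size M_s.
    h0, h1, h2 are placeholder constants for illustration."""
    return h1 * (1.0 - max_posterior) + h2 * math.log(M_s) + h0

def system_response(site_outputs):
    """The homunculus takes the first-arriving response. Each entry
    is (response, max_posterior, M_s); return the response whose
    delay is smallest."""
    return min(site_outputs, key=lambda r: site_delay(r[1], r[2]))[0]
```

With these placeholder constants, a confident site with a small memory (like Site 3) beats an equally confident site with a large memory, which is exactly the competition the theory relies on.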
Figure 2: (a) Accuracy of the system (solid symbols) versus accuracy of each site (numbered curves) under different contrast and view conditions. (b) Relative frequency of a site issuing the first-arriving response. The thick curves with solid symbols indicate the system's performance based on first-arriving responses. Clearly, it tracked the performance of the best-performing site under all conditions. Under the novel-position condition, the system's performance was even better than the best-performing sites. This is because although Sites 2 and 3 performed equally well, they made different errors. The simple delay rule effectively picked out the most reliable response at each trial. Figure 2b shows the source distribution of the first-arriving responses. When familiar (i.e. studied) views were presented at low contrast (low SNR), Site 1, which used the raw image as its representation, was responsible for issuing about 60% of the first-arriving responses. This is because normalization processes tend to be less reliable at low SNR. Whenever an input to Site 2 or 3 cannot be properly normalized, it will match poorly to the normalized views in memory, resulting in lower confidence and longer delay. As contrast increased, normalization processes became more accurate, and the first-arriving responses shifted to the higher sites. Higher sites encode more invariance, and thus need to consult fewer memory items. Lastly, when novel views were presented, Site 3 tended to be the most active, since it was the only site that fully captured all the invariance necessary for this condition. The delay mechanism specified by Eq. 3 allows the system as a whole to be self-adaptive. Its effective representation, if we can speak of such, is flexible. No site is exclusively responsible for any particular kind of stimuli. 
Instead, the decision is always distributed across sites on a trial-by-trial basis. What do existing human data on object recognition have to say about this simple framework? Wouldn't data supporting functional specialization or object-category-specific representations argue against it? Not at all!

3.1 Viewpoint effects

Entry-level object recognition [13] often shows less viewpoint dependence than subordinate-level object recognition. This has been taken to suggest that two different mechanisms or forms of representation may subserve these two types of object recognition tasks [4]. Figure 3a shows our system's overall performance in response time (RT) and error rate when tested with the studied (thus "familiar") and the novel (new positions and orientations) views. The difference in RT and error rate between these two conditions (Figure 3b) is a rough measure of the viewpoint effect. Even though the system includes a site (Site 3) with a viewpoint-invariant representation, the system's overall performance still depends on viewpoint, particularly at low contrast.

Figure 3: (a) RT and error rate of the toy system when tested with either the studied or novel views. (b) Difference between the two conditions (view-dependent vs. view-invariant).

Because the representation space of this toy system is the image space, contrast is a direct measure of "perceptual" distinctiveness. Figure 3b shows that when objects were sufficiently distinct (as in entry-level recognition), there was little or no viewpoint effect. When objects were highly similar, performance was equally poor for studied and novel views, so there was little viewpoint effect to speak of. The viewpoint effect was localized to a mid-range of distinctiveness.
Within this range, increasing similarity increased viewpoint dependence. The fact that the viewpoint effect was present only within a bounded range of distinctiveness agrees with the general experience that a sizable viewpoint effect is uncommon unless artificially created objects, or objects chosen from the same category (subordinate-level recognition), are used.

3.2 Functionally specialized brain regions

Various fMRI studies have observed what appear to be functionally specialized brain regions involved in object perception [7-9]. To identify and localize such areas, a typical approach is to subtract the observed hemodynamic signals under one stimulus condition from those under a different condition. An area is said to be "selective" to a stimulus type X if its signal strength is higher whenever X, as opposed to some other type of stimuli, is displayed. We performed a simulated "imaging" study on our toy visual system. Consider Figure 3b. If we assume that one unit of metabolic energy is needed to send a response, and that no more responses will be sent after the first-arriving response has been received, we can re-label the x-axis of the histograms as "hemodynamic signal", or "activation level". Furthermore, as mentioned before, we can label stimuli at high contrast as "distinct objects" and those at low contrast as "similar objects". When we computed "similar minus distinct", we obtained the result shown in the lower right-hand panel of Figure 4a. Site 1 was more active than all other sites when recognition was between similar objects, while Site 3 was more active when recognition was between distinct objects. The standard practice of interpreting such a result would label Site 1 as an area for processing similar (perhaps subordinate-level) objects, and Site 3 as an area for processing distinct (perhaps entry-level) objects. Knowing how the decisions are actually made, however, such labeling is clearly misguided.
When instead we computed "familiar minus novel", we obtained a similar pattern of results (Figure 4a, upper right). However, this time we would have to label Site 1 as an area for processing familiar objects (or an area for expertise), and Site 3 as one for novel objects. Analogous to an ongoing debate about expertise vs. object-specificity [14], whether Site 1 is for familiar objects or for similar objects cannot be resolved by the subtraction method alone. According to the standard interpretation of the subtraction method, our toy visual system appeared to contain functionally specialized sites; yet none of the sites were designed to specialize in any kind of stimuli. Even in the most extreme cases, no site was responsible for more than 70% of the decisions. One last point is worth mentioning. The primary visual pathway was equally active under all conditions, so its activity became invisible after subtraction. The observed signal change revealed only the difference in memory activities.

Figure 4: The toy visual system gives the appearance of containing functionally specialized modules in simulated functional imaging (a) and lesion studies (b).

3.3 Category-specific deficits

Patients with prosopagnosia cannot recognize faces, but their ability to recognize other objects is often spared. Patients with visual object agnosia have the opposite impairments. This kind of double dissociation is taken as further evidence that the visual system contains object-specific modules (cf. [15]). We observed the same kind of double dissociation with our toy model. Figure 4b shows what happened when we "lesioned" different memory sites in our system by preventing a site from issuing any response.
When Site 1 was lesioned, recognition performance for similar-but-familiar objects (analogous to familiar faces) was impeded, while performance for distinct-but-novel objects was spared. The opposite was true when Site 3 was lesioned. It is worth restating that our toy system consisted of only a single processing pathway and no category-specific representations.

4 Conclusion

Intermediate representations along a single visual-processing pathway form a natural hierarchy of abstractions. We have shown that by attaching sensory memory modules to the pathway, this hierarchy can be exploited to achieve an effective representation of objects that is highly flexible and adaptive. Each memory module makes an independent decision regarding the identity of an object based on the intermediate representation available to it. Each module delays sending out its response by an amount related to its confidence in its decision, in addition to the time required for memory lookup. The first-arriving response becomes the system's response. It is an attractive conjecture that this scheme of adaptive representation may be used by the visual system. Through a toy example, we have shown that such a system can appear to behave like one with multiple functionally specialized pathways or category-specific representations, raising questions for contemporary interpretations of behavioral, clinical, and functional-imaging data regarding the neuro-architecture for object recognition.

References

1. Marr, D., Vision. 1982, San Francisco: Freeman.
2. Biederman, I., Recognition-by-components: A theory of human image understanding. Psychological Review, 1987. 94: p. 115-147.
3. Biederman, I. and P.C. Gerhardstein, Recognizing depth-rotated objects: Evidence and conditions for three-dimensional viewpoint invariance. Journal of Experimental Psychology: Human Perception and Performance, 1993. 19: p. 1162-1182.
4. Tarr, M.J. and H.H.
Bülthoff, Is human object recognition better described by geon structural descriptions or by multiple views? Comment on Biederman and Gerhardstein (1993). Journal of Experimental Psychology: Human Perception and Performance, 1995. 21: p. 1494-1505.
5. Farah, M.J., Is an object an object an object? Cognitive and neuropsychological investigations of domain-specificity in visual object recognition. Current Directions in Psychological Science, 1992. 1: p. 164-169.
6. Kanwisher, N., M.M. Chun, and P. Ledden, Functional imaging of human visual recognition. Cognitive Brain Research, 1996. 5: p. 55-67.
7. Kanwisher, N., J. McDermott, and M.M. Chun, The fusiform face area: A module in human extra-striate cortex specialized for face perception. Journal of Neuroscience, 1997. 17: p. 1-10.
8. Kanwisher, N., et al., A locus in human extrastriate cortex for visual shape analysis. Journal of Cognitive Neuroscience, 1997. 9: p. 133-142.
9. Ishai, A., et al., fMRI reveals differential activation in the ventral object recognition pathway during the perception of faces, houses and chairs. Neuroimage, 1997. 5(149).
10. Edelman, S., Features of Recognition. 1991, Rehovot, Israel: Weizmann Institute of Science.
11. Jolicoeur, P., Identification of disoriented objects: A dual-system theory. Memory & Cognition, 1990. 13: p. 289-303.
12. Tjan, B.S. and G.E. Legge, The viewpoint complexity of an object recognition task. Vision Research, 1998. 38: p. 2335-50.
13. Jolicoeur, P., M.A. Gluck, and S.M. Kosslyn, From pictures to words: Making the connection. Cognitive Psychology, 1984. 16: p. 243-275.
14. Gauthier, I., et al., Activation of the middle fusiform "face area" increases with expertise in recognizing novel objects. Nature Neuroscience, 1999. 2(6): p. 568-573.
15. Farah, M.J., Visual Agnosia: Disorders of Object Recognition and What They Tell Us about Normal Vision. 1990, Cambridge, MA: MIT Press.
2000
Text Classification using String Kernels

Huma Lodhi, John Shawe-Taylor, Nello Cristianini, Chris Watkins
Department of Computer Science, Royal Holloway, University of London, Egham, Surrey TW20 0EX, UK
{huma, john, nello, chrisw}@dcs.rhbnc.ac.uk

Abstract

We introduce a novel kernel for comparing two text documents. The kernel is an inner product in the feature space consisting of all subsequences of length k. A subsequence is any ordered sequence of k characters occurring in the text, though not necessarily contiguously. The subsequences are weighted by an exponentially decaying factor of their full length in the text, hence emphasising those occurrences that are close to contiguous. A direct computation of this feature vector would involve a prohibitive amount of computation even for modest values of k, since the dimension of the feature space grows exponentially with k. The paper describes how, despite this fact, the inner product can be efficiently evaluated by a dynamic programming technique. A preliminary experimental comparison of the performance of the kernel with a standard word feature space kernel [6] is made, showing encouraging results.

1 Introduction

Standard learning systems (like neural networks or decision trees) operate on input data after they have been transformed into feature vectors x_1, ..., x_l ∈ X from an n-dimensional space. There are cases, however, where the input data cannot be readily described by explicit feature vectors: for example biosequences, images, graphs and text documents. For such datasets, the construction of a feature extraction module can be as complex and expensive as solving the entire problem. An effective alternative to explicit feature extraction is provided by kernel methods. Kernel-based learning methods use an implicit mapping of the input data into a high-dimensional feature space defined by a kernel function, i.e. a function returning the inner product between the images of two data points in the feature space.
The learning then takes place in the feature space, provided the learning algorithm can be entirely rewritten so that the data points only appear inside dot products with other data points. Several linear algorithms can be formulated in this way, for clustering, classification and regression. The most typical example of kernel-based systems is the Support Vector Machine (SVM) [10, 3], which implements linear classification. One interesting property of kernel-based systems is that, once a valid kernel function has been selected, one can practically work in spaces of any dimensionality without paying any computational cost, since the feature mapping is never effectively performed. In fact, one does not even need to know what features are being used. In this paper we examine the use of a kernel method based on string alignment for text categorization problems. A standard approach [5] to text categorisation makes use of the so-called bag-of-words (BOW) representation, mapping a document to a bag (i.e. a set that counts repeated elements), hence losing all the word-order information and only retaining the frequency of the terms in the document. This is usually accompanied by the removal of non-informative words (stop words) and by the replacement of words by their stems, so losing inflection information. This simple technique has recently been used very successfully in supervised learning tasks with Support Vector Machines (SVMs) [5]. In this paper we propose a radically different approach, which considers documents simply as symbol sequences, and makes use of specific kernels. The approach is entirely subsymbolic, in the sense that it considers the document just as a unique long sequence, and yet it is capable of capturing topic information. We build on recent advances [11, 4] that demonstrated how to build kernels over general structures like sequences.
The most remarkable property of such methods is that they map documents to vectors without explicitly representing them, by means of sequence alignment techniques. A dynamic programming technique makes the computation of the kernels very efficient (linear in the documents' length). It is surprising that such a radical strategy, extracting only alignment information, delivers positive results in topic classification, comparable with the performance of problem-specific strategies: it seems that in some sense the semantics of a document can be at least partly captured by the presence of certain substrings of symbols. Support Vector Machines [3] are linear classifiers in a kernel-defined feature space. The kernel is a function which returns the dot product of the feature vectors φ(x) and φ(x') of two inputs x and x': K(x, x') = φ(x)^T φ(x'). Choosing very high-dimensional feature spaces ensures that the required functionality can be obtained using linear classifiers. The computational difficulties of working in such feature spaces are avoided by using a dual representation of the linear functions in terms of the training set S = {(x_1, y_1), (x_2, y_2), ..., (x_m, y_m)}:

f(x) = Σ_{i=1}^{m} α_i y_i K(x, x_i) - b.

The danger of overfitting by resorting to such a high-dimensional space is averted by maximising the margin, or a related soft version of this criterion, a strategy that has been shown to ensure good generalisation despite the high dimensionality [9, 8].

2 A Kernel for Text Sequences

In this section we describe a kernel between two text documents. The idea is to compare them by means of the substrings they contain: the more substrings in common, the more similar they are. An important point is that such substrings do not need to be contiguous, and the degree of contiguity of one such substring in a document determines how much weight it will have in the comparison.
For example, the substring 'c-a-r' is present both in the word 'card' and in the word 'custard', but with different weightings. For each such substring there is a dimension of the feature space, and the value of that coordinate depends on how frequently and how compactly the string is embedded in the text. In order to deal with non-contiguous substrings, it is necessary to introduce a decay factor λ ∈ (0, 1) that can be used to weight the presence of a certain feature in a text (see Definition 1 for more details).

Example. Consider the words cat, car, bat, bar. If we consider only k = 2, we obtain an 8-dimensional feature space, where the words are mapped as follows:

         c-a   c-t   a-t   b-a   b-t   c-r   a-r   b-r
φ(cat)   λ^2   λ^3   λ^2   0     0     0     0     0
φ(car)   λ^2   0     0     0     0     λ^3   λ^2   0
φ(bat)   0     0     λ^2   λ^2   λ^3   0     0     0
φ(bar)   0     0     0     λ^2   0     0     λ^2   λ^3

Hence, the unnormalised kernel between car and cat is K(car, cat) = λ^4, whereas the normalised version is obtained as follows: K(car, car) = K(cat, cat) = 2λ^4 + λ^6, and hence K̂(car, cat) = λ^4 / (2λ^4 + λ^6) = 1 / (2 + λ^2). Note that in general a document will contain more than one word, but the mapping for the whole document is into one feature space. Punctuation is ignored, but spaces are retained. However, for interesting substring sizes (e.g. k > 4) direct computation of all the relevant features would be impractical even for moderately sized texts, and hence explicit use of such a representation would be impossible. But it turns out that a kernel using such features can be defined and calculated in a very efficient way by using dynamic programming techniques. We derive the kernel by starting from the features and working out their inner product. In this case there is no need to prove that it satisfies Mercer's conditions (symmetry and positive semi-definiteness), since they follow automatically from its definition as an inner product. This kernel is based on work [11, 4] mostly motivated by bioinformatics applications.
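The worked example above can be checked by direct enumeration of the feature map, which is feasible for small k. The following is our own illustrative sketch (function names are ours, not the paper's), computing φ_u(s) explicitly and summing products over all k-tuples:

```python
from itertools import combinations, product

def phi_u(s, u, lam):
    """phi_u(s): sum of lam**l(i) over all index tuples i with s[i] == u,
    where l(i) = i_k - i_1 + 1 (the span of the occurrence)."""
    total = 0.0
    for idx in combinations(range(len(s)), len(u)):
        if all(s[i] == c for i, c in zip(idx, u)):
            total += lam ** (idx[-1] - idx[0] + 1)
    return total

def k_brute(s, t, k, lam):
    """Unnormalised subsequence kernel by explicit feature enumeration.

    Exponential in k, so only usable for tiny alphabets and small k;
    the dynamic programme described below avoids this blow-up."""
    alphabet = sorted(set(s) | set(t))
    return sum(phi_u(s, u, lam) * phi_u(t, u, lam)
               for u in product(alphabet, repeat=k))
```

For k = 2 this reproduces the table above: k_brute("car", "cat", 2, lam) equals λ^4, and normalising by sqrt(K(car, car) K(cat, cat)) gives 1/(2 + λ^2).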
It maps strings to a feature vector indexed by all k-tuples of characters. A k-tuple will have a non-zero entry if it occurs as a subsequence anywhere (not necessarily contiguously) in the string. The weighting of the feature will be the sum, over the occurrences of the k-tuple, of a decaying factor of the length of the occurrence.

Definition 1 (String subsequence kernel) Let Σ be a finite alphabet. A string is a finite sequence of characters from Σ, including the empty sequence. For strings s, t, we denote by |s| the length of the string s = s_1...s_|s|, and by st the string obtained by concatenating the strings s and t. The string s[i:j] is the substring s_i...s_j of s. We say that u is a subsequence of s if there exist indices i = (i_1, ..., i_|u|), with 1 ≤ i_1 < ... < i_|u| ≤ |s|, such that u_j = s_{i_j} for j = 1, ..., |u|, or u = s[i] for short. The length l(i) of the subsequence in s is i_|u| - i_1 + 1. We denote by Σ^n the set of all finite strings of length n, and by Σ* the set of all strings. (1)

We now define the feature space F_n = R^{Σ^n}. The feature mapping φ for a string s is given by defining the u coordinate φ_u(s) for each u ∈ Σ^n. We define

φ_u(s) = Σ_{i: u = s[i]} λ^{l(i)},   (2)

for some λ < 1. These features measure the number of occurrences of subsequences in the string s, weighting them according to their lengths. Hence, the inner product of the feature vectors for two strings s and t gives a sum over all common subsequences, weighted according to their frequency of occurrence and lengths:

Σ_{u ∈ Σ^n} (φ_u(s) · φ_u(t)) = Σ_{u ∈ Σ^n} Σ_{i: u = s[i]} Σ_{j: u = t[j]} λ^{l(i)} λ^{l(j)} = Σ_{u ∈ Σ^n} Σ_{i: u = s[i]} Σ_{j: u = t[j]} λ^{l(i) + l(j)}.

In order to derive an effective procedure for computing this kernel, we introduce an additional function which will aid in defining a recursive computation. Let

K'_i(s, t) = Σ_{u ∈ Σ^i} Σ_{i: u = s[i]} Σ_{j: u = t[j]} λ^{|s| + |t| - i_1 - j_1 + 2},   for i = 1, ...
, n - 1; that is, counting the length to the end of the strings s and t instead of just l(i) and l(j). We can now define a recursive computation for K' and hence compute K_n.

Definition 2 (Recursive computation of the subsequence kernel)

K'_0(s, t) = 1, for all s, t,
K'_i(s, t) = 0, if min(|s|, |t|) < i,
K_i(s, t) = 0, if min(|s|, |t|) < i,
K'_i(sx, t) = λ K'_i(s, t) + Σ_{j: t_j = x} K'_{i-1}(s, t[1 : j - 1]) λ^{|t| - j + 2},   i = 1, ..., n - 1,
K_n(sx, t) = K_n(s, t) + Σ_{j: t_j = x} K'_{n-1}(s, t[1 : j - 1]) λ^2.

The correctness of this recursion follows from observing how the length of the strings has increased, incurring a factor of λ for each extra character, until the full length of n characters has been attained. If we wished to compute K_n(s, t) for a range of values of n, we would simply perform the computation of K'_i(s, t) up to one less than the largest n required, and then apply the last recursion for each K_n(s, t) that is needed, using the stored values of K'_i(s, t). We can of course create a kernel K(s, t) that combines the different K_n(s, t), giving different (positive) weightings for each n. Once we have created such a kernel, it is natural to normalise it to remove any bias introduced by document length. We can produce this effect by normalising the feature vectors in the feature space. Hence, we create a new embedding φ̂(s) = φ(s) / ||φ(s)||, which gives rise to the kernel

K̂(s, t) = ( φ(s)/||φ(s)|| · φ(t)/||φ(t)|| ) = K(s, t) / ( ||φ(s)|| ||φ(t)|| ) = K(s, t) / sqrt( K(s, s) K(t, t) ).

The normalised kernel introduced above was implemented using the recursive formulas described above. The next section gives some more details of the algorithmics, and this is followed by a section describing the results of applying the kernel in a Support Vector Machine for text classification.

3 Algorithmics

In this section we describe how special design techniques provide a significant speed-up of the procedure, by both accelerating the kernel evaluations and reducing their number.
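Concretely, the recursion of Definition 2 (in the faster form derived below) can be sketched as follows. This is our own illustrative implementation, not the authors' code; Kp[i][x][y] holds K'_i on the prefixes s[:x], t[:y], and the inner variable Kpp accumulates the auxiliary quantity K''_i incrementally over y:

```python
def ssk(s, t, n, lam):
    """Subsequence kernel K_n(s, t) via an O(n |s| |t|) dynamic programme."""
    ls, lt = len(s), len(t)
    # Kp[i][x][y] = K'_i(s[:x], t[:y]); K'_0 = 1 everywhere, K'_i = 0 at empty prefixes
    Kp = [[[0.0] * (lt + 1) for _ in range(ls + 1)] for _ in range(n)]
    for x in range(ls + 1):
        for y in range(lt + 1):
            Kp[0][x][y] = 1.0
    for i in range(1, n):
        for x in range(1, ls + 1):
            Kpp = 0.0  # K''_i(s[:x], t[:y]), built up over y
            for y in range(1, lt + 1):
                if s[x - 1] == t[y - 1]:
                    Kpp = lam * (Kpp + lam * Kp[i - 1][x - 1][y - 1])
                else:
                    Kpp = lam * Kpp
                Kp[i][x][y] = lam * Kp[i][x - 1][y] + Kpp
    # final step: K_n accumulates lam^2 K'_{n-1} over matching character pairs
    K = 0.0
    for x in range(1, ls + 1):
        for y in range(1, lt + 1):
            if s[x - 1] == t[y - 1]:
                K += lam * lam * Kp[n - 1][x - 1][y - 1]
    return K
```

With λ = 0.5 this reproduces the worked example of Section 2: ssk("car", "cat", 2, 0.5) equals λ^4 = 0.0625, and dividing by sqrt(K(s, s) K(t, t)) gives the normalised value 1/(2 + λ^2).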
We used a simple gradient-based implementation of SVMs (see [3]) with a fixed threshold. In order to deal with large datasets, we used a form of chunking: beginning with a very small subset of the data and gradually building up the size of the training set, while ensuring that only points which failed to meet margin 1 on the current hypothesis were included in the next chunk. Since each evaluation of the kernel function requires non-negligible computational resources, we designed the system to calculate only those entries of the kernel matrix that are actually required by the training algorithm. This can significantly reduce the training time, since only a relatively small part of the kernel matrix is actually used by our implementation of the SVM. Special care in the implementation of the kernel described in Definition 1 can significantly speed up its evaluation. As can be seen from the description of the recursion in Definition 2, its computation takes time proportional to n |s| |t|^2, as the outermost recursion is over the sequence length and, for each length and each additional character in s and t, a sum over the sequence t must be evaluated. The complexity of the computation can be reduced to O(n |s| |t|) by first evaluating

K''_i(sx, t) = Σ_{j: t_j = x} K'_{i-1}(s, t[1 : j - 1]) λ^{|t| - j + 2}

and observing that we can then evaluate K'_i(s, t) with the O(|s| |t|) recursion

K'_i(sx, t) = λ K'_i(s, t) + K''_i(sx, t).

Now observe that K''_i(sx, tu) = λ^{|u|} K''_i(sx, t), provided x does not occur in u, while

K''_i(sx, tx) = λ ( K''_i(sx, t) + λ K'_{i-1}(s, t) ).

These observations together give an O(|s| |t|) recursion for computing K''_i(s, t). Hence, we can evaluate the overall kernel in O(n |s| |t|) time.

4 Experimental Results

Our aim was to test the efficacy of this new approach to feature extraction for text categorization, and to compare it with a state-of-the-art system such as the one used in [6].
In particular, we wanted to see how performance is affected by the tunable parameter k (we have used the values 3, 5 and 6). As expected, using longer substrings in the comparison of two documents gives improved performance. We used the same dataset as that reported in [6], namely Reuters-21578 [7], as well as the Medline document collection of 1033 document abstracts from the National Library of Medicine. We performed all of our experiments on a subset of four categories: 'earn', 'acq', 'crude', and 'corn'. A confusion matrix can be used to summarize the performance of the classifier by the numbers of true/false positives/negatives (TP, FP, TN, FN). We define precision P = TP/(TP + FP) and recall R = TP/(TP + FN), and then the quantity F1 = 2PR/(P + R) to measure the performance of the classifier. We applied the two different kernels to a subset of Reuters with 380 training examples and 90 test examples. The only difference between the experiments was the kernel used. The numbers of positive examples in the training (test) set, out of 370 (90), were: earn 152 (40); acq 114 (25); crude 76 (15); corn 38 (10). The preliminary experiment used different values of k, in order to identify the optimal one, on the category 'earn'. The following experiments all used a sequence length of 5 for the string subsequence kernel. We set λ = 0.5. The results obtained are shown below, where the precision, recall and F1 values are reported for both kernels.

       F1     Precision  Recall  #SV
3 S-K  0.925  0.981      0.878   138
5 S-K  0.936  0.992      0.888   237
6 S-K  0.936  0.992      0.888   268
W-K    0.925  0.989      0.867   250

Table 1: F1, precision, recall and number of support vectors for the top Reuters category 'earn', averaged over 10 splits (n S-K = string kernel of length n, W-K = word kernel).

        5 S-K                          W-K
        F1     Precis.  Recall  #SV    F1     Precis.  Recall  #SV
earn    0.936  0.992    0.888   237    0.925  0.989    0.867   250
acq     0.867  0.914    0.828   269    0.802  0.843    0.768   276
crude   0.936  0.979    0.90    262    0.904  0.91     0.907   262
corn    0.779  0.886    0.7     231    0.762  0.833    0.71    264

Table 2: Precision, recall and F1 for four categories and the two kernels: word kernel (W-K) and subsequence kernel (5 S-K).

The results are better in one category, and similar or slightly better for the other categories. They certainly indicate that the new kernel can outperform the more classical approach, but equally the performance is not reliably better. The last table shows the results obtained for two categories of the Medline data, queries 20 and 23.

Query  Train/Test  3 S-K (#SV)  5 S-K (#SV)  6 S-K (#SV)  W-K (#SV)
#20    24/15       0.20 (101)   0.637 (295)  0.75 (386)   0.235 (598)
#23    22/15       0.534 (107)  0.409 (302)  0.75 (382)   0.636 (618)

Table 3: F1 and number of support vectors for the top two Medline queries.

5 Conclusions

The paper has presented a novel kernel for text analysis, and tested it on a categorization task; the kernel relies on evaluating an inner product in a very high-dimensional feature space. For a given sequence length k (k = 5 was used in the experiments reported), the features are indexed by all strings of length k. Direct computation of all the relevant features would be impractical even for moderately sized texts. The paper has presented a dynamic-programming-style computation for evaluating the kernel directly from the input sequences, without explicitly calculating the feature vectors. Further refinements of the algorithm have resulted in a practical alternative to the more standard word-feature-based kernel used in previous SVM applications to text classification [6]. We have presented an experimental comparison of the word feature kernel with our subsequence kernel on a benchmark dataset, with encouraging results. The results reported here are very preliminary, and many questions remain to be resolved.
First, more extensive experiments are required to gain a more reliable picture of the performance of the new kernel, including the effect of varying the subsequence length and the parameter λ. The evaluation of the new kernel is still relatively time-consuming, and more research is needed to investigate ways of expediting this phase of the computation.

References

[1] M. Aizerman, E. Braverman, and L. Rozonoer. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25:821-837, 1964.
[2] B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In D. Haussler, editor, Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, pages 144-152. ACM Press, 1992.
[3] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, 2000. www.support-vector.net.
[4] D. Haussler. Convolution kernels on discrete structures. Technical Report UCSC-CRL-99-10, University of California at Santa Cruz, Computer Science Department, July 1999.
[5] T. Joachims. Text categorization with support vector machines: Learning with many relevant features. Technical Report 23, LS VIII, University of Dortmund, 1997.
[6] T. Joachims. Text categorization with support vector machines. In Proceedings of the European Conference on Machine Learning (ECML), 1998.
[7] D. Lewis. Reuters-21578 collection. Technical report, available at http://www.research.att.com/~lewis/reuters21578.html, 1987.
[8] J. Shawe-Taylor and N. Cristianini. Margin distribution and soft margin. In Advances in Large Margin Classifiers, MIT Press, 2000.
[9] J. Shawe-Taylor, P. Bartlett, R. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 1998.
[10] V. Vapnik. Statistical Learning Theory. Wiley, 1998.
[11] C. Watkins. Dynamic alignment kernels.
Technical Report CSD-TR-98-11, Royal Holloway, University of London, Computer Science Department, January 1999.
2000
Learning Segmentation by Random Walks

Marina Meila, University of Washington, mmp@stat.washington.edu
Jianbo Shi, Carnegie Mellon University, jshi@cs.cmu.edu

Abstract

We present a new view of image segmentation by pairwise similarities. We interpret the similarities as edge flows in a Markov random walk and study the eigenvalues and eigenvectors of the walk's transition matrix. This interpretation shows that spectral methods for clustering and segmentation have a probabilistic foundation. In particular, we prove that the Normalized Cut method arises naturally from our framework. Finally, the framework provides a principled method for learning the similarity function as a combination of features.

1 Introduction

This paper focuses on pairwise (or similarity-based) clustering and image segmentation. In contrast to statistical clustering methods, which assume a probabilistic model that generates the observed data points (or pixels), pairwise clustering defines a similarity function between pairs of points and then formulates a criterion (e.g. maximum total intra-cluster similarity) that the clustering must optimize. The optimality criteria quantify the intuitive notion that points in a cluster (or pixels in a segment) are similar, whereas points in different clusters are dissimilar. An increasingly popular approach to similarity-based clustering and segmentation is by spectral methods. These methods use the eigenvalues and eigenvectors of a matrix constructed from the pairwise similarity function. Spectral methods are sometimes regarded as continuous approximations of previously formulated discrete graph-theoretical criteria, as in the image segmentation method of [9], or as in the web clustering method of [4, 2]. As demonstrated in [9, 4], these methods are capable of delivering impressive segmentation/clustering results using simple low-level features. In spite of their practical successes, spectral methods are still incompletely understood.
The main achievement of this work is to show that there is a simple probabilistic interpretation that can offer insights and serve as an analysis tool for all the spectral methods cited above. We view the pairwise similarities as edge flows in a Markov random walk and study the properties of the eigenvectors and eigenvalues of the resulting transition matrix. Using this view, we were able to show that several of the above methods are subsumed by the Normalized Cut (NCut) segmentation algorithm of [9], in a sense that will be described. Therefore, in the following, we will focus on the NCut algorithm and will adopt the terminology of image segmentation (i.e. the data points are pixels and the set of all pixels is the image), keeping in mind that all the results are also valid for similarity-based clustering. A probabilistic interpretation of NCut as a Markov random walk not only sheds new light on why and how spectral methods work in segmentation, but also offers a principled way of learning the similarity function. A segmented image can provide a "target" transition matrix, to which a learning algorithm matches in KL divergence the "learned" transition probabilities. The latter are output by a model as a function of a set of features measured from the training image. This is described in Section 5. Experimental results on learning to segment objects with smooth, rounded shapes are described in Section 6.

2 The Normalized Cut criterion and algorithm

Here and in the following, an image will be represented by a set of pixels I. A segmentation is a partitioning of I into mutually disjoint subsets. For each pair of pixels i, j ∈ I a similarity S_ij = S_ji ≥ 0 is given. In the NCut framework the similarities S_ij are viewed as weights on the edges ij of a graph G over I. The matrix S = [S_ij] plays the role of a "real-valued" adjacency matrix for G.
Let d_i = \sum_{j \in I} S_{ij}, called the degree of node i, and let the volume of a set A \subset I be vol A = \sum_{i \in A} d_i. The set of edges between A and its complement \bar{A} is an edge cut, or shortly a cut. The normalized cut (NCut) criterion of [9] is a graph-theoretical criterion for segmenting an image into two by minimizing NCut(A, \bar{A}) = \frac{cut(A,\bar{A})}{vol A} + \frac{cut(A,\bar{A})}{vol \bar{A}}, \quad cut(A,\bar{A}) = \sum_{i \in A, j \in \bar{A}} S_{ij}, (1) over all cuts (A, \bar{A}). Minimizing NCut means finding a cut of relatively small weight between two subsets with strong internal connections. In [9] it is shown that optimizing NCut is NP-hard. The NCut algorithm was introduced in [9] as a continuous approximation for solving the discrete minimum NCut problem by way of eigenvalues and eigenvectors. It uses the Laplacian matrix L = D - S, where D is a diagonal matrix formed with the degrees of the nodes. The algorithm consists of solving the generalized eigenvalue/eigenvector problem Lx = \lambda Dx. (2) The NCut algorithm focuses on the second smallest eigenvalue of (2) and its corresponding eigenvector, call them \lambda^L and x^L respectively. In [9] it is shown that when there is a partitioning (A, \bar{A}) of I such that x^L_i = \alpha for i \in A and x^L_i = \beta for i \in \bar{A}, (3) then (A, \bar{A}) is the optimal NCut and the value of the cut itself is NCut(A, \bar{A}) = \lambda^L. This result represents the basis of spectral segmentation by normalized cuts. One solves the generalized spectral problem (2), then finds a partitioning of the elements of x^L into two sets containing roughly equal values. The partitioning can be done by thresholding the elements. The partitioning of the eigenvector induces a partition on I, which is the desired segmentation. As presented above, the NCut algorithm lacks a satisfactory intuitive explanation. In particular, the NCut algorithm and criterion offer little intuition about (1) what causes x^L to be piecewise constant, (2) what happens when there are more than two segments, and (3) how the algorithm's performance degrades when x^L is not piecewise constant.
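The two-step procedure just described (solve the generalized eigenproblem (2), then threshold the second eigenvector x^L) can be sketched in a few lines of numpy; the 6-pixel similarity matrix below is invented for illustration (two tightly connected groups with weak cross links), not data from the paper:

```python
import numpy as np

# Toy similarity matrix: two 3-pixel groups with strong internal (1.0)
# and weak cross-group (0.1) similarities -- values invented for illustration.
S = np.array([
    [0.0, 1.0, 1.0, 0.1, 0.0, 0.0],
    [1.0, 0.0, 1.0, 0.0, 0.1, 0.0],
    [1.0, 1.0, 0.0, 0.0, 0.0, 0.1],
    [0.1, 0.0, 0.0, 0.0, 1.0, 1.0],
    [0.0, 0.1, 0.0, 1.0, 0.0, 1.0],
    [0.0, 0.0, 0.1, 1.0, 1.0, 0.0],
])
d = S.sum(axis=1)                      # node degrees d_i
L = np.diag(d) - S                     # Laplacian L = D - S
# Generalized problem L x = lambda D x is equivalent to an ordinary
# eigenproblem for D^{-1} L (D is diagonal and invertible here).
vals, vecs = np.linalg.eig(np.diag(1.0 / d) @ L)
order = np.argsort(vals.real)
x_L = vecs[:, order[1]].real           # second-smallest generalized eigenvector
segment = x_L > np.median(x_L)         # threshold to split into two segments
print(segment)
```

On this symmetric toy problem x^L is exactly piecewise constant, so thresholding recovers the two groups {0, 1, 2} and {3, 4, 5}.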
The random walk interpretation that we describe now will answer the first two questions, as well as give a better understanding of what spectral clustering is achieving. We shall not approach the third issue here; instead, we point to the results of [2], which apply to the NCut algorithm as well. 3 Markov walks and normalized cuts By "normalizing" the similarity matrix S one obtains the stochastic matrix P = D^{-1}S (4) whose row sums are all 1. As is known from the theory of Markov random walks, P_{ij} represents the probability of moving from node i to j in one step, given that we are in i. The eigenvalues of P are \lambda_1 = 1 \ge \lambda_2 \ge \dots \ge \lambda_n \ge -1; x^1, \dots, x^n are the eigenvectors. The first eigenvector of P is x^1 = \mathbf{1}, the vector whose elements are all 1s. W.l.o.g. we assume that no node has degree 0. Let us now examine the spectral problem for the matrix P, namely the solutions of the equation Px = \lambda x. (5) Proposition 1 If \lambda, x are solutions of (5) and P = D^{-1}S, then (1 - \lambda), x are solutions of (2). In other words, the NCut algorithm and the matrix P have the same eigenvectors; the generalized eigenvalues in (2) are identical to 1 minus the eigenvalues of P. Proposition 1 shows the equivalence between the spectral problem formulated by the NCut algorithm and the eigenvalues/eigenvectors of the stochastic matrix P. This also helps explain why the NCut algorithm uses the second smallest generalized eigenvector: the smallest eigenvector of (2) corresponds to the largest eigenvector of P, which in most cases of interest is equal to \mathbf{1}, thus containing no information. The NCut criterion can also be understood in this framework. First define \pi^\infty = [\pi^\infty_i]_{i \in I} by \pi^\infty_i = d_i / vol I. It is easy to verify that P^T \pi^\infty = \pi^\infty and thus that \pi^\infty is a stationary distribution of the Markov chain. If the chain is ergodic, which happens under mild conditions [1], then \pi^\infty is the only distribution over I with this property. Note also that the Markov chain is reversible, because \pi^\infty_i P_{ij} = \pi^\infty_j P_{ji} = S_{ij}/vol I.
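Proposition 1, the stationarity of \pi^\infty, and the reversibility identity can all be verified numerically on a random symmetric similarity matrix (a sanity-check sketch; the matrix is random, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((5, 5)) + 0.1     # strictly positive entries
S = A + A.T                      # symmetric similarity matrix
d = S.sum(axis=1)                # degrees d_i
P = S / d[:, None]               # P = D^{-1} S; each row sums to 1
lam, X = np.linalg.eig(P)
lam, X = lam.real, X.real        # spectrum is real (P is similar to a symmetric matrix)
L = np.diag(d) - S
for k in range(5):
    # Proposition 1: P x = lam x  implies  L x = (1 - lam) D x
    assert np.allclose(L @ X[:, k], (1 - lam[k]) * d * X[:, k])
pi = d / d.sum()                 # pi_i = d_i / vol(I)
assert np.allclose(pi @ P, pi)   # stationarity: P^T pi = pi
# reversibility: pi_i P_ij = pi_j P_ji = S_ij / vol(I)
assert np.allclose(pi[:, None] * P, S / d.sum())
print("Proposition 1 verified")
```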
Define P_{AB} = Pr[A \to B \mid A] as the probability of the random walk transitioning from set A \subset I to set B \subset I in one step, if the current state is in A and the random walk is started in its stationary distribution. From this it follows that P_{AB} = \frac{\sum_{i \in A, j \in B} S_{ij}}{vol(A)} (6) and therefore NCut(A, \bar{A}) = P_{A\bar{A}} + P_{\bar{A}A}. (7) If the NCut is small for a certain partition (A, \bar{A}), then the probabilities of evading set A, once the walk is in it, and of evading its complement \bar{A} are both small. Intuitively, we have partitioned the set I into two parts such that the random walk, once in one of the parts, tends to remain in it. The NCut is strongly related to the concept of low-conductivity sets in a Markov random walk. A low-conductivity set A is a subset of I such that h(A) = \max(P_{A\bar{A}}, P_{\bar{A}A}) is small. Such sets have been studied in spectral graph theory in connection with the mixing time of Markov random walks [1]. More recently, [2] uses them to define a new criterion for clustering. Not coincidentally, the heuristic analyzed there is strongly similar to the NCut algorithm. 4 Stochastic matrices with piecewise constant eigenvectors In the following we will use the transition matrix P to achieve a better understanding of the NCut algorithm. Recall that the NCut algorithm looks at the second "largest" eigenvector of P, denoted by x^2 and equal to x^L, in order to obtain a partitioning of I. We define a vector x to be piecewise constant relative to a partition \Delta = (A_1, A_2, \dots, A_k) of I iff x_i = x_j for i, j pixels in the same set A_s, s = 1, \dots, k. Since having piecewise constant eigenvectors is the ideal case for spectral segmentation, it is important to understand when the matrix P has this desired property. We study when the first k out of n eigenvectors are piecewise constant. Proposition 2 Let P be a matrix with rows and columns indexed by I that has independent eigenvectors. Let \Delta = (A_1, A_2, \dots, A_k) be a partition of I. Then, P has k eigenvectors that are piecewise constant w.r.t.
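Equations (6)-(7), relating NCut to the one-step escape probabilities P_{A\bar{A}} and P_{\bar{A}A}, can be checked directly; the similarity values below are random and chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.random((6, 6))
S = M + M.T                              # symmetric similarities on 6 pixels
d = S.sum(axis=1)                        # degrees
A = [0, 1, 2]; B = [3, 4, 5]             # a candidate partition (A, A-bar)
cut = S[np.ix_(A, B)].sum()              # total weight between A and A-bar
vol = lambda C: d[C].sum()               # vol(C) = sum of degrees in C
P_AB = cut / vol(A)                      # eq. (6): Pr[A -> A-bar | A]
P_BA = cut / vol(B)                      # Pr[A-bar -> A | A-bar]
ncut = cut / vol(A) + cut / vol(B)       # the NCut criterion, eq. (1)
assert np.isclose(ncut, P_AB + P_BA)     # eq. (7)
print(round(ncut, 4))
```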
\Delta and correspond to non-zero eigenvalues if and only if the sums P_{is'} = \sum_{j \in A_{s'}} P_{ij} are constant for all i \in A_s and all s, s' = 1, \dots, k, and the matrix R = [P_{ss'}]_{s,s'=1,\dots,k} (with P_{ss'} = \sum_{j \in A_{s'}} P_{ij}, i \in A_s) is non-singular. Lemma 3 If the matrix P of dimension n is of the form P = D^{-1}S with S symmetric and D non-singular, then P has n independent eigenvectors. We call a stochastic matrix P satisfying the conditions of Proposition 2 a block-stochastic matrix. Intuitively, Proposition 2 says that a stochastic matrix has piecewise constant eigenvectors if the underlying Markov chain can be aggregated into a Markov chain with state space \Delta = \{A_1, \dots, A_k\} and transition probability matrix R. This opens interesting connections between the field of spectral segmentation and the body of work on aggregability (or lumpability) [3] of Markov chains. The proof of Proposition 2 is provided in [5]. Proposition 2 shows that a much broader condition exists for the NCut algorithm to produce an exact segmentation/clustering solution. This condition shows that in fact spectral clustering is able to group pixels by the similarity of their transition probabilities to subsets of I. Experiments [9] show that NCut works well on many graphs that have a sparse, complex connection structure, supporting this result with practical evidence. Proposition 2 generalizes previous results of [10]. The NCut algorithm and criterion is one of the recently proposed spectral segmentation methods. In image segmentation, there are the algorithms of Perona and Freeman (PF) [7] and Scott and Longuet-Higgins (SLH) [8]. In web clustering, there are the algorithm of Kleinberg [4] (K), the long-known latent semantic analysis (LSA), and the variant proposed by Kannan, Vempala and Vetta (KVV) [2]. It is easy to show that each of the above ideal situations implies that the resulting stochastic matrix P satisfies the conditions of Proposition 2, and thus the NCut algorithm will also work exactly in these situations.
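Proposition 2 can be illustrated with a small block-stochastic matrix: the rows below have arbitrary within-block structure, but every row in a block sends the same total mass to each block (here the aggregated chain is R = [[0.8, 0.2], [0.3, 0.7]]); all numbers are made up for the example:

```python
import numpy as np

# Partition: block A1 = {0, 1, 2}, block A2 = {3, 4}.
P = np.array([
    [0.50, 0.20, 0.10, 0.10, 0.10],
    [0.10, 0.60, 0.10, 0.05, 0.15],
    [0.30, 0.30, 0.20, 0.20, 0.00],
    [0.10, 0.10, 0.10, 0.40, 0.30],
    [0.20, 0.05, 0.05, 0.50, 0.20],
])
assert np.allclose(P.sum(axis=1), 1)           # stochastic
R = np.array([[0.8, 0.2], [0.3, 0.7]])          # aggregated chain over the blocks
blocks = [[0, 1, 2], [3, 4]]
for s, rows in enumerate(blocks):               # block-stochasticity condition
    for i in rows:
        for t, cols in enumerate(blocks):
            assert np.isclose(P[i, cols].sum(), R[s, t])
# R has eigenvalues 1 and 0.5; Proposition 2 says the 0.5-eigenvector of P
# lifted from R is piecewise constant on the partition.
vals, vecs = np.linalg.eig(P)
k = np.argmin(np.abs(vals - 0.5))
x = vecs[:, k].real
assert np.isclose(vals[k].real, 0.5)
assert np.allclose(x[0], x[1]) and np.allclose(x[1], x[2])
assert np.allclose(x[3], x[4])
print("piecewise-constant eigenvector:", np.round(x, 3))
```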
In this sense NCut subsumes PF, SLH and (certain variants of) K. Moreover, none of the three other methods takes into account more information than NCut does. Another important aspect of a spectral clustering algorithm is robustness. Empirical results of [10] show that NCut is at least as robust as PF and SLH. 5 The framework for learning image segmentation The previous section stressed the connection between NCut as a criterion for image segmentation and searching for low-conductivity sets in a random walk. Here we will exploit this connection to develop a framework for supervised learning of image segmentation. Our goal is to obtain an algorithm that starts with a training set of segmented images and with a set of features and learns a function of the features that produces correct segmentations, as shown in figure 1. Figure 1: The general framework for learning image segmentation. For simplicity, assume the training set consists of one image only and its correct segmentation. From the latter it is easy to obtain "ideal" or target transition probabilities P^*_{ij} = 1/|A| if j \in A and P^*_{ij} = 0 if j \notin A, for i in segment A with |A| elements. (8) We also have a predefined set of features f^q_{ij}, q = 1, \dots, Q, which measure similarity between two pixels according to different criteria, and their values for I. The model is the part of the framework that is subject to learning. It takes the features f^q_{ij} as inputs and outputs the global similarity measure S_{ij}. For the present experiments we use the simple model S_{ij} = e^{\sum_q \lambda_q f^q_{ij}}. Intuitively, it represents a set of independent "experts", the factors e^{\lambda_q f^q_{ij}}, voting on the probability of a transition i \to j.
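The target transition probabilities of eq. (8) and the exponential similarity model can be written down directly in numpy; the 5-pixel labeling, the random features, and the weights are invented for illustration:

```python
import numpy as np

labels = np.array([0, 0, 0, 1, 1])           # a toy "human" segmentation
n = len(labels)
same = labels[:, None] == labels[None, :]    # same-segment indicator
seg_size = same.sum(axis=1)                  # |A| for each pixel's segment
P_star = np.where(same, 1.0 / seg_size[:, None], 0.0)   # eq. (8)
assert np.allclose(P_star.sum(axis=1), 1)

# Model: S_ij = exp(sum_q lambda_q f^q_ij), normalized to P = D^{-1} S.
rng = np.random.default_rng(1)
f = rng.random((2, n, n))                    # Q = 2 made-up features
lam = np.array([0.5, -1.0])                  # made-up weights lambda_q
S = np.exp(np.einsum('q,qij->ij', lam, f))
P = S / S.sum(axis=1, keepdims=True)
assert np.allclose(P.sum(axis=1), 1)
```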
In our framework, based on the fact that a segmentation is equivalent to a random walk, optimality is defined as the minimization of the conditional Kullback-Leibler (KL) divergence between the target probabilities P^*_{ij} and the transition probabilities P_{ij} obtained by normalizing S_{ij}. Because P^* is fixed, this minimization is equivalent to maximizing the cross entropy between the two (conditional) distributions, i.e. \max J, where J = \sum_{i \in I} \frac{1}{|I|} \sum_{j \in I} P^*_{ij} \log P_{ij}. (9) If we interpret the factor 1/|I| as a uniform distribution \pi^0 over states, then the criterion in (9) is equivalent to the KL divergence KL(P^*_{i \to j} \| P_{i \to j}) between two distributions over transitions, where P^*_{i \to j} = \pi^0_i P^*_{ij}. Maximizing J can be done via gradient ascent in the parameters \lambda. We obtain \frac{\partial J}{\partial \lambda_q} = \frac{1}{|I|} \Big( \sum_{ij} P^*_{ij} f^q_{ij} - \sum_{ij} P_{ij} f^q_{ij} \Big). (10) One can further note that the optimum of J corresponds to the solution of the following maximum entropy problem: \max_{P_{j|i}} H(j|i) \;\; s.t. \;\; \langle f^q_{ij} \rangle_{\pi^0 P_{j|i}} = \langle f^q_{ij} \rangle_{\pi^0 P^*_{j|i}} \;\; for \; q = 1, \dots, Q. (11) Since this is a convex optimization problem, it has a unique optimum. 6 Segmentation with shape and region information In this section, we exemplify our approach on a set of synthetic and real images, using features carrying contour and shape information. First we use a set of local filter banks as edge detectors. They capture both edge strength and orientation. From this basic information we construct two features: the intervening contour (IC) and the co-linearity/co-circularity (CL). Figure 2: Features for segmenting objects with smooth rounded shape. (a) The edge strength provides a cue of region boundary. It biases against random walks in a direction orthogonal to an edge. (b) Edge orientation provides a cue for the object's shape. The induced edge flow is used to bias the random walk along the edge, and transitions between co-circular edge flows are encouraged. (c) Edge flow for the bump in figure 3.
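Gradient ascent on J with the moment-matching gradient of eq. (10) can be sketched end-to-end; the segmentation and the features here are synthetic stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, Q = 6, 2
labels = np.array([0, 0, 0, 1, 1, 1])
same = labels[:, None] == labels[None, :]
P_star = same / same.sum(axis=1, keepdims=True)   # targets, eq. (8)
f = rng.random((Q, n, n))
f[0] = same.astype(float)        # make feature 0 informative about segments

def transition(lam):
    S = np.exp(np.einsum('q,qij->ij', lam, f))    # S_ij = exp(sum_q lam_q f^q_ij)
    return S / S.sum(axis=1, keepdims=True)       # P = D^{-1} S

def J(lam):                      # eq. (9): cross entropy with uniform pi^0
    return (P_star * np.log(transition(lam))).sum() / n

lam = np.zeros(Q)
for _ in range(300):             # gradient ascent using eq. (10)
    P = transition(lam)
    lam += 0.2 * np.einsum('ij,qij->q', P_star - P, f) / n

assert J(lam) > J(np.zeros(Q))   # the fit to the target transitions improved
print("learned lambda:", np.round(lam, 2))
```

Because J is concave in \lambda (it is a maximum-entropy dual), plain gradient ascent with a modest step size converges to the unique optimum noted after eq. (11).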
Figure 3: "Bump" images (a)-(f) with gradually reduced contrast are used for training. (g) shows the relation between the image edge contrast and the learned value of \lambda_{IC}, demonstrating automatic adaptation to the dynamic range of the IC. (h) shows the dependence on image contrast of \lambda_{CL}. At low image contrast, CL becomes more important. The first feature is based on the assumption that if two pixels are separated by an edge, then they are less likely to belong together (figure 2). In the random walk interpretation, we are less likely to walk in a direction perpendicular to an edge. The intervening contour [6] is computed by f^{IC}_{ij} = \max_{k \in l(i,j)} Edge(k), where l(i,j) is a line connecting pixels i and j, and Edge(k) is the edge strength at pixel k. While the IC provides a cue for region boundaries, the edge orientation provides a cue for object shape. Human visual studies suggest that the shape of an object's boundary has a strong influence on how objects are grouped. For example, a convex region is more likely to be perceived as a single object. Thinking of segmentation as a random walk provides a natural way of exploiting this knowledge. Each discrete edge in the image induces an edge flow in its neighborhood. To favor convex regions, we can further bias the random walk by enhancing the transition probabilities between pixels with co-circular edge flow. Thus we define the CL feature as f^{CL}_{ij} = \frac{2 - \cos(2\alpha_i) - \cos(2\alpha_j) + 2 - \cos(2(\alpha_i + \alpha_j))}{(1 - \cos(2\alpha_i))(1 - \cos(2\alpha_j))}, where \alpha_i, \alpha_j are defined as in figure 2(b). For training, we have constructed the set of "bump" images with varying image contrast, as shown in figure 3. Figure 4 shows segmentation results using the weights trained with the "bump" image in figure 3(c).
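The intervening-contour feature is simple to compute; a 1-D row of pixels with one strong edge (edge strengths invented) illustrates it:

```python
import numpy as np

edge = np.array([0., 0., 0., 1., 0., 0., 0.])   # edge strength per pixel
n = len(edge)
IC = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        lo, hi = min(i, j), max(i, j)
        # f^IC_ij = max over pixels on the line l(i, j) of the edge strength
        IC[i, j] = edge[lo:hi + 1].max()
# Pixels on the same side of the edge: low IC; pixels across it: high IC.
assert IC[0, 2] == 0.0 and IC[4, 6] == 0.0
assert IC[2, 5] == 1.0
# A similarity of the form exp(-lambda * IC), with a learned weight
# lambda > 0, would then discourage the walk from crossing the edge.
```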
Figure 4: Testing on real images: (a) test images; (b) Canny edges computed with the Matlab "edge" function; (c) NCut segmentation computed using the weights learned on the image in figure 3(c). The system learns to prefer contiguous groups with smooth boundaries. The Canny edge map indicates that simply looking for edges is likely to give brittle and less meaningful segmentations. 7 Conclusion The main contribution of our paper is showing that spectral segmentation methods have a probabilistic foundation. In the framework of random walks, we give a new interpretation to the NCut criterion and algorithm and a better understanding of its motivation. The probabilistic framework also allows us to define a principled criterion for supervised learning of image segmentation. Acknowledgment: J.S. is supported by DARPA N00014-00-1-0915, NSF IRI-9817496. References [1] Fan R. K. Chung. Spectral Graph Theory. American Mathematical Society, 1997. [2] Ravi Kannan, Santosh Vempala, and Adrian Vetta. On clusterings: good, bad and spectral. In Proc. 41st Symposium on the Foundations of Computer Science, 2000. [3] J. G. Kemeny and J. L. Snell. Finite Markov Chains. Van Nostrand, New York, 1960. [4] Jon M. Kleinberg. Authoritative sources in a hyperlinked environment. Technical report, IBM Research Division, Almaden Research Center, 1997. [5] M. Meila and J. Shi. A random walks view of spectral segmentation. In Proc. International Workshop on AI and Statistics (AISTATS), 2001. [6] Jitendra Malik, Serge Belongie, Thomas Leung, and Jianbo Shi. Contour and texture analysis for image segmentation. International Journal of Computer Vision, 2000. [7] P. Perona and W. Freeman. A factorization approach to grouping. In European Conference on Computer Vision, 1998. [8] G. L. Scott and H. C. Longuet-Higgins. Feature grouping by relocalisation of eigenvectors of the proximity matrix. In Proc. British Machine Vision Conference, 1990. [9] J. Shi and J. Malik. Normalized cuts and image segmentation.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000. An earlier version appeared in CVPR 1997. [10] Y. Weiss. Segmentation using eigenvectors: a unifying view. In International Conference on Computer Vision, 1999.
2000
6
1,861
Model Complexity, Goodness of Fit and Diminishing Returns Igor V. Cadez Information and Computer Science University of California Irvine, CA 92697-3425, U.S.A. Padhraic Smyth Information and Computer Science University of California Irvine, CA 92697-3425, U.S.A. Abstract We investigate a general characteristic of the trade-off in learning problems between goodness-of-fit and model complexity. Specifically, we characterize a general class of learning problems where the goodness-of-fit function can be shown to be convex, within first order, as a function of model complexity. This general property of "diminishing returns" is illustrated on a number of real data sets and learning problems, including finite mixture modeling and multivariate linear regression. 1 Introduction, Motivation, and Related Work Assume we have a data set D = {x_1, x_2, \dots, x_n}, where the x_i could be vectors, sequences, etc. We consider modeling the data set D using models indexed by a complexity index k, 1 \le k \le k_{max}. For example, the models could be finite mixture probability density functions (PDFs) for vectors x_i, where model complexity is indexed by the number of components k in the mixture. Alternatively, the modeling task could be to fit a conditional regression model y = g(z_k) + e, where now y is one of the variables in the vector x and z_k is some subset of size k of the remaining components in the x vector. Such learning tasks can typically be characterized by the existence of a model and a loss function. A fitted model of complexity k is a function of the data points D and depends on a specific set of fitted parameters \theta. The loss function (goodness-of-fit) is a functional of the model and maps each specific model to a scalar used to evaluate the model, e.g., likelihood for density estimation or sum-of-squares for regression.
Figure 1 illustrates a typical empirical curve of loss function versus complexity, for mixtures of Markov models fitted to a large data set of 900,000 sequences. The complexity k is the number of Markov models being used in the mixture (see Cadez et al. (2000) for further details on the model and the data set). The empirical curve has a distinctly concave appearance, with large relative gains in fit for low-complexity models and much more modest relative gains for high-complexity models. A natural question is whether this concavity characteristic can be viewed as a general phenomenon in learning, and under what assumptions on model classes and loss functions the concavity can be shown to hold. Figure 1: Log-likelihood scores for a Markov mixtures data set, plotted against the number of mixture components k. The goal of this paper is to illustrate that it is in fact a natural characteristic for a broad range of problems in mixture modeling and linear regression. We note, of course, that using goodness-of-fit alone will lead to the selection of the most complex model under consideration and will not in general select the model which generalizes best to new data. Nonetheless, our primary focus of interest in this paper is how goodness-of-fit loss functions (such as likelihood and squared error, defined on the training data D) behave in general as a function of model complexity k. Our concavity results have a number of interesting implications. For example, for model selection methods which add a penalty term to the goodness-of-fit (e.g., BIC), the resulting score function will be unimodal as a function of complexity k, within first order. Li and Barron (1999) have shown that for finite mixture models the expected value of the log-likelihood for any k is bounded below by a function of the form -C/k, where C is a constant which is independent of k.
The results presented here are complementary in the sense that we show that the actual maximizing log-likelihood itself is concave to first order as a function of k. Furthermore, we obtain a more general principle of "diminishing returns," covering both finite mixtures and subset selection in regression. 2 Notation We define y = y(x) as a scalar function of x, namely a prediction at x. In linear regression y = y(x) is a linear function of the components of x, while in density estimation y = y(x) is the value of the density function at x. Although the goals of regression and density estimation are quite different, we can view them both as simply techniques for approximating an unknown true function for different values of x. We denote the prediction of a model of complexity k as y_k(x|\theta), where the subscript indicates the model complexity and \theta is the associated set of fitted parameters. Since different choices of parameters in general yield different models, we will typically abbreviate the notation somewhat and use different letters for different parameterizations of the same functional form (i.e., the same complexity); e.g., we may use y_k(x), g_k(x), h_k(x) to refer to models of complexity k instead of specifying y_k(x|\theta_1), y_k(x|\theta_2), y_k(x|\theta_3), etc. Furthermore, since all models under discussion are functions of x, we sometimes omit the explicit dependence on x and use the compact notation y_k, g_k, h_k. We focus on classes of models that can be characterized by more complex models having a linear dependence on simpler models within the class. More formally, any model of complexity k can be decomposed as: y_k = a_1 g_1 + a_2 h_1 + \dots + a_k w_1. (1) In PDF mixture modeling we have y_k = p(x) and each model g_1, h_1, \dots, w_1 is a basis PDF (e.g., a single Gaussian) but with different parameters. In multivariate linear regression each model g_1, h_1, \dots
, w_1 represents a regression on a single variable; e.g., g_1(x) above is g_1(x) = \gamma_p x_p, where x_p is the p-th variable in the set and \gamma_p is the corresponding coefficient one would obtain if regressing on x_p alone. One of g_1, h_1, \dots, w_1 can be a dummy constant variable to account for the intercept term. Note that the total parameters for the model y_k in both cases can be viewed as consisting of both the mixing proportions (the a's) and the parameters for each individual component model. The loss function is a functional on models and we write it as E(y_k). For simplicity, we use the notation E^*_k to specify the value of the loss function for the best k-component model. This way, E^*_k \le E(y_k) for any model y_k.¹ For example, the loss function in PDF mixture modeling is the negative log-likelihood. In linear regression we use empirical mean squared error (MSE) as the loss function. The loss functions of general interest in this context are those that decompose into a sum of functions over data points in the data set D (equivalently, an independence assumption in a likelihood framework), i.e., E(y_k) = \sum_{i=1}^n f(y_k(x_i)). (2) For example, in PDF mixture modeling f(y_k) = -\ln y_k, while in regression modeling f(y_k) = (y - y_k)^2, where y is a known target value. 3 Necessary Conditions on Models and Loss Functions We consider models that satisfy several conditions that are commonly met in real data analysis applications and are satisfied by both PDF mixture models and linear regression models: 1. As k increases we have a nested model class, i.e., each model of complexity k contains each model of complexity k' < k as a special case (i.e., it reduces to a simpler model for a special choice of the parameters). 2. Any two models of complexities k_1 and k_2 can be combined as a weighted sum in any proportion to yield a valid model of complexity k = k_1 + k_2. 3.
Each model of complexity k = k_1 + k_2 can be decomposed into a weighted sum of two valid models of complexities k_1 and k_2 respectively, for each valid choice of k_1 and k_2. The first condition guarantees that the loss function is a non-increasing function of k for optimal models of complexity k (in the sense of minimizing the loss function E), the second condition prevents artificial correlation between the component models, while the third condition guarantees that all components are of equal expressive power. As an example, the standard Gaussian mixture model satisfies all three properties, whether the covariance matrices are unconstrained or individually constrained. As a counter-example, a Gaussian mixture model where the covariance matrices are constrained to be equal across all components does not satisfy the second property. ¹We assume the learning task consists of minimization of the loss function. If maximization is more appropriate, we can just consider minimization of the negative of the loss function. 4 Theoretical Results on Loss Function Convexity We formulate and prove the following theorem: Theorem 1: In a learning problem that satisfies the properties from Section 3, the loss function is first-order convex in model complexity k, meaning that E^*_{k+1} - 2E^*_k + E^*_{k-1} \ge 0 within first order (as defined in the proof). The quantities E^*_k and E^*_{k\pm 1} are the values of the loss function for the best k- and (k\pm 1)-component models. Proof: In the first part of the proof we analyze a general difference of loss functions and write it in a convenient form. Consider two arbitrary models, g and h, and the corresponding loss functions E(g) and E(h) (g and h need not have the same complexity). The difference in loss functions can be expressed as E(g) - E(h) = \sum_{i=1}^n \{ f[g(x_i)] - f[h(x_i)] \} = \sum_{i=1}^n \{ f[h(x_i)(1 + \delta_{g,h}(x_i))] - f[h(x_i)] \} = \alpha \sum_{i=1}^n h(x_i) f'(h(x_i)) \delta_{g,h}(x_i), (3) where the last equality comes from a first-order Taylor series expansion around each \delta_{g,h}(x_i) = 0, \alpha is an unknown constant of proportionality (to make the equation exact), and \delta_{g,h}(x) = \frac{g(x) - h(x)}{h(x)} (4) represents the relative difference of models g and h at point x. For example, Equation 3 reduces to a first-order Taylor series approximation for \alpha = 1. If f(y) is a convex function we also have E(g) - E(h) \ge \sum_{i=1}^n h(x_i) f'(h(x_i)) \delta_{g,h}(x_i), (5) since the remainder in the Taylor series expansion, R_2 = \frac{1}{2} f''(h(1 + \vartheta\delta))\delta^2, is \ge 0. In the second part of the proof we use Equation 5 to derive an appropriate condition on loss functions. Consider the best k- and (k\pm 1)-component models and the difference of the corresponding loss functions, E^*_{k+1} - 2E^*_k + E^*_{k-1}, which we can write using the notation of Equations 3 and 5 (since we consider the convex functions f(y) = -\ln y for PDF modeling and f(y) = (y - y_i)^2 for best-subset regression) as E^*_{k+1} - 2E^*_k + E^*_{k-1} = (E^*_{k+1} - E^*_k) + (E^*_{k-1} - E^*_k) \ge \sum_{i=1}^n y^*_k(x_i) f'(y^*_k(x_i)) \delta_{y^*_{k+1},y^*_k}(x_i) + \sum_{i=1}^n y^*_k(x_i) f'(y^*_k(x_i)) \delta_{y^*_{k-1},y^*_k}(x_i) = \sum_{i=1}^n y^*_k(x_i) f'(y^*_k(x_i)) [\delta_{y^*_{k+1},y^*_k}(x_i) + \delta_{y^*_{k-1},y^*_k}(x_i)]. (6) According to the requirements on models in Section 3, the best (k+1)-component model can be decomposed as y^*_{k+1} = (1-\epsilon) g_k + \epsilon g_1, where g_k is a k-component model and g_1 is a 1-component model. Similarly, an artificial model can be constructed from the best (k-1)-component model: e_k = (1-\epsilon) y^*_{k-1} + \epsilon g_1. Upon subtracting y^*_k from each of these equations and dividing by y^*_k, using the notation of Equation 4, we get \delta_{y^*_{k+1},y^*_k} = (1-\epsilon)\delta_{g_k,y^*_k} + \epsilon\delta_{g_1,y^*_k} and \delta_{e_k,y^*_k} = (1-\epsilon)\delta_{y^*_{k-1},y^*_k} + \epsilon\delta_{g_1,y^*_k}, which upon subtraction and rearrangement of terms yields \delta_{y^*_{k+1},y^*_k} + \delta_{y^*_{k-1},y^*_k} = (1-\epsilon)\delta_{g_k,y^*_k} + \delta_{e_k,y^*_k} + \epsilon\delta_{y^*_{k-1},y^*_k}. (7) If we evaluate this equation at each of the data points x_i and substitute the result back into Equation 6, we get E^*_{k+1} - 2E^*_k + E^*_{k-1} \ge \sum_{i=1}^n y^*_k(x_i) f'(y^*_k(x_i)) [(1-\epsilon)\delta_{g_k,y^*_k}(x_i) + \delta_{e_k,y^*_k}(x_i) + \epsilon\delta_{y^*_{k-1},y^*_k}(x_i)]. (8) In the third part of the proof we analyze each of the terms in Equation 8 using Equation 3. Consider the first term, \Delta_{g_k,y^*_k} = \sum_{i=1}^n y^*_k(x_i) f'(y^*_k(x_i)) \delta_{g_k,y^*_k}(x_i), (9) which depends on the relative difference of models g_k and y^*_k at each of the data points x_i. According to Equation 3, for small \delta_{g_k,y^*_k}(x_i) (which is presumably true), we can set \alpha \approx 1 to get a first-order Taylor expansion. Since y^*_k is the best k-component model, we have E(g_k) \ge E(y^*_k) = E^*_k and consequently E(g_k) - E(y^*_k) = \alpha \Delta_{g_k,y^*_k} \approx \Delta_{g_k,y^*_k} \ge 0. (10) Note that in order for the last inequality to hold, we do not require that \alpha \approx 1, but only that \alpha \ge 0, (11) which is a weaker condition that we refer to as the first-order approximation. In other words, we only require that the sign is preserved when making the Taylor expansion, while the actual value need not be very accurate. Similarly, each of the three terms on the right-hand side of Equation 8 is first-order positive, since E(y^*_k) \le E(g_k), E(e_k), E(y^*_{k-1}). This shows that E^*_{k+1} - 2E^*_k + E^*_{k-1} \ge 0 within first order, concluding the proof. 5 Convexity in Common Learning Problems In this section we specialize Theorem 1 to several well-known learning situations. Each proof consists of merely selecting the appropriate loss function E(y) and model family y. 5.1 Concavity of Mixture Model Log-Likelihoods Theorem 2: In mixture model learning, using log-likelihood as the loss function and unconstrained mixture components, the in-sample log-likelihood is a first-order concave function of the complexity k. Proof: By using f(y) = -\ln y in Theorem 1, the loss function E(y) becomes the negative of the in-sample log-likelihood; hence it is a first-order convex function of complexity k, i.e., the log-likelihood is first-order concave.
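The closure conditions of Section 3, on which the proof relies, are easy to sanity-check numerically for Gaussian mixtures: a convex combination of two valid mixture densities is again a valid mixture density (property 2 and the decomposition (1)). A toy check, with all component parameters invented:

```python
import numpy as np

def gauss(x, mu, sigma):
    """Univariate Gaussian PDF evaluated on a grid."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
g1 = gauss(x, -1.0, 1.0)                                    # a k1 = 1 model
h2 = 0.5 * gauss(x, 2.0, 0.5) + 0.5 * gauss(x, 0.0, 1.5)    # a k2 = 2 model
# Property 2: a weighted sum of the two is a valid k = 3 mixture model.
y3 = 0.4 * g1 + 0.6 * h2
assert np.all(y3 >= 0)                      # non-negative everywhere
assert abs((y3 * dx).sum() - 1) < 1e-3      # still integrates to ~1
```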
Corollary 1: If a linear or convex penalty term in k is subtracted from the in-sample log-likelihood in Theorem 2, using the mixture models as defined in Theorem 2, then the penalized likelihood can have at most one maximum, to within first order. The BIC criterion, for example, satisfies this condition. 5.2 Convexity of Mean-Square-Error for Subset Selection in Linear Regression Theorem 3: In linear regression learning, where y_k represents the best linear regression defined over all possible subsets of k regression variables, the mean squared error (MSE) is first-order convex as a function of the complexity k. Proof: We use f(y_k(x_i)) = (y_i - y_k(x_i))^2, which is a convex function of y_k. The corresponding loss function E(y_k) becomes the mean-square-error and is first-order convex as a function of the complexity k by the proof of Theorem 1. Corollary 2: If a concave or linear penalty term in k is added to the mean squared error as defined in Theorem 3, then the resulting penalized mean-square-error can have at most one minimum, to within first order. Such penalty terms include Mallows' Cp criterion, AIC, BIC, predicted squared error, etc. (e.g., see Bishop (1995)). 6 Experimental Results In this section we demonstrate empirical evidence of the approximate concavity property on three different data sets, with model families and loss functions which satisfy the assumptions stated earlier: 1. Mixtures of Gaussians: 3962 data points in 2 dimensions, representing the first two principal components of historical geopotential data from upper-atmosphere data records, were fit with a mixture of k Gaussian components, k varying from 1 to 20 (see Smyth, Ide, and Ghil (1999) for more discussion of this data). Figure 2(a) illustrates that the log-likelihood is approximately concave as a function of k. Note that it is not completely concave.
This could be a result of either local maxima in the fitting process (the maximum likelihood solutions in the interior of parameter space were selected as the best obtained by EM from 10 different randomly chosen initial conditions), or may indicate that concavity cannot be proven beyond a first-order characterization in the general case. 2. Mixtures of Markov Chains: Page-request sequences logged at the msnbc.com Web site over a 24-hour period from over 900,000 individuals were fit with mixtures of first-order Markov chains (see Cadez et al. (2000) for further details). Figure 1 again clearly shows a concave characteristic for the log-likelihood as a function of k, the number of Markov components in the model. 3. Subset Selection in Linear Regression: Autoregressive (AR) linear models were fit (closed-form solutions for the optimal model parameters) to a monthly financial time series with 307 observations, for all possible combinations of lags (all possible subsets) from order k = 1 to order k = 12. Figure 2: (a) In-sample log-likelihood for mixture modeling of the atmospheric data set, as a function of the number of mixture components; (b) mean-squared error for regression using the financial data set, as a function of the number of regression variables. For example, the k = 1 model represents the best model with a single predictor from the previous 12 months, not necessarily the AR(1) model. Again the goodness-of-fit curve is almost convex in k (Figure 2(b)), except at k = 9 where there is a slight non-convexity: this could again be either a numerical estimation effect or a fundamental characteristic indicating that convexity is only true to first order. 7 Discussion and Conclusions Space does not permit a full discussion of the various implications of the results derived here. The main implication is that for at least two common learning scenarios the maximizing/minimizing value of the loss function is strongly constrained as model complexity is varied.
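The best-subset regression experiment can be mimicked on synthetic data (sizes and coefficients below are invented); exhaustive search over all variable subsets yields the optimal in-sample MSE E^*_k for each k, which decreases with diminishing returns:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, p = 400, 6
X = rng.normal(size=(n, p))
beta = np.array([3.0, 2.0, 1.0, 0.5, 0.2, 0.1])   # decreasing signal strengths
y = X @ beta + rng.normal(size=n)

def best_mse(k):
    """In-sample MSE of the best subset of k regressors (exhaustive search)."""
    best = np.inf
    for idx in combinations(range(p), k):
        Z = X[:, idx]
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        best = min(best, ((y - Z @ coef) ** 2).mean())
    return best

E = [best_mse(k) for k in range(1, p + 1)]
# Nested models: the curve is non-increasing exactly; convexity holds
# only "within first order", so small violations are possible.
assert all(E[k] >= E[k + 1] - 1e-9 for k in range(p - 1))
second_diff = [E[k - 1] - 2 * E[k] + E[k + 1] for k in range(1, p - 1)]
print([round(d, 3) for d in second_diff])
```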
Thus, for example, when performing model selection using penalized goodness-of-fit (as in the corollaries above), variants of binary search may be quite useful in problems where k is very large (in the mixtures of Markov chains above it is not necessary to fit the model for all values of k, i.e., we can simply interpolate to within first order). Extensions to model selection using loss functions defined on out-of-sample test data sets can also be derived, and can be carried over under appropriate assumptions to cross-validation. Note that the results described here do not have an obvious extension to non-linear models (such as feed-forward neural networks) or to loss functions such as the 0/1 loss for classification.

References

Bishop, C., Neural Networks for Pattern Recognition, Oxford University Press, 1995, pp. 376-377.
Cadez, I., D. Heckerman, C. Meek, P. Smyth, and S. White, 'Visualization of navigation patterns on a Web site using model-based clustering,' Technical Report MS-TR-00-18, Microsoft Research, Redmond, WA.
Li, Jonathan Q., and Barron, Andrew A., 'Mixture density estimation,' presented at NIPS 99.
Smyth, P., K. Ide, and M. Ghil, 'Multiple regimes in Northern Hemisphere height fields via mixture model clustering,' Journal of the Atmospheric Sciences, vol. 56, no. 21, 3704-3723, 1999.
Fast Training of Support Vector Classifiers

F. Pérez-Cruz†, P. L. Alarcón-Diana†, A. Navia-Vázquez‡ and A. Artés-Rodríguez‡

†Dpto. Teoría de la Señal y Com., Escuela Politécnica, Universidad de Alcalá, 28871 Alcalá de Henares (Madrid), Spain. e-mail: fernando@tsc.uc3m.es
‡Dpto. Tecnologías de las Comunicaciones, Escuela Politécnica Superior, Universidad Carlos III de Madrid, Avda. Universidad 30, 28911 Leganés (Madrid), Spain.

Abstract

In this communication we present a new algorithm for solving Support Vector Classifiers (SVC) with large training data sets. The new algorithm is based on an Iterative Re-Weighted Least Squares procedure which is used to optimize the SVC. Moreover, a novel sample selection strategy for the working set is presented, which randomly chooses the working set among the training samples that do not fulfill the stopping criteria. The validity of both proposals, the optimization procedure and the sample selection strategy, is shown by means of computer experiments using well-known data sets.

1 INTRODUCTION

The Support Vector Classifier (SVC) is a powerful tool for solving pattern recognition problems [13, 14], in such a way that the solution is completely described as a linear combination of several training samples, named the Support Vectors. The training procedure for solving the SVC is usually based on Quadratic Programming (QP), which presents some inherent limitations, mainly the computational complexity and memory requirements for large training data sets. This problem is typically avoided by dividing the QP problem into sets of smaller ones [6, 1, 7, 11], which are iteratively solved in order to reach the SVC solution for the whole set of training samples. These schemes rely on an optimizing engine, QP, and on the sample selection strategy for each sub-problem, in order to obtain a fast solution for the SVC.
An Iterative Re-Weighted Least Squares (IRWLS) procedure has already been proposed as an alternative solver for the SVC [10] and the Support Vector Regressor [9], being computationally efficient in absolute terms. In this communication, we show that the IRWLS algorithm can replace the QP one in any chunking scheme in order to find the SVC solution for large training data sets. Moreover, we consider that the strategy used to decide which training samples must join the working set is critical for reducing the total number of iterations needed to attain the SVC solution, and consequently the runtime complexity. To address this issue, the computer program svcradit has been developed to solve the SVC for large training data sets using the IRWLS procedure and fixed-size working sets.

The paper is organized as follows. In Section 2, we start by giving a summary of the IRWLS procedure for the SVC and explain how it can be incorporated into a chunking scheme to obtain an overall implementation which efficiently deals with large training data sets. We present in Section 3 a novel strategy to make up the working set. Section 4 shows the capabilities of the new implementation, compared with the fastest available SVC implementation, SVMlight [6]. We end with some concluding remarks.

2 IRWLS-SVC

In order to solve classification problems, the SVC has to minimize

$L_p = \frac{1}{2}\|w\|^2 + C\sum_i e_i - \sum_i \mu_i e_i - \sum_i \alpha_i\big(y_i(\phi(x_i)^T w + b) - 1 + e_i\big)$   (1)

with respect to $w$, $b$ and $e_i$, and maximize it with respect to $\alpha_i$ and $\mu_i$, subject to $\alpha_i, \mu_i \ge 0$, where $\phi(\cdot)$ is a nonlinear transformation (usually unknown) to a higher-dimensional space and $C$ is a penalization factor. The solution to (1) is defined by the Karush-Kuhn-Tucker (KKT) conditions [2]. For further details on the SVC, one can refer to the tutorial survey by Burges [2] and to the work of Vapnik [13, 14].
In order to obtain an IRWLS procedure, we first need to rearrange (1) so that the terms depending on $e_i$ are removed, because at the solution $C - \alpha_i - \mu_i = 0\ \forall i$ (one of the KKT conditions [2]) must hold:

$L_p = \frac{1}{2}\|w\|^2 + \sum_i \alpha_i\big(1 - y_i(\phi^T(x_i)w + b)\big) = \frac{1}{2}\|w\|^2 + \frac{1}{2}\sum_i a_i e_i^2$   (2)

where $e_i = y_i - (\phi^T(x_i)w + b)$ and $a_i = 2\alpha_i/(e_i y_i)$. The weighted least squares nature of (2) can be understood if $e_i$ is seen as the error on each sample and $a_i$ as its associated weight, with $\frac{1}{2}\|w\|^2$ acting as a regularizing functional. The minimization of (2) cannot be accomplished in a single step because $a_i = a_i(e_i)$, so we need to apply an IRWLS procedure [4], summarized in three steps: 1. Considering the $a_i$ fixed, minimize (2). 2. Recalculate $a_i$ from the solution of step 1. 3. Repeat until convergence.

In order to work with Reproducing Kernel Hilbert Spaces (RKHS), as the QP procedure does, we require that $w = \sum_i \beta_i y_i \phi(x_i)$ and, in order to obtain a non-zero $b$, that $\sum_i \beta_i y_i = 0$. Substituting these into (2), its minimum with respect to $\beta_i$ and $b$ for a fixed set of $a_i$ is found by solving the following linear equation system¹:

$\begin{bmatrix} H + D_a^{-1} & y \\ y^T & 0 \end{bmatrix} \begin{bmatrix} \beta \\ b \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$   (3)

where

$y = [y_1, y_2, \ldots, y_n]^T$   (4)
$(H)_{ij} = y_i y_j \phi^T(x_i)\phi(x_j) = y_i y_j K(x_i, x_j) \quad \forall i,j = 1,\ldots,n$   (5)
$(D_a)_{ij} = a_i\,\delta[i-j] \quad \forall i,j = 1,\ldots,n$   (6)
$\beta = [\beta_1, \beta_2, \ldots, \beta_n]^T$   (7)

and $\delta[\cdot]$ is the discrete impulse function. Finally, the dependency of $a_i$ upon the Lagrange multipliers is eliminated using the KKT conditions, obtaining

$a_i = \begin{cases} 0, & e_i y_i < 0 \\ \dfrac{2C}{e_i y_i}, & e_i y_i > 0 \end{cases}$   (8)

¹The detailed description of the steps needed to obtain (3) from (2) can be found in [10].

2.1 IRWLS ALGORITHMIC IMPLEMENTATION

The SVC solution with the IRWLS procedure can be simplified by dividing the training samples into three sets. The first set, $S_1$, contains the training samples verifying $0 < \beta_i < C$, which have to be determined by solving (3). The second one, $S_2$, includes every training sample with $\beta_i = 0$. And the last one, $S_3$, is made up of the training samples with $\beta_i = C$. This division into sets is fully justified in [10].
The IRWLS-SVC algorithm is shown in Table 1.

Table 1: IRWLS-SVC algorithm.

0. Initialization: $S_1$ contains every training sample, $S_2 = \emptyset$ and $S_3 = \emptyset$. Compute $H$. Set $e_a = y$, $\beta_a = 0$, $b_a = 0$, $G_{13} = G_{in}$, $a = 1$ and $G_{b3} = G_{bin}$.
1. Solve
$\begin{bmatrix} (H)_{S_1,S_1} + (D_a^{-1})_{S_1} & (y)_{S_1} \\ (y)_{S_1}^T & 0 \end{bmatrix} \begin{bmatrix} (\beta)_{S_1} \\ b \end{bmatrix} = \begin{bmatrix} 1 - G_{13} \\ G_{b3} \end{bmatrix}$,
with $(\beta)_{S_2} = 0$ and $(\beta)_{S_3} = C$.
2. $e = e_a - D_y H(\beta - \beta_a) - (b - b_a)\mathbf{1}$
3. $a_i = \begin{cases} 0, & e_i y_i < 0 \\ 2C/(e_i y_i), & e_i y_i > 0 \end{cases} \quad \forall i \in S_1 \cup S_2 \cup S_3$
4. Sets reordering:
a. Move every sample in $S_3$ with $e_i y_i < 0$ to $S_2$.
b. Move every sample in $S_1$ with $\beta_i = C$ to $S_3$.
c. Move every sample in $S_1$ with $a_i = 0$ to $S_2$.
d. Move every sample in $S_2$ with $a_i \neq 0$ to $S_1$.
5. $e_a = e$, $\beta_a = \beta$, $G_{13} = (H)_{S_1,S_3}(\beta)_{S_3} + (G_{in})_{S_1}$, $b_a = b$ and $G_{b3} = -y_{S_3}^T(\beta)_{S_3} + G_{bin}$.
6. Go to step 1 and repeat until convergence.

The IRWLS-SVC procedure has to be slightly modified in order to be used inside a chunking scheme such as the one proposed in [8, 6], so that it can be directly applied to the one proposed in [1]. A chunking scheme is needed to solve the SVC whenever $H$ is too large to fit into memory. In those cases, several SVCs with a reduced set of training samples are iteratively solved until the solution for the whole set is found. The samples are divided into a working set, $S_w$, which is solved as a full SVC problem, and an inactive set, $S_{in}$. If there are support vectors in the inactive set, as there may be, the inactive set modifies the IRWLS-SVC procedure, adding a contribution to the independent term of the linear equation system (3). The support vectors in $S_{in}$ can be seen as anchored samples in $S_3$, because their $\beta_i$ is not zero and cannot be modified by the IRWLS procedure. These contributions ($G_{in}$ and $G_{bin}$) are calculated just as $G_{13}$ and $G_{b3}$ are (Table 1, 5th step), before calling the IRWLS-SVC algorithm. We have already modified the IRWLS-SVC in Table 1 to account for $G_{in}$ and $G_{bin}$, which must be set to zero if the Hessian matrix, $H$, fits into memory for the whole set of training samples.
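The fixed-point iteration behind Table 1 can be sketched as follows. This is a simplified illustration rather than the authors' svcradit implementation: it keeps no S1/S2/S3 bookkeeping and no chunking, instead giving non-violators a tiny weight floor so that their β_i are driven to zero, and it solves the full augmented system at every pass. The toy data set and all parameter values are our own choices.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # RBF Gram matrix from pairwise squared distances.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def irwls_svc(X, y, C=10.0, gamma=1.0, n_iter=25, floor=1e-8):
    """Simplified IRWLS loop for the SVC: fix the weights a_i,
    solve the weighted least-squares system, recompute a_i, repeat."""
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    H = (y[:, None] * y[None, :]) * K          # H_ij = y_i y_j K(x_i, x_j)
    beta, b = np.zeros(n), 0.0
    for _ in range(n_iter):
        m = 1.0 - H @ beta - b * y             # margin errors e_i y_i
        # Re-weighting: margin violators get weight ~ C / (e_i y_i);
        # the rest get a tiny floor, which pushes their beta_i to ~0.
        a = np.where(m > 0, C / np.maximum(m, floor), floor)
        # Weighted LS step: [[H + D_a^{-1}, y], [y^T, 0]] [beta; b] = [1; 0]
        A = np.block([[H + np.diag(1.0 / a), y[:, None]],
                      [y[None, :], np.zeros((1, 1))]])
        sol = np.linalg.solve(A, np.append(np.ones(n), 0.0))
        beta, b = sol[:n], sol[n]
    return beta, b

# Toy separable problem: the trained classifier should fit it exactly.
X = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
beta, b = irwls_svc(X, y)
f = rbf_kernel(X, X) @ (beta * y) + b          # decision values on training set
assert np.all(y * f > 0)                       # every training point classified correctly
```

In the full algorithm the solve is restricted to the working set, with the anchored samples' contributions G_in and G_bin added to the right-hand side, and samples migrate between the three sets as in step 4 of Table 1.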
The resolution of the SVC for large training data sets, employing the IRWLS procedure as the minimization engine, is summarized in the following steps:

1. Select the samples that will form the working set.
2. Construct $G_{in} = (H)_{S_w,S_{in}}(\beta)_{S_{in}}$ and $G_{bin} = -y_{S_{in}}^T(\beta)_{S_{in}}$.
3. Solve the IRWLS-SVC procedure, following the steps in Table 1.
4. Compute the error of every training sample.
5. If the stopping conditions

$e_i y_i < \varepsilon \quad \forall i \mid \beta_i = 0$   (9)
$e_i y_i > -\varepsilon \quad \forall i \mid \beta_i = C$   (10)
$|e_i y_i| < \varepsilon \quad \forall i \mid 0 < \beta_i < C$   (11)

are fulfilled, the SVC solution has been reached.

The stopping conditions are the ones proposed in [6], and $\varepsilon$ must be a small value, around $10^{-3}$; a full discussion of this topic can be found in [6].

3 SAMPLE SELECTION STRATEGY

The selection of the training samples that will constitute the working set in each iteration is the most critical decision in any chunking scheme, because this decision directly determines the number of IRWLS-SVC (or QP-SVC) procedures to be called and the number of reproducing kernel evaluations to be made, which are by far the two most time-consuming operations in any chunking scheme. In order to solve the SVC efficiently, we first need to define a candidate set of training samples to form the working set in each iteration. The candidate set is made up of all the training samples that violate the stopping conditions (9)-(11), plus all those training samples that satisfy condition (11) but for which a small variation in their error would make them violate it. The strategies for selecting the working set are as numerous as the problems to be solved, but one can think of three simple strategies:

• Select those samples which do not fulfill the stopping criteria and present the largest $|e_i|$ values.
• Select those samples which do not fulfill the stopping criteria and present the smallest $|e_i|$ values.
• Select them randomly from the ones that do not fulfill the stopping conditions.

The first strategy seems the most natural one, and it was proposed in [6]. If the samples with largest $|e_i|$ are selected, we guarantee that the attained solution gives the greatest step towards the solution of (1). But if the step is too large, which usually happens, it will cause the solution in each iteration and the $\beta_i$ values to oscillate around their optimal values. The magnitude of this effect is directly proportional to the values of C and q (the size of the working set), so in the case of small C (C < 10) and low q (q < 20) it is less noticeable. The second strategy is the most conservative one, because we move towards the solution of (1) in small steps. Its drawback is readily discerned if the starting point is inappropriate: too many iterations are needed to reach the SVC solution. The last strategy, which has been implemented together with the IRWLS-SVC procedure, is a mid-point between the other two; but if the number of samples with $0 < \beta_i < C$ grows above q, there might be some iterations in which no progress is made (the working set is made up only of training samples that fulfill the stopping condition (11)). This situation is easily avoided by introducing, per class, one sample that violates each of the stopping conditions. Finally, if the cardinality of the candidate set is less than q, the working set is completed with those samples that fulfill the stopping conditions and present the smallest $|e_i|$.

In summary, the proposed sample selection strategy is²:

1. Construct the candidate set $S_c$ with those samples that do not fulfill stopping conditions (9) and (10), and those samples whose $\beta$ obeys $0 < \beta_i < C$.
2. If $|S_c| < q$, go to 5.
3. Choose a sample per class that violates each one of the stopping conditions and move them from $S_c$ to the working set $S_w$.
4. Choose randomly $q - |S_w|$ samples from $S_c$ and move them to $S_w$. Go to Step 6.
5. Move every sample from $S_c$ to $S_w$, and complete $S_w$ with the $q - |S_w|$ samples that fulfill the stopping conditions (9) and (10) and present the lowest $|e_i|$ values.
6. Go on, obtaining $G_{in}$ and $G_{bin}$.

²In what follows, $|\cdot|$ represents the absolute value for numbers and the cardinality for sets.

4 BENCHMARK FOR THE IRWLS-SVC

We have prepared two different experiments to test both the IRWLS procedure and the sample selection strategy for solving the SVC. The first one compares the IRWLS against QP, and the second one compares the sample selection strategy, together with the IRWLS, against a complete SVC solving procedure, SVMlight.

In the first trial, we have replaced the LOQO interior point optimizer used by SVMlight version 3.02 [5] with the IRWLS-SVC procedure in Table 1, to compare both optimizing engines under an equal sample selection strategy. The comparison has been made on a Pentium III 450 MHz with 128 Mb running Windows 98, and the programs have been compiled using Microsoft Developer 6.0. In Table 2, we show the results for two data sets: the first one, Adult4, containing 4781 training samples, needs most of its CPU resources to compute the RKHS, and the second one, Splice, containing 2175 training samples, uses most of its CPU resources to solve the SVC for each $S_w$, where q indicates the size of the working set. The value of C has been set to 1 and 1000, respectively, and a Radial Basis Function (RBF) RKHS [2] has been employed, with its parameter $\sigma$ set, respectively, to 10 and 70.

Table 2: CPU time indicates the time consumed, in seconds, by the whole procedure. Optimize time indicates the time consumed, in seconds, by the LOQO or IRWLS procedure.

          Adult4 (4781 samples)          Splice (2175 samples)
       CPU time      Optimize time     CPU time      Optimize time
q      LOQO  IRWLS   LOQO  IRWLS       LOQO  IRWLS   LOQO  IRWLS
20     21.25 20.70   0.61  0.39        46.19 30.76   21.94 4.77
40     20.60 19.22   1.01  0.17        71.34 24.93   46.26 8.07
70     21.15 18.72   2.30  0.46        53.77 20.32   34.24 7.72
As can be seen, SVMlight with IRWLS is significantly faster than the LOQO procedure in all cases. The kernel cache size has been set to 64 Mb for both data sets and both procedures. The results in Table 2 validate the IRWLS procedure as the fastest SVC solver.

For the second trial, we have compiled a computer program that uses the IRWLS-SVC procedure and the working set selection of Section 3; we will refer to it as svcradit from now on. We have borrowed the chunking and shrinking ideas from SVMlight [6] for our computer program. To test the two programs, several data sets have been used. The Adult and Web data sets have been obtained from J. Platt's web page http://research.microsoft.com/~jplatt/smo.html; the Gauss-M data set is a two-dimensional classification problem proposed in [3] to test neural networks, which comprises a Gaussian random variable for each class, with high overlap between the classes. The Banana, Diabetes and Splice data sets have been obtained from Gunnar Rätsch's web page http://svm.first.gmd.de/~raetsch/. The selection of C and the RKHS has been done as indicated in [11] for the Adult and Web data sets, and as indicated on http://svm.first.gmd.de/~raetsch/ for the Banana, Diabetes and Splice data sets. In Table 3, we show the runtime complexity for each data set, where the value of q has been selected as the one that reduces the runtime complexity.

Table 3: Runtime complexity for several data sets, when solved with svcradit (radit for short) and SVMlight (light for short).

Database   Dim  N      C      sigma  # SV    q (radit / light)   CPU time in s (radit / light)
Adult6     123  11221  1      10     4477    150 / 40            118.2 / 124.46
Adult9     123  32562  1      10     12181   130 / 70            1093.29 / 1097.09
Adult1     123  1605   1000   10     630     100 / 10            25.98 / 113.54
Web1       300  2477   5      10     224     100 / 10            2.42 / 2.36
Web7       300  24693  5      10     1444    150 / 10            158.13 / 124.57
Gauss-M    2    4000   1      1      1736    70 / 10             12.69 / 48.28
Gauss-M    2    4000   100    1      1516    100 / 10            61.68 / 3053.20
Banana     2    400    316.2  1      80      40 / 70             0.33 / 0.77
Banana     2    4900   316.2  1      1084    70 / 40             22.46 / 1786.56
Diabetes   8    768    10     2      409     40 / 10             2.41 / 6.04
Splice     69   2175   1000   70     525     150 / 20            14.06 / 49.19

One can appreciate that svcradit is faster than SVMlight for most data sets. For the Web data sets, the only ones on which SVMlight is slightly faster, the value of C is low and most training samples end up as support vectors with $\beta_i < C$. In such cases the best strategy is to take the largest step towards the solution in every iteration, as SVMlight does [6], because most training samples' $\beta_i$ will not be affected by the other training samples' $\beta_j$ values. But as the value of C increases, the svcradit sample selection strategy becomes much more appropriate than the one used in SVMlight.

5 CONCLUSIONS

In this communication a new algorithm for solving the SVC for large training data sets has been presented. Its two major contributions concern the optimizing engine and the sample selection strategy. An IRWLS procedure is used to solve the SVC in each step, which is much faster than the usual QP procedure and simpler to implement, because the most difficult step is the solution of a linear equation system, which can be easily obtained by means of LU decomposition [12]. The random working set selection from the samples not fulfilling the KKT conditions is the best option if the working set is large, because it reduces the number of chunks to be solved. This strategy benefits from the IRWLS procedure, which allows working with large training data sets.
All these modifications have been incorporated into the svcradit solving procedure, publicly available at http://svm.tsc.uc3m.es/.

6 ACKNOWLEDGEMENTS

We are sincerely grateful to Thorsten Joachims, who has allowed and encouraged us to use his SVMlight to test our IRWLS procedure; the comparisons could not have been properly done otherwise.

References

[1] B. E. Boser, I. M. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In 5th Annual Workshop on Computational Learning Theory, Pittsburgh, U.S.A., 1992.
[2] C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121-167, 1998.
[3] S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice-Hall, 1994.
[4] P. W. Holland and R. E. Welsch. Robust regression using iterative re-weighted least squares. Communications in Statistics - Theory and Methods, A6(9):813-827, 1977.
[5] T. Joachims. http://www-ai.informatik.uni-dortmund.de/forschung/verfahren/svmlight/svmlight.eng.html. Technical report, University of Dortmund, Informatik, AI-Unit Collaborative Research Center on 'Complexity Reduction in Multivariate Data', 1998.
[6] T. Joachims. Making Large Scale SVM Learning Practical. In Advances in Kernel Methods - Support Vector Learning, Editors Schölkopf, B., Burges, C. J. C. and Smola, A. J., pages 169-184. M.I.T. Press, 1999.
[7] E. Osuna, R. Freund, and F. Girosi. An improved training algorithm for support vector machines. In Proc. of the 1997 IEEE Workshop on Neural Networks for Signal Processing, pages 276-285, Amelia Island, U.S.A., 1997.
[8] E. Osuna and F. Girosi. Reducing the run-time complexity of support vector machines. In ICPR'98, Brisbane, Australia, August 1998.
[9] F. Pérez-Cruz, A. Navia-Vázquez, P. L. Alarcón-Diana, and A. Artés-Rodríguez. An IRWLS procedure for SVR. In Proceedings of EUSIPCO'00, Tampere, Finland, September 2000.
[10] F. Pérez-Cruz, A. Navia-Vázquez, J. L. Rojo-Álvarez, and A. Artés-Rodríguez. A new training algorithm for support vector machines. In Proceedings of the Fifth Bayona Workshop on Emerging Technologies in Telecommunications, volume 1, pages 116-120, Baiona, Spain, September 1999.
[11] J. C. Platt. Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines. In Advances in Kernel Methods - Support Vector Learning, Editors Schölkopf, B., Burges, C. J. C. and Smola, A. J., pages 185-208. M.I.T. Press, 1999.
[12] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes in C. Cambridge University Press, Cambridge, UK, 2nd edition, 1994.
[13] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995.
[14] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, 1998.
Some new bounds on the generalization error of combined classifiers

Vladimir Koltchinskii, Department of Mathematics and Statistics, University of New Mexico, Albuquerque, NM 87131-1141, vlad@math.unm.edu
Dmitriy Panchenko, Department of Mathematics and Statistics, University of New Mexico, Albuquerque, NM 87131-1141, panchenk@math.unm.edu
Fernando Lozano, Department of Electrical and Computer Engineering, University of New Mexico, Albuquerque, NM 87131, flozano@eece.unm.edu

Abstract

In this paper we develop the method of bounding the generalization error of a classifier in terms of its margin distribution, which was introduced in the recent papers of Bartlett and of Schapire, Freund, Bartlett and Lee. The theory of Gaussian and empirical processes allows us to prove margin-type inequalities for the most general functional classes, the complexity of a class being measured via so-called Gaussian complexity functions. As a simple application of our results, we obtain the bounds of Schapire, Freund, Bartlett and Lee for the generalization error of boosting. We also substantially improve the results of Bartlett on bounding the generalization error of neural networks in terms of $\ell_1$-norms of the weights of neurons. Furthermore, under additional assumptions on the complexity of the class of hypotheses, we provide some tighter bounds, which in the case of boosting improve the results of Schapire, Freund, Bartlett and Lee.

1 Introduction and margin type inequalities for general functional classes

Let $(X, Y)$ be a random couple, where $X$ is an instance in a space $S$ and $Y \in \{-1, 1\}$ is a label. Let $\mathcal{G}$ be a set of functions from $S$ into $\mathbb{R}$. For $g \in \mathcal{G}$, $\mathrm{sign}(g(X))$ will be used as a predictor (a classifier) of the unknown label $Y$. If the distribution of $(X, Y)$ is unknown, then the choice of the predictor is based on the training data $(X_1, Y_1), \ldots, (X_n, Y_n)$, which consists of $n$ i.i.d. copies of $(X, Y)$.
The goal of learning is to find a predictor $g \in \mathcal{G}$ (based on the training data) whose generalization (classification) error $\mathbb{P}\{Y g(X) \le 0\}$ is small enough. We will first introduce some probabilistic bounds for general functional classes and then give several examples of their applications to bounding the generalization error of boosting and neural networks. We omit all the proofs and refer an interested reader to [5].

Let $(S, \mathcal{A}, P)$ be a probability space and let $\mathcal{F}$ be a class of measurable functions from $(S, \mathcal{A})$ into $\mathbb{R}$. Let $\{X_i\}$ be a sequence of i.i.d. random variables taking values in $(S, \mathcal{A})$ with common distribution $P$. Let $P_n$ be the empirical measure based on the sample $(X_1, \ldots, X_n)$, $P_n := n^{-1}\sum_{i=1}^n \delta_{X_i}$, where $\delta_x$ denotes the probability distribution concentrated at the point $x$. We will denote $Pf := \int_S f\,dP$, $P_n f := \int_S f\,dP_n$, etc. In what follows, $\ell^\infty(\mathcal{F})$ denotes the Banach space of uniformly bounded real-valued functions on $\mathcal{F}$ with the norm $\|Y\|_{\mathcal{F}} := \sup_{f \in \mathcal{F}} |Y(f)|$, $Y \in \ell^\infty(\mathcal{F})$. Define

$G_n(\mathcal{F}) := \mathbb{E}\,\sup_{f \in \mathcal{F}} \Big| n^{-1}\sum_{i=1}^n g_i f(X_i) \Big|,$

where $\{g_i\}$ is a sequence of i.i.d. standard normal random variables, independent of $\{X_i\}$. We will call $n \mapsto G_n(\mathcal{F})$ the Gaussian complexity function of the class $\mathcal{F}$. One can find in the literature (see, e.g., [11]) various upper bounds on quantities such as $G_n(\mathcal{F})$ in terms of entropies, VC-dimensions, etc. We give below a bound in terms of margin cost functions (compare to [6, 7]) and Gaussian complexities.

Let $\Phi = \{\varphi_k : \mathbb{R} \to \mathbb{R}\}_{k=1}^\infty$ be a class of Lipschitz functions such that $(1 + \mathrm{sgn}(-x))/2 \le \varphi_k(x)$ for all $x \in \mathbb{R}$ and all $k$. For each $\varphi \in \Phi$, $L(\varphi)$ will denote its Lipschitz constant.

Theorem 1 For all $t > 0$,

$\mathbb{P}\Big\{\exists f \in \mathcal{F} : P\{f \le 0\} > \inf_{k \ge 1}\Big[P_n \varphi_k(f) + 2\sqrt{2\pi}\,L(\varphi_k)\,G_n(\mathcal{F}) + \Big(\frac{\log k}{n}\Big)^{1/2}\Big] + \frac{t+2}{\sqrt{n}}\Big\} \le 2\exp\{-2t^2\}.$

Let us consider a special family of cost functions. Assume that $\varphi$ is a fixed non-increasing Lipschitz function from $\mathbb{R}$ into $\mathbb{R}$ such that $\varphi(x) \ge (1 + \mathrm{sgn}(-x))/2$ for all $x \in \mathbb{R}$. One can easily observe that $L(\varphi(\cdot/\delta)) \le L(\varphi)\delta^{-1}$.
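For a finite class of functions, the Gaussian complexity $G_n(\mathcal{F})$ can be estimated directly by Monte Carlo: draw the Gaussian multipliers many times and average the suprema. A small illustrative sketch (the threshold class and the sample below are our own choices for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_complexity(F_values, n_draws=2000, rng=rng):
    """Monte Carlo estimate of G_n for a finite class.
    F_values: array of shape (num_functions, n), row j holding f_j(X_i)."""
    n_draws_g = rng.normal(size=(n_draws, F_values.shape[1]))  # i.i.d. N(0,1) multipliers
    # For each draw, take the sup over the class of |n^{-1} sum_i g_i f(X_i)|.
    n = F_values.shape[1]
    sups = np.abs(n_draws_g @ F_values.T / n).max(axis=1)
    return float(sups.mean())

# Sample points and a finite class of +-1-valued threshold functions on [0, 1].
n = 200
X = rng.uniform(0.0, 1.0, size=n)
thresholds = np.linspace(0.0, 1.0, 21)
F_values = np.array([np.where(X <= t, 1.0, -1.0) for t in thresholds])

Gn = gaussian_complexity(F_values)
# For a bounded class, G_n is of order n^{-1/2}; for a single function it would be
# E|N(0, 1/n)| ~ sqrt(2/(pi*n)), so the class supremum should be small but positive.
assert 0.0 < Gn < 0.5
```

The same estimator extends to any class for which the supremum over f can be computed for a fixed draw of the multipliers; for infinite classes that inner supremum is the hard part.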
Applying Theorem 1 to the class of Lipschitz functions $\Phi := \{\varphi(\cdot/\delta_k) : k \ge 0\}$, where $\delta_k := 2^{-k}$, we get the following result.

Theorem 2 For all $t > 0$,

$\mathbb{P}\Big\{\exists f \in \mathcal{F} : P\{f \le 0\} > \inf_{\delta \in (0,1]}\Big[P_n \varphi\Big(\frac{f}{\delta}\Big) + \frac{2\sqrt{2\pi}\,L(\varphi)}{\delta}\,G_n(\mathcal{F}) + \Big(\frac{\log\log_2(2\delta^{-1})}{n}\Big)^{1/2}\Big] + \frac{t+2}{\sqrt{n}}\Big\} \le 2\exp\{-2t^2\}.$

In [5] an example was given which shows that, in general, the order of the factor $\delta^{-1}$ in the second term of the bound cannot be improved.

Given a metric space $(T, d)$, we denote by $H_d(T; \varepsilon)$ the $\varepsilon$-entropy of $T$ with respect to $d$, i.e., $H_d(T; \varepsilon) := \log N_d(T; \varepsilon)$, where $N_d(T; \varepsilon)$ is the minimal number of balls of radius $\varepsilon$ covering $T$. The next theorem improves the previous results under some additional assumptions on the growth of the random entropies $H_{d_{P_n,2}}(\mathcal{F}; \cdot)$. Define, for $\gamma \in (0, 1]$,

$\delta_n(\gamma; f) := \sup\{\delta \in (0,1) : \delta^\gamma P\{f \le \delta\} \le n^{-1+\gamma/2}\}$

and

$\hat{\delta}_n(\gamma; f) := \sup\{\delta \in (0,1) : \delta^\gamma P_n\{f \le \delta\} \le n^{-1+\gamma/2}\}.$

We call $\delta_n(\gamma; f)$ and $\hat{\delta}_n(\gamma; f)$, respectively, the $\gamma$-margin and the empirical $\gamma$-margin of $f$.

Theorem 3 Suppose that for some $\alpha \in (0, 2)$ and for some constant $D > 0$

$H_{d_{P_n,2}}(\mathcal{F}; u) \le D u^{-\alpha}, \quad u > 0 \quad \text{a.s.}$   (1)

Then for any $\gamma \ge \frac{2\alpha}{2+\alpha}$, for some constants $A, B > 0$ and for all large enough $n$,

$\mathbb{P}\{\forall f \in \mathcal{F} : A^{-1}\hat{\delta}_n(\gamma; f) \le \delta_n(\gamma; f) \le A\hat{\delta}_n(\gamma; f)\} \ge 1 - B(\log_2 \log_2 n)\exp\{-n^{\gamma/2}/2\}.$

This implies that, with high probability, for all $f \in \mathcal{F}$

$P\{f \le 0\} \le c\big(n^{1-\gamma/2}\hat{\delta}_n(\gamma; f)^\gamma\big)^{-1}.$

The bound of Theorem 2 corresponds to the case $\gamma = 1$. It is easy to see from the definitions of the $\gamma$-margins that the quantity $\big(n^{1-\gamma/2}\hat{\delta}_n(\gamma; f)^\gamma\big)^{-1}$ increases in $\gamma \in (0, 1]$. This shows that the bound in the case $\gamma < 1$ is tighter. Further discussion of this type of bounds, and their experimental study in the case of convex combinations of simple classifiers, is given in the next section.
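The empirical $\gamma$-margin defined above can be computed from a vector of margins $y_i f(x_i)$ by a simple grid scan, since $\delta^\gamma P_n\{f \le \delta\}$ is non-decreasing in $\delta$. A sketch on synthetic margins (the margin sample below is fabricated purely for illustration; it is not the paper's Adaboost experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_gamma_margin(margins, gamma, grid_size=10000):
    """hat-delta_n(gamma; f): the largest delta in (0, 1) with
    delta^gamma * P_n{margin <= delta} <= n^(-1 + gamma/2)."""
    n = len(margins)
    threshold = n ** (-1.0 + gamma / 2.0)
    deltas = np.linspace(1e-6, 1.0, grid_size, endpoint=False)
    # Empirical CDF of the margins at each candidate delta.
    frac = np.searchsorted(np.sort(margins), deltas, side='right') / n
    ok = deltas ** gamma * frac <= threshold
    return float(deltas[ok].max()) if ok.any() else 0.0

# Synthetic margins: mostly comfortably positive, with a small tail near zero.
margins = np.concatenate([rng.uniform(0.3, 1.0, 950), rng.uniform(0.0, 0.3, 50)])
n = len(margins)

# The quantity (n^(1-gamma/2) * hat-delta^gamma)^(-1) decreases as gamma
# decreases, which is the sense in which gamma < 1 yields a tighter bound.
bounds = {}
for gamma in (1.0, 0.8, 2 / 3):
    d = empirical_gamma_margin(margins, gamma)
    bounds[gamma] = 1.0 / (n ** (1.0 - gamma / 2.0) * d ** gamma)
assert bounds[2 / 3] <= bounds[0.8] <= bounds[1.0]
```

With the true margin distribution in hand (as in the artificial experiment of the next section), the same scan applied to $P$ instead of $P_n$ gives $\delta_n(\gamma; f)$, and the ratio of the two margins can be monitored directly.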
2 Bounding the generalization error of convex combinations of classifiers

Recently, several authors ([1, 8]) suggested a new class of upper bounds on the generalization error that are expressed in terms of the empirical distribution of the margin of the predictor (the classifier). The margin is defined as the product $Y g(X)$. The bounds in question are especially useful in the case of classifiers that are combinations of simpler classifiers (belonging, say, to a class $\mathcal{H}$). One example of such classifiers is provided by the classifiers obtained by boosting [3, 4], bagging [2] and other voting methods of combining classifiers.

We will now demonstrate how our general results can be applied to the case of convex combinations of simple base classifiers. We assume that $\tilde{S} := S \times \{-1, 1\}$ and $\tilde{\mathcal{F}} := \{\tilde{f} : f \in \mathcal{F}\}$, where $\tilde{f}(x, y) := y f(x)$. $P$ will denote the distribution of $(X, Y)$ and $P_n$ the empirical distribution based on the observations $((X_1, Y_1), \ldots, (X_n, Y_n))$. It is easy to see that $G_n(\tilde{\mathcal{F}}) = G_n(\mathcal{F})$. One can also easily see that if $\mathcal{F} := \mathrm{conv}(\mathcal{H})$, where $\mathcal{H}$ is a class of base classifiers, then $G_n(\mathcal{F}) = G_n(\mathcal{H})$. These easy observations allow us to obtain useful bounds for boosting and other methods of combining classifiers. For instance, we get in this case the following theorem, which implies the bound of Schapire, Freund, Bartlett and Lee [8] when $\mathcal{H}$ is a VC-class of sets.

Theorem 4 Let $\mathcal{F} := \mathrm{conv}(\mathcal{H})$, where $\mathcal{H}$ is a class of measurable functions from $(S, \mathcal{A})$ into $\mathbb{R}$. Then for all $t > 0$ the bound of Theorem 2 holds for $P\{y f(x) \le 0\}$ with $G_n(\mathcal{H})$ in place of $G_n(\mathcal{F})$.

In particular, if $\mathcal{H}$ is a VC-class of classifiers $h : S \mapsto \{-1, 1\}$ (which means that the class of sets $\{\{x : h(x) = +1\} : h \in \mathcal{H}\}$ is a Vapnik-Chervonenkis class) with VC-dimension $V(\mathcal{H})$, we have, with some constant $C > 0$, $G_n(\mathcal{H}) \le C(V(\mathcal{H})/n)^{1/2}$. This implies that with probability at least $1 - \alpha$,

$P\{y f(x) \le 0\} \le \inf_{\delta \in (0,1]}\Big[P_n\{y f(x) \le \delta\} + \frac{C}{\delta}\sqrt{\frac{V(\mathcal{H})}{n}} + \Big(\frac{\log\log_2(2\delta^{-1})}{n}\Big)^{1/2}\Big] + \frac{\sqrt{\frac{1}{2}\log\frac{2}{\alpha}} + 2}{\sqrt{n}},$

which slightly improves the bound obtained previously by Schapire, Freund, Bartlett and Lee [8].

Theorem 3 provides some improvement of the above bounds on the generalization error of convex combinations of base classifiers. To be specific, consider the case when $\mathcal{H}$ is a VC-class of classifiers. Let $V := V(\mathcal{H})$ be its VC-dimension. A well-known bound (going back to Dudley) on the entropy of the convex hull (see [11], p. 142) implies that

$H_{d_{P_n,2}}(\mathrm{conv}(\mathcal{H}); u) \le \sup_{Q \in \mathcal{P}(S)} H_{d_{Q,2}}(\mathrm{conv}(\mathcal{H}); u) \le D u^{-\frac{2(V-1)}{V}}.$

It immediately follows from Theorem 3 that for all $\gamma \ge \frac{2(V-1)}{2V-1}$ and for some constants $C, B > 0$,

$\mathbb{P}\Big\{\exists f \in \mathrm{conv}(\mathcal{H}) : P\{f \le 0\} > \frac{C}{n^{1-\gamma/2}\hat{\delta}_n(\gamma; f)^\gamma}\Big\} \le B \log_2\log_2 n\, \exp\{-n^{\gamma/2}/2\},$

where $\hat{\delta}_n(\gamma; f) := \sup\{\delta \in (0,1) : \delta^\gamma P_n\{(x, y) : y f(x) \le \delta\} \le n^{-1+\gamma/2}\}$. This shows that when the VC-dimension of the base class is relatively small, the generalization error of boosting and of some other convex combinations of simple classifiers obtained by various versions of voting methods is better than was suggested by the bounds of Schapire, Freund, Bartlett and Lee. One can also conjecture that the remarkable generalization ability of these methods observed in numerous experiments can be related to the fact that the combined classifier belongs to a subset of the convex hull for which the random entropy $H_{d_{P_n,2}}$ is much smaller than for the whole convex hull (see [9, 10] for improved margin-type bounds in a much more special setting).

To demonstrate the improvement provided by our bounds over previous results, we show some experimental evidence obtained for a simple artificially generated problem, for which we are able to compute exactly the generalization error as well as the $\gamma$-margins. We consider the problem of learning a classifier consisting of the indicator function of the union of a finite number of intervals in the input space $S = [0, 1]$. We used the Adaboost algorithm [4] to find a combined classifier, using as base class $\mathcal{H} = \{\mathbb{1}_{[0,b]} : b \in [0, 1]\} \cup \{\mathbb{1}_{[b,1]} : b \in [0, 1]\}$ (i.e., decision stumps). Notice that in this case $V = 2$, and according to the theory, values of $\gamma$ in $(2/3, 1)$ should result in tighter bounds on the generalization error.

For our experiments we used a target function with 10 equally spaced intervals and a sample of size 1000, generated according to the uniform distribution on $[0, 1]$. We ran Adaboost for 500 rounds and computed at each round the generalization error of the combined classifier and the bound $C(n^{1-\gamma/2}\hat{\delta}_n(\gamma; f)^\gamma)^{-1}$ for different values of $\gamma$. We set the constant $C$ to one. In Figure 1 we plot the generalization error and the bounds for $\gamma = 1$, 0.8 and 2/3. As expected, for $\gamma = 1$ (which corresponds roughly to the bounds in [8]) the bound is very loose, and as $\gamma$ decreases, the bound gets closer to the generalization error. In Figure 2 we show that by reducing the value of $\gamma$ further we get a curve even closer to the actual generalization error (although for $\gamma = 0.2$ we do not get an upper bound). This seems to support the conjecture that Adaboost generates combined classifiers that belong to a subset of the convex hull of $\mathcal{H}$ with a smaller random entropy. In Figure 3 we plot the ratio $\hat{\delta}_n(\gamma; f)/\delta_n(\gamma; f)$ for $\gamma = 0.4$, 2/3 and 0.8 against the boosting iteration. We can see that the ratio is close to one in all the examples, indicating that the value of the constant $A$ in Theorem 3 is close to one in this case.

[Figure 1: Comparison of the generalization error (thicker line) with $(n^{1-\gamma/2}\hat{\delta}_n(\gamma; f)^\gamma)^{-1}$ for $\gamma = 1$, 0.8 and 2/3 (thinner lines, top to bottom).]

[Figure 2: Comparison of the generalization error (thicker line) with $(n^{1-\gamma/2}\hat{\delta}_n(\gamma; f)^\gamma)^{-1}$ for $\gamma = 0.5$, 0.4 and 0.2 (thinner lines, top to bottom).]

[Figure 3: Ratio $\hat{\delta}_n(\gamma; f)/\delta_n(\gamma; f)$ versus boosting round for $\gamma = 0.4$, 2/3, 0.8 (top to bottom).]

3 Bounding the generalization error in neural network learning

We turn now to applications of the bounds of the previous section in neural network learning. Let $\mathcal{H}$ be a class of measurable functions from $(S, \mathcal{A})$ into $\mathbb{R}$. Given a sigmoid $\sigma$ from $\mathbb{R}$ into $[-1, 1]$ and a vector $w := (w_1, \ldots, w_n) \in \mathbb{R}^n$, let $N_{\sigma,w}(u_1, \ldots, u_n) := \sigma(\sum_{j=1}^n w_j u_j)$. We call the function $N_{\sigma,w}$ a neuron with weights $w$ and sigmoid $\sigma$. For $w \in \mathbb{R}^n$, $\|w\|_{\ell_1} := \sum_{j=1}^n |w_j|$. Let $\sigma_j$, $j \ge 1$, be functions from $\mathbb{R}$ into $[-1, 1]$ satisfying the Lipschitz conditions $|\sigma_j(u) - \sigma_j(v)| \le L_j |u - v|$, $u, v \in \mathbb{R}$. Let $\{A_j\}$ be a sequence of positive numbers. We define recursively classes of neural networks with restrictions on the weights of neurons ($j$ below is the number of layers):

$\mathcal{H}_0 = \mathcal{H},$
$\mathcal{H}_j(A_1, \ldots, A_j) := \{N_{\sigma_j,w}(h_1, \ldots, h_n) : n \ge 0,\ h_i \in \mathcal{H}_{j-1}(A_1, \ldots, A_{j-1}),\ w \in \mathbb{R}^n,\ \|w\|_{\ell_1} \le A_j\} \cup \mathcal{H}_{j-1}(A_1, \ldots, A_{j-1}).$

Theorem 5 For all $t > 0$ and for all $l \ge 1$,

$\mathbb{P}\Big\{\exists f \in \mathcal{H}_l(A_1, \ldots, A_l) : P\{f \le 0\} > \inf_{\delta \in (0,1]}\Big[P_n \varphi\Big(\frac{f}{\delta}\Big) + \frac{2\sqrt{2\pi}\,L(\varphi)}{\delta}\prod_{k=1}^{l}(2L_k A_k + 1)\,G_n(\mathcal{H}) + \Big(\frac{\log\log_2(2\delta^{-1})}{n}\Big)^{1/2}\Big] + \frac{t+2}{\sqrt{n}}\Big\} \le 2\exp\{-2t^2\}.$

Remark. Bartlett [1] obtained a similar bound for a more special class $\mathcal{H}$ and with larger constants. In the case when $A_j \equiv A$, $L_j \equiv L$ (the case considered by Bartlett), the expression on the right-hand side of his bound includes $(AL)^{l(l+1)/2}$, which is replaced in our bound by $(AL)^l$. This improvement can be substantial in applications, since the above quantities play the role of complexity penalties.

Finally, it is worth mentioning that the theorems of Section 1 can also be applied to bounding the generalization error in multi-class problems. Namely, we assume that the labels take values in a finite set $\mathcal{Y}$ with $\mathrm{card}(\mathcal{Y}) =: L$. Consider a class $\tilde{\mathcal{F}}$ of functions from $\tilde{S} := S \times \mathcal{Y}$ into $\mathbb{R}$.
A function $f \in \tilde{\mathcal{F}}$ predicts a label $y \in Y$ for an example $x \in S$ iff $f(x,y) > \max_{y' \ne y} f(x,y')$. The margin of an example $(x,y)$ is defined as $m_f(x,y) := f(x,y) - \max_{y' \ne y} f(x,y')$, so $f$ misclassifies the example $(x,y)$ iff $m_f(x,y) \le 0$. Let $F := \{f(\cdot, y) : y \in Y, f \in \tilde{\mathcal{F}}\}$. The next result follows from Theorem 2.
Theorem 6 For all $t > 0$,
$$\mathbb{P}\Bigl\{\exists f \in \tilde{\mathcal{F}} : P\{m_f \le 0\} > \inf_{\delta \in (0,1]} \Bigl[P_n\{m_f \le \delta\} + \frac{4\sqrt{2\pi}\, L (2L-1)}{\delta}\, G_n(F) + \Bigl(\frac{\log\log_2(2\delta^{-1})}{n}\Bigr)^{1/2}\Bigr] + \frac{t+2}{\sqrt{n}}\Bigr\} \le 2 \exp\{-t^2/2\}.$$
References
[1] Bartlett, P. (1998) The Sample Complexity of Pattern Classification with Neural Networks: The Size of the Weights is More Important than the Size of the Network. IEEE Transactions on Information Theory, 44, 525-536.
[2] Breiman, L. (1996) Bagging Predictors. Machine Learning, 24(2), 123-140.
[3] Freund, Y. (1995) Boosting a weak learning algorithm by majority. Information and Computation, 121(2), 256-285.
[4] Freund, Y. and Schapire, R.E. (1997) A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1), 119-139.
[5] Koltchinskii, V. and Panchenko, D. (2000) Empirical margin distributions and bounding the generalization error of combined classifiers, preprint.
[6] Mason, L., Bartlett, P. and Baxter, J. (1999) Improved Generalization through Explicit Optimization of Margins. Machine Learning, 0, 1-11.
[7] Mason, L., Baxter, J., Bartlett, P. and Frean, M. (1999) Functional Gradient Techniques for Combining Hypotheses. In: Advances in Large Margin Classifiers. Smola, Bartlett, Schölkopf and Schuurmans (Eds), to appear.
[8] Schapire, R., Freund, Y., Bartlett, P. and Lee, W. S. (1998) Boosting the Margin: A New Explanation of Effectiveness of Voting Methods. Ann. Statist., 26, 1651-1687.
[9] Shawe-Taylor, J. and Cristianini, N. (1999) Margin Distribution Bounds on Generalization. In: Lecture Notes in Artificial Intelligence, 1572. Computational Learning Theory, 4th European Conference, EuroCOLT'99, 263-273.
[10] Shawe-Taylor, J.
and Cristianini, N. (1999) Further Results on the Margin Distribution. Proc. of COLT'99, 278-285.
[11] van der Vaart, A. and Wellner, J. (1996) Weak Convergence and Empirical Processes. With Applications to Statistics. Springer-Verlag, New York.
2000
62
1,864
Beyond maximum likelihood and density estimation: A sample-based criterion for unsupervised learning of complex models
Sepp Hochreiter and Michael C. Mozer
Department of Computer Science, University of Colorado, Boulder, CO 80309-0430
{hochreit,mozer}@cs.colorado.edu
Abstract
The goal of many unsupervised learning procedures is to bring two probability distributions into alignment. Generative models such as Gaussian mixtures and Boltzmann machines can be cast in this light, as can recoding models such as ICA and projection pursuit. We propose a novel sample-based error measure for these classes of models, which applies even in situations where maximum likelihood (ML) and probability density estimation-based formulations cannot be applied, e.g., models that are nonlinear or have intractable posteriors. Furthermore, our sample-based error measure avoids the difficulties of approximating a density function. We prove that with an unconstrained model, (1) our approach converges on the correct solution as the number of samples goes to infinity, and (2) the expected solution of our approach in the generative framework is the ML solution. Finally, we evaluate our approach via simulations of linear and nonlinear models on mixture of Gaussians and ICA problems. The experiments show the broad applicability and generality of our approach.
1 Introduction
Many unsupervised learning procedures can be viewed as trying to bring two probability distributions into alignment. Two well known classes of unsupervised procedures that can be cast in this manner are generative and recoding models. In a generative unsupervised framework, the environment generates training examples (which we will refer to as observations) by sampling from one distribution; the other distribution is embodied in the model. Examples of generative frameworks are mixtures of Gaussians (MoG) [2], factor analysis [4], and Boltzmann machines [8].
In the recoding unsupervised framework, the model transforms points from an observation space to an output space, and the output distribution is compared either to a reference distribution or to a distribution derived from the output distribution. An example is independent component analysis (ICA) [11], a method that discovers a representation of vector-valued observations in which the statistical dependence among the vector elements in the output space is minimized. With ICA, the model demixes observation vectors and the output distribution is compared against a factorial distribution which is derived either from assumptions about the distribution (e.g., supergaussian) or from a factorization of the output distribution. Other examples within the recoding framework are projection methods such as projection pursuit (e.g., [14]) and principal component analysis. In each case we have described for the unsupervised learning of a model, the objective is to bring two probability distributions, one or both of which is produced by the model, into alignment. To improve the model, we need to define a measure of the discrepancy between the two distributions, and to know how the model parameters influence the discrepancy. One natural approach is to use outputs from the model to construct a probability density estimator (PDE). The primary disadvantage of such an approach is that the accuracy of the learning procedure depends heavily on the quality of the PDE. PDEs face the bias-variance trade-off. For the learning of generative models, maximum likelihood (ML) is a popular approach that avoids PDEs. In an ML approach, the model's generative distribution is expressed analytically, which makes it straightforward to evaluate the likelihood, p(data | model), and therefore to adjust the model parameters to maximize the likelihood of the data being generated by the model. This limits the ML approach to models that have tractable posteriors, true only of the simplest models [1, 6, 9].
We describe an approach which, like ML, avoids the construction of an explicit PDE, yet does so without requiring an analytic expression for the posterior. Our approach, which we call a sample-based method, assumes a set of samples from each distribution and proposes an error measure of the disagreement defined directly in terms of the samples. Thus, a second set of samples drawn from the model serves in place of a PDE or an analytic expression of the model's density. The sample-based method is inspired by the theory of electric fields, which describes the interactions among charged particles. For more details on the metaphor, see [10]. In this paper, we prove that our approach converges to the optimal solution as the sample size goes to infinity, assuming an unconstrained (maximally flexible) model. We also prove that the expected solution of our approach is the ML solution in a generative context. We present empirical results showing that the sample-based approach works for both linear and nonlinear models.
2 The Method
Consider a model to be learned, $f_w$, parameterized by weights $w$. The model maps an input vector $z^i$, indexed by $i$, to an output vector $x^i = f_w(z^i)$. The model inputs are sampled from a distribution $p_z(\cdot)$, and the learning procedure calls for adjusting the model such that the output distribution, $p_x(\cdot)$, comes to match a target distribution, $p_y(\cdot)$. For unsupervised recoding models, $z^i$ is an observation, $x^i$ is the transformed representation of $z^i$, and $p_y(\cdot)$ specifies the desired code properties. For unsupervised generative models, $p_z(\cdot)$ is fixed and $p_y(\cdot)$ is the distribution of observations.
The Sample-based Method: The Intuitive Story
Assume that we have data points sampled from two different distributions, labeled "-" and "+" (Figure 1). The sample-based error measure specifies how samples should be moved so that the two distributions are brought into alignment.
In the figure, samples from the lower left and upper right corners must be moved to the upper left and lower right corners. Our goal is to establish an explicit correspondence between each "-" sample and each "+" sample. Toward this end, our sample-based method relies on interactions among the samples, introducing a repelling force between samples from the same distribution and an attractive force between samples from different distributions (Figure 1), and allowing the samples to move according to these forces.
The Sample-based Method: The Formal Presentation
In conceiving of the problem in terms of samples that attract and repel one another, it is natural to think in terms of physical interactions among charged particles. Consider a set of positively charged particles at locations denoted by $x^i$, $i = 1 \ldots N_x$, and a set of negatively charged particles at locations denoted by $y^j$, $j = 1 \ldots N_y$. The particles correspond to data samples from two distributions. The interaction among particles is characterized by the Coulomb energy $E$:
$$E = \frac{1}{2 N_x^2} \sum_{i,j} \Gamma(x^i, x^j) - \frac{1}{N_x N_y} \sum_{i,j} \Gamma(x^i, y^j) + \frac{1}{2 N_y^2} \sum_{i,j} \Gamma(y^i, y^j),$$
where $\Gamma(a, b)$ is a distance measure (Green's function) which results in nearby particles having a strong influence on the energy, but distant particles having only a weak influence. Green's function is defined as $\Gamma(a, b) = c(d)\, \|a - b\|^{-(d-2)}$, where $d$ is the dimensionality of the space, $c(d)$ is a constant depending only on $d$, and $\|\cdot\|$ denotes the Euclidean distance. For $d = 2$, $\Gamma(a, b) = -k \ln(\|a - b\|)$. The Coulomb energy is low when negative and positive particles are near one another, positive particles are far from one another, and negative particles are far from one another. This is exactly the state we would like to achieve for our two distributions of samples: bringing the two distributions into alignment without collapsing either distribution into a trivial form. Consequently, our sample-based method proposes using the Coulomb energy as an objective function to be minimized.
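As a concrete illustration, the Coulomb energy can be evaluated directly on two finite sample sets. The sketch below uses the $d = 2$ logarithmic Green's function; the constant $k = 1/(2\pi)$, the softening term `eps`, and the exact normalization of the double sums are assumptions made for illustration:

```python
import numpy as np

def green(a, b, eps=1e-9):
    # d = 2 Green's function: Gamma(a, b) = -k * ln ||a - b||
    # (k = 1/(2*pi) assumed; eps avoids log(0) for coincident points).
    d = np.linalg.norm(a - b) + eps
    return -np.log(d) / (2 * np.pi)

def coulomb_energy(X, Y):
    """Sample-based Coulomb energy between model samples X ("+") and
    data samples Y ("-"). Self-pairs (i == j) are skipped to avoid the
    infinite self-energy of a point charge."""
    Nx, Ny = len(X), len(Y)
    E = 0.0
    for i in range(Nx):          # repulsion among "+" samples
        for j in range(Nx):
            if i != j:
                E += green(X[i], X[j]) / (2 * Nx * Nx)
    for i in range(Ny):          # repulsion among "-" samples
        for j in range(Ny):
            if i != j:
                E += green(Y[i], Y[j]) / (2 * Ny * Ny)
    for i in range(Nx):          # attraction between "+" and "-"
        for j in range(Ny):
            E -= green(X[i], Y[j]) / (Nx * Ny)
    return E

rng = np.random.default_rng(1)
Y = rng.normal(0, 1, size=(40, 2))       # "data" samples
X_far = rng.normal(5, 1, size=(40, 2))   # model samples far from the data
X_near = rng.normal(0, 1, size=(40, 2))  # model samples matching the data
# The energy should be lower when the two sample sets are aligned.
```

This matches the qualitative property stated in the text: aligned distributions yield a lower energy than misaligned ones, without rewarding the collapse of either sample set.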
The gradient of $E$ with respect to a sample's location is readily computed (it is the force acting on that sample), and this gradient can be chained with the Jacobian of the location with respect to the model parameters $w$ to obtain a gradient-based update rule:
$$\Delta w = -\epsilon \nabla_w E = -\epsilon \Bigl( \frac{1}{N_x} \sum_{k=1}^{N_x} \Bigl(\frac{\partial x^k}{\partial w}\Bigr)^{T} \nabla_{x^k} \varphi(x^k) - \frac{1}{N_y} \sum_{k=1}^{N_y} \Bigl(\frac{\partial y^k}{\partial w}\Bigr)^{T} \nabla_{y^k} \varphi(y^k) \Bigr),$$
where $\epsilon$ is a step size, $\varphi(a) := N_x^{-1} \sum_{i=1}^{N_x} \Gamma(a, x^i) - N_y^{-1} \sum_{i=1}^{N_y} \Gamma(a, y^i)$ is the potential, $T$ denotes transposition, and $a = x^k$ or $y^k$. Here $\partial x^k / \partial w$ is the Jacobian of $f_w(z^k)$, and the time derivative of $x^k$ is $\dot{x}^k = \dot{f}_w(z^k) = -\nabla \varphi(x^k)$. If $y^k$ depends on $w$ then the $y^k$ notation is analogous; otherwise $\partial y^k / \partial w$ is the zero matrix. There turns out to be an advantage to using Green's function as the basis of the particle interactions over other possibilities, e.g., a Gaussian function (e.g., [12, 13, 3]). The advantage stems from the fact that with Green's function, the force between two nearby points goes to infinity as the points are pushed together, whereas with the Gaussian, the force goes to zero. Consequently, without Green's function, one might expect local optima in which clusters of points collapse onto a single location. Empirically, simulations confirmed this conjecture.
Proof: Correctness of the Update Rule
As the numbers of samples $N_x$ and $N_y$ go to infinity, $\varphi$ can be expressed as $\varphi(a) = \int \rho(b)\, \Gamma(a, b)\, db$, where $\rho(b) := p_x(b) - p_y(b)$. Our sample-based method moves data points, but by moving data points, the method implicitly alters the probability density which gave rise to the data. The relation between the movement of data points and the change in the density can be expressed using an operator from vector analysis, the divergence. The divergence at a location $a$ is the number of data points moving out of a volume surrounding $a$ minus the number of data points moving in to the same volume. Thus, the negative divergence of the movements at $a$ gives the density change at $a$. The movement of data points is given by $-\nabla \varphi(a)$.
We get $\dot{\rho}(a) = \dot{p}_x(a) - \dot{p}_y(a) = -\mathrm{div}(-\nabla \varphi(a))$. For Cartesian (orthogonal) coordinates the divergence of a vector field $V$ at $a$ is defined as $\mathrm{div}(V(a)) := \sum_{l=1}^{d} \partial V_l(a) / \partial a_l$. The Laplace operator $\Delta$ of a scalar function $A$ is defined as $\Delta A(a) := \mathrm{div}(\nabla A(a)) = \sum_{l=1}^{d} \partial^2 A(a) / \partial a_l^2$. The Laplace operator allows an important characterization of Green's function: $\Delta_a \Gamma(a, b) = -\delta(a - b)$, where $\delta$ is the Dirac delta function. This characterization gives $\Delta \varphi(a) = -\rho(a)$. Hence
$$\dot{\rho}(a) = \mu(a)\, \mathrm{div}(\nabla \varphi(a)) = \mu(a)\, \Delta \varphi(a) = -\mu(a)\, \rho(a), \quad \mu(a) \ge \mu_0 > 0,$$
where $\mu(a)$ gives the effectiveness of the algorithm in moving a sample at $a$. We get $\rho(a, t) = \rho(a, 0) \exp(-\mu(a)\, t)$. For the integrated squared error (ISE) of the two distributions we obtain
$$\mathrm{ISE}(t) = \int (\rho(a, t))^2\, da \le \exp(-\mu_0 t) \int (\rho(a, 0))^2\, da = \exp(-\mu_0 t)\, \mathrm{ISE}(0),$$
where $\mathrm{ISE}(0)$ is independent of $t$. Thus, the ISE between the two distributions is guaranteed to decrease during learning as the sample size goes to infinity.
Proof: Expected Generative Solution is the ML Solution
In the case of a generative model which has no constraints (i.e., one that can model any distribution), the maximum likelihood solution will have distribution $p_x(a) = \frac{1}{N_y} \sum_{j=1}^{N_y} \delta(y^j - a)$, i.e., the model will produce only the observations, and all of them with equal probability. For this case, we show that our sample-based method will yield the same solution in expectation as ML. The sample-based method converges to a local minimum of the energy, where $\langle \nabla_a \varphi(a) \rangle_x = 0$ for all $a$, with $\langle \cdot \rangle_x$ the expectation over the model output. Equivalently, $\langle \nabla_a \Gamma(a, x) \rangle_x - \frac{1}{N_y} \sum_{j=1}^{N_y} \nabla_a \Gamma(a, y^j) = 0$, or
$$\langle \nabla_a \Gamma(a, x) \rangle_x = \int p_x(x)\, \nabla_a \Gamma(a, x)\, dx = \frac{1}{N_y} \sum_{j=1}^{N_y} \nabla_a \Gamma(a, y^j).$$
Because this equation holds for all $a$, we obtain $p_x(a) = \frac{1}{N_y} \sum_{j=1}^{N_y} \delta(y^j - a)$, which is the ML solution. Thus, the sample-based method can be viewed as an approximation to ML which becomes more exact as the number of samples goes to infinity.
3 Experiments
We illustrate the sample-based approach for two common unsupervised learning problems: MoG and ICA. In both cases, we demonstrate that the sample-based approach works in the linear case. We also consider a nonlinear case to illustrate the power of the sample-based approach.
Mixture of Gaussians
In this generative model framework, $m$ denotes a mixture component which is chosen with probability $v_m$ from $M$ components, and has associated model parameters $w_m = (O_m, M_m)$. In the standard MoG model, given a choice of component $m$, the (linear) model output is obtained by $x^i = f_{w_m}(z^i) = O_m z^i + M_m$, where $z^i$ is drawn from the Gaussian distribution with zero mean and identity covariance matrix. For a nonlinear mixture model, we used a 3-layer sigmoidal neural network for $f_{w_m}(z^i)$. An update rule $\Delta v_m = -\epsilon_v \nabla_{v_m} E$ can be derived for our approach, where $\epsilon_v$ is a step size and $\sum_{m=1}^{M} v_m = 1$ is enforced. We trained a linear MoG model with the standard expectation maximization (EM) algorithm (using code from [5]) and a linear and a nonlinear MoG with our sample-based approach. A fixed training set of $N_y = 100$ samples was used for all models, and all models had $M = 10$ except one nonlinear model which had $M = 1$. In the sample-based approach, we generated 100 samples from our model (the $x^i$) following every training epoch. The nonlinear model was trained with backpropagation. Figure 2 shows the results. The linear ML model is better than the sample-based model. That is not surprising, because ML computes the model probability values analytically (the posterior is tractable) and our algorithm uses only samples to approximate the model probability values. We used only 100 model samples in each epoch, and the linear sample-based model found an acceptable solution that is not much worse than the ML model. The nonlinear models fit the true ring-like distribution better and do not suffer from sharp corners and edges.
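The linear MoG generative process above (choose component $m$ with probability $v_m$, then emit $x = O_m z + M_m$ with $z \sim N(0, I)$) can be sketched in a few lines; the component count, dimensions, and parameter values below are arbitrary illustrations, not the paper's experimental settings:

```python
import numpy as np

rng = np.random.default_rng(3)

M = 3    # number of mixture components (illustrative)
dim = 2
# Per-component parameters w_m = (O_m, M_m): a linear map and an offset.
O = [rng.normal(0, 1, size=(dim, dim)) for _ in range(M)]
mu = [rng.normal(0, 3, size=dim) for _ in range(M)]
v = np.array([0.5, 0.3, 0.2])  # mixing proportions v_m, summing to 1

def sample_mog(n):
    # Choose component m with probability v_m, then x = O_m z + M_m
    # with z drawn from N(0, I).
    comps = rng.choice(M, size=n, p=v)
    z = rng.normal(0, 1, size=(n, dim))
    x = np.array([O[m] @ z[i] + mu[m] for i, m in enumerate(comps)])
    return x, comps

x, comps = sample_mog(100)
```

In the sample-based training loop described in the text, a fresh batch like `x` would be drawn from the model after every epoch and pushed toward the fixed data samples by the Coulomb forces.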
Figure 2: (upper panels, left to right) training samples chosen from a ring density, a larger sample from this density, and the solution obtained from the linear model trained with EM (10 components); (lower panels, left to right) models trained with the sample-based method: linear model, nonlinear model, nonlinear model with one component.
Independent Component Analysis
With a recoding model we tried to demix subgaussian source distributions where each has supergaussian modes. Most ICA methods are not able to demix subgaussian sources. Figure 3 shows the results, which are nearly perfect. The ideal result is a scaled and permuted identity matrix when the mixing and demixing matrices are multiplied. For more details see [10].
Figure 3: For a three-dimensional linear mixture, projections of sources (first row), mixtures (second row), and sources recovered by our approach (third row) on a two-dimensional plane are shown. The demixing matrix multiplied with the mixing matrix yields:
  -0.0017   0.0010  -0.0014
   0.1850  -0.1755   0.0003
   0.2523  -0.0101   0.0053
In a second experiment, we tried to recover sources from two nonlinear mixings. This problem is impossible for standard ICA methods, because they are designed for linear mixings. The result is shown in Figure 4. An exact demixing cannot be expected, because nonlinear ICA has no unique solution. For more details see [10].
Figure 4: For two two-dimensional nonlinear mixing functions (upper row, $(z + a)^2$, and lower row, $\sqrt{z + a}$, with complex variable $z$) the sources, mixtures, and recovered sources are shown. The mixing function is not completely inverted, but the sources are recovered recognizably.
4 Discussion
Although our sample-based approach is intuitively straightforward, its implementation has two drawbacks: (1) one has to be cautious of samples that are close together, because they lead to unbounded gradients; and (2) all samples must be considered when computing the force on a data point, which makes the approach computation-intensive. However, in [10, 7] approximations are proposed that reduce the computational complexity of the approach. In this paper, we have presented simulations showing the generality and power of our sample-based approach to unsupervised learning problems, and have also proven two important properties of the approach: (1) with certain assumptions, the approach will find the correct solution; (2) with an unconstrained model, the expected solution of our approach is the ML solution. In conclusion, our sample-based approach can be applied to unsupervised learning of complex models where ML does not work, and our method avoids the drawbacks of PDE approaches.
Acknowledgments
We thank Geoffrey Hinton for inspirational suggestions regarding this work. The work was supported by the Deutsche Forschungsgemeinschaft (Ho 1749/1-1), McDonnell-Pew award 97-18, and NSF award IBN-9873492.
References
[1] P. Dayan, G. E. Hinton, R. M. Neal, and R. S. Zemel. The Helmholtz machine. Neural Computation, 7(5):889-904, 1995.
[2] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley, 1973.
[3] D. Erdogmus and J. C. Principe. Comparison of entropy and mean square error criteria in adaptive system training using higher order statistics. In P. Pajunen and J. Karhunen, editors, Proceedings of the Second International Workshop on Independent Component Analysis and Blind Signal Separation, Helsinki, Finland, pages 75-80. Otamedia, Espoo, Finland, ISBN: 951-22-5017-9, 2000.
[4] B. S. Everitt. An Introduction to Latent Variable Models. Chapman and Hall, 1984.
[5] Z. Ghahramani and G. E. Hinton. The EM algorithm for mixtures of factor analyzers.
Technical Report CRG-TR-96-1, University of Toronto, Dept. of Comp. Science, 1996.
[6] Z. Ghahramani and G. E. Hinton. Hierarchical non-linear factor analysis and topographic maps. In M. I. Jordan, M. J. Kearns, and S. A. Solla, editors, Advances in Neural Information Processing Systems 10, pages 486-492. MIT Press, 1998.
[7] A. Gray and A. W. Moore. 'N-body' problems in statistical learning. In T. K. Leen, T. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, 2001. In this proceedings.
[8] G. E. Hinton and T. J. Sejnowski. Learning and relearning in Boltzmann machines. In Parallel Distributed Processing, volume 1, pages 282-317. MIT Press, 1986.
[9] G. E. Hinton and T. J. Sejnowski. Introduction. In G. E. Hinton and T. J. Sejnowski, editors, Unsupervised Learning: Foundations of Neural Computation, pages VII-XVI. The MIT Press, Cambridge, MA, London, England, 1999.
[10] S. Hochreiter and M. C. Mozer. An electric field approach to independent component analysis. In P. Pajunen and J. Karhunen, editors, Proceedings of the Second International Workshop on Independent Component Analysis and Blind Signal Separation, Helsinki, Finland, pages 45-50. Otamedia, Finland, ISBN: 951-22-5017-9, 2000.
[11] A. Hyvärinen. Survey on independent component analysis. Neural Computing Surveys, 2:94-128, 1999.
[12] G. C. Marques and L. B. Almeida. Separation of nonlinear mixtures using pattern repulsion. In J.-F. Cardoso, C. Jutten, and P. Loubaton, editors, Proceedings of the First International Workshop on Independent Component Analysis and Signal Separation, Aussois, France, pages 277-282, 1999.
[13] J. C. Principe and D. Xu. Information-theoretic learning using Rényi's quadratic entropy. In J.-F. Cardoso, C. Jutten, and P. Loubaton, editors, Proceedings of the First International Workshop on Independent Component Analysis and Signal Separation, Aussois, France, pages 407-412, 1999.
[14] Y. Zhao and C. G. Atkeson.
Implementing projection pursuit learning. IEEE Transactions on Neural Networks, 7(2):362-373, 1996.
2000
63
1,865
A Neural Probabilistic Language Model
Yoshua Bengio*, Réjean Ducharme and Pascal Vincent
Département d'Informatique et Recherche Opérationnelle, Centre de Recherche Mathématiques, Université de Montréal, Montréal, Québec, Canada, H3C 3J7
{bengioy,ducharme,vincentp}@iro.umontreal.ca
Abstract
A goal of statistical language modeling is to learn the joint probability function of sequences of words. This is intrinsically difficult because of the curse of dimensionality: we propose to fight it with its own weapons. In the proposed approach one learns simultaneously (1) a distributed representation for each word (i.e. a similarity between words) along with (2) the probability function for word sequences, expressed with these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar to words forming an already seen sentence. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach very significantly improves on a state-of-the-art trigram model.
1 Introduction
A fundamental problem that makes language modeling and other learning problems difficult is the curse of dimensionality. It is particularly obvious in the case when one wants to model the joint distribution between many discrete random variables (such as words in a sentence, or discrete attributes in a data-mining task). For example, if one wants to model the joint distribution of 10 consecutive words in a natural language with a vocabulary $V$ of size 100,000, there are potentially $100{,}000^{10} - 1 = 10^{50} - 1$ free parameters. A statistical model of language can be represented by the conditional probability of the next word given all the previous ones in the sequence, since $P(w_1^T) = \prod_{t=1}^{T} P(w_t \mid w_1^{t-1})$, where $w_t$ is the $t$-th word, writing subsequence $w_i^j = (w_i, w_{i+1}, \ldots, w_{j-1}, w_j)$.
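The parameter count quoted above follows from tabulating the full joint distribution over an $n$-word window: one probability per cell of a $|V|^n$ table, minus one for the normalization constraint. A quick arithmetic check:

```python
# Free parameters of a full joint distribution table over n consecutive
# words from a vocabulary of size |V|: |V|**n - 1 (the table must sum to 1).
V_size = 100_000
n = 10
free_params = V_size ** n - 1
# 100,000**10 - 1 == 10**50 - 1, the figure quoted in the text.
```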
When building statistical models of natural language, one reduces the difficulty by taking advantage of word order, and of the fact that temporally closer words in the word sequence are statistically more dependent. Thus, n-gram models construct tables of conditional probabilities for the next word, for each one of a large number of contexts, i.e. combinations of the last $n-1$ words: $P(w_t \mid w_1^{t-1}) \approx P(w_t \mid w_{t-n+1}^{t-1})$. Only those combinations of successive words that actually occur in the training corpus (or that occur frequently enough) are considered. What happens when a new combination of $n$ words appears that was not seen in the training corpus? A simple answer is to look at the probability predicted using a smaller context size, as done in back-off trigram models [7] or in smoothed (or interpolated) trigram models [6]. So, in such models, how is generalization basically obtained from sequences of words seen in the training corpus to new sequences of words? Simply by looking at a short enough context, i.e., the probability for a long sequence of words is obtained by "gluing" very short pieces of length 1, 2 or 3 words that have been seen frequently enough in the training data. Obviously there is much more information in the sequence that precedes the word to predict than just the identity of the previous couple of words. There are at least two obvious flaws in this approach (which however has turned out to be very difficult to beat): first, it is not taking into account contexts farther than 1 or 2 words; second, it is not taking account of the "similarity" between words. For example, having seen the sentence "The cat is walking in the bedroom" in the training corpus should help us generalize to make the sentence "A dog was running in a room" almost as likely, simply because "dog" and "cat" (resp. "the" and "a", "room" and "bedroom", etc.) have similar semantics and grammatical roles.
*Y.B. was also with AT&T Research while doing this research.
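A smoothed (interpolated) trigram of the kind referenced in [6] can be sketched in a few lines. The tiny corpus and fixed interpolation weights below are illustrative assumptions; real systems estimate counts from large corpora and tune the weights on held-out data:

```python
from collections import Counter

corpus = ("the cat is walking in the bedroom . "
          "a dog was running in a room .").split()

# Count n-grams up to order 3.
uni = Counter(corpus)
bi = Counter(zip(corpus, corpus[1:]))
tri = Counter(zip(corpus, corpus[1:], corpus[2:]))
N = len(corpus)

def p_interp(w, w1, w2, lams=(0.5, 0.3, 0.2)):
    """Interpolated trigram P(w | w1 w2): a fixed-weight mixture of
    trigram, bigram, and unigram relative frequencies."""
    p3 = tri[(w1, w2, w)] / bi[(w1, w2)] if bi[(w1, w2)] else 0.0
    p2 = bi[(w2, w)] / uni[w2] if uni[w2] else 0.0
    p1 = uni[w] / N
    l3, l2, l1 = lams
    return l3 * p3 + l2 * p2 + l1 * p1

p = p_interp("walking", "cat", "is")
```

Because each component distribution sums to one over the vocabulary (when the context has been seen), the mixture is itself a proper distribution; an unseen word still receives mass through the unigram term, which is exactly the "gluing" of short pieces described above.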
1.1 Fighting the Curse of Dimensionality with its Own Weapons
In a nutshell, the idea of the proposed approach can be summarized as follows:
1. associate with each word in the vocabulary a distributed "feature vector" (a real-valued vector in $\mathbb{R}^m$), thereby creating a notion of similarity between words,
2. express the joint probability function of word sequences in terms of the feature vectors of these words in the sequence, and
3. learn simultaneously the word feature vectors and the parameters of that function.
The feature vector represents different aspects of a word: each word is associated with a point in a vector space. The number of features (e.g. $m = 30$, 60 or 100 in the experiments) is much smaller than the size of the vocabulary. The probability function is expressed as a product of conditional probabilities of the next word given the previous ones (e.g. using a multi-layer neural network in the experiments). This function has parameters that can be iteratively tuned in order to maximize the log-likelihood of the training data or a regularized criterion, e.g. by adding a weight decay penalty. The feature vectors associated with each word are learned, but they can be initialized using prior knowledge. Why does it work? In the previous example, if we knew that "dog" and "cat" played similar roles (semantically and syntactically), and similarly for ("the", "a"), ("bedroom", "room"), ("is", "was"), ("running", "walking"), we could naturally generalize from "The cat is walking in the bedroom" to "A dog was running in a room" and likewise to many other combinations.
In the proposed model, it will so generalize because "similar" words should have a similar feature vector, and because the probability function is a smooth function of these feature values, so a small change in the features (to obtain similar words) induces a small change in the probability: seeing only one of the above sentences will increase the probability not only of that sentence but also of its combinatorial number of "neighbors" in sentence space (as represented by sequences of feature vectors).
1.2 Relation to Previous Work
The idea of using neural networks to model high-dimensional discrete distributions has already been found useful in [3], where the joint probability of $Z_1 \ldots Z_n$ is decomposed as a product of conditional probabilities: $P(Z_1 = z_1, \ldots, Z_n = z_n) = \prod_i P(Z_i = z_i \mid g_i(z_{i-1}, z_{i-2}, \ldots, z_1))$, where $g(\cdot)$ is a function represented by part of a neural network, and it yields parameters for expressing the distribution of $Z_i$. Experiments on four UCI data sets show this approach to work comparatively very well [3, 2]. The idea of a distributed representation for symbols dates from the early days of connectionism [5]. More recently, Hinton's approach was improved and successfully demonstrated on learning several symbolic relations [9]. The idea of using neural networks for language modeling is not new either, e.g. [8]. In contrast, here we push this idea to a large scale, and concentrate on learning a statistical model of the distribution of word sequences, rather than learning the role of words in a sentence. The proposed approach is also related to previous proposals of character-based text compression using neural networks [11]. Learning a clustering of words [10, 1] is also a way to discover similarities between words.
In the model proposed here, instead of characterizing the similarity with a discrete random or deterministic variable (which corresponds to a soft or hard partition of the set of words), we use a continuous real-valued vector for each word, i.e. a distributed feature vector, to indirectly represent similarity between words. The idea of using a vector-space representation for words has been well exploited in the area of information retrieval (for example see [12]), where vectorial feature vectors for words are learned on the basis of their probability of co-occurring in the same documents (Latent Semantic Indexing [4]). An important difference is that here we look for a representation for words that is helpful in representing compactly the probability distribution of word sequences from natural language text. Experiments indicate that learning jointly the representation (word features) and the model makes a big difference in performance.
2 The Proposed Model: Two Architectures
The training set is a sequence $w_1 \ldots w_T$ of words $w_t \in V$, where the vocabulary $V$ is a large but finite set. The objective is to learn a good model $f(w_t, \ldots, w_{t-n}) = P(w_t \mid w_1^{t-1})$, in the sense that it gives high out-of-sample likelihood. In the experiments, we will report the geometric average of $1/P(w_t \mid w_1^{t-1})$, also known as perplexity, which is also the exponential of the average negative log-likelihood. The only constraint on the model is that for any choice of $w_1^{t-1}$, $\sum_{i=1}^{|V|} f(i, w_{t-1}, \ldots, w_{t-n}) = 1$. By the product of these conditional probabilities, one obtains a model of the joint probability of any sequence of words. The basic form of the model is described here. Refinements to speed it up and extend it will be described in the following sections. We decompose the function $f(w_t, \ldots, w_{t-n}) = P(w_t \mid w_1^{t-1})$ in two parts:
1. A mapping $C$ from any element of $V$ to a real vector $C(i) \in \mathbb{R}^m$. It represents the "distributed feature vector" associated with each word in the vocabulary.
In practice, C is represented by a |V| x m matrix (of free parameters).

2. The probability function over words, expressed with C. We have considered two alternative formulations:

(a) The direct architecture: a function g maps a sequence of feature vectors for words in context (C(w_{t-n}), ..., C(w_{t-1})) to a probability distribution over words in V. It is a vector function whose i-th element estimates the probability P(w_t = i | w_1^{t-1}), as in figure 1: f(i, w_{t-1}, ..., w_{t-n}) = g(i, C(w_{t-1}), ..., C(w_{t-n})). We used the "softmax" in the output layer of a neural net: P(w_t = i | w_1^{t-1}) = e^{h_i} / sum_j e^{h_j}, where h_i is the neural network output score for word i.

(b) The cycling architecture: a function h maps a sequence of feature vectors (C(w_{t-n}), ..., C(w_{t-1}), C(i)) (i.e. including the context words and a candidate next word i) to a scalar h_i, and again using a softmax, P(w_t = i | w_1^{t-1}) = e^{h_i} / sum_j e^{h_j}: f(w_t, w_{t-1}, ..., w_{t-n}) = g(C(w_t), C(w_{t-1}), ..., C(w_{t-n})). We call this architecture "cycling" because one repeatedly runs h (e.g. a neural net), each time putting in input the feature vector C(i) for a candidate next word i.

The function f is a composition of these two mappings (C and g), with C being shared across all the words in the context. To each of these two parts are associated some parameters. The parameters of the mapping C are simply the feature vectors themselves (represented by a |V| x m matrix C whose row i is the feature vector C(i) for word i). The function g may be implemented by a feed-forward or recurrent neural network or another parameterized function, with parameters theta.

Figure 1: "Direct architecture": f(i, w_{t-1}, ..., w_{t-n}) = g(i, C(w_{t-1}), ..., C(w_{t-n})), where g is the neural network and C(i) is the i-th word feature vector. [Diagram: table look-up in C of the feature vectors for w_{t-n}, ..., w_{t-2}, w_{t-1}, feeding a network whose output is P(w_t = i | context), computed only for words in the short list.]
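The direct architecture can be sketched numerically as follows; the sizes, the single tanh hidden layer, and the random initialization are illustrative stand-ins, far smaller than the networks used in the experiments below:

```python
import numpy as np

rng = np.random.default_rng(0)
V, m, n, nh = 50, 8, 3, 16   # vocabulary size, feature dim, context length, hidden units

# Free parameters: the shared word-feature matrix C and the network weights theta.
C  = rng.normal(scale=0.01, size=(V, m))
W1 = rng.normal(scale=0.1, size=(n * m, nh)); b1 = np.zeros(nh)
W2 = rng.normal(scale=0.1, size=(nh, V));     b2 = np.zeros(V)

def predict(context):
    """P(w_t = . | context) for a list of n word indices (direct architecture)."""
    x = C[context].reshape(-1)      # look up and concatenate the n feature vectors
    h = np.tanh(x @ W1 + b1)        # hidden layer
    s = h @ W2 + b2                 # scores h_i for every word i in V
    e = np.exp(s - s.max())         # numerically stable softmax
    return e / e.sum()

p = predict([3, 17, 42])            # distribution over the next word
```

The softmax guarantees the normalization constraint sum_i f(i, context) = 1 by construction; in training, both C and the network weights are adjusted jointly by gradient ascent on the penalized log-likelihood.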
Training is achieved by looking for (theta, C) that maximize the training corpus penalized log-likelihood: L = (1/T) sum_t log p_{w_t}(C(w_{t-n}), ..., C(w_{t-1}); theta) + R(theta, C), where R(theta, C) is a regularization term (e.g. a weight decay lambda ||theta||^2) that penalizes slightly the norm of theta.

3 Speeding-up and Other Tricks

Short list. The main idea is to focus the effort of the neural network on a "short list" of words that have the highest probability. This can save much computation because in both of the proposed architectures the time to compute the probability of the observed next word scales almost linearly with the number of words in the vocabulary (because the scores h_i associated with each word i in the vocabulary must be computed for properly normalizing probabilities with the softmax). The idea of the speed-up trick is the following: instead of computing the actual probability of the next word, the neural network is used to compute the relative probability of the next word within that short list. The choice of the short list depends on the current context (the previous n words). We have used our smoothed trigram model to pre-compute a short list containing the most probable next words associated to the previous two words. The conditional probabilities P(w_t = i | h_t) are thus computed as follows, denoting with h_t the history (context) before w_t, and L_t the short list of words for the prediction of w_t. If i is in L_t then the probability is P_NN(w_t = i | w_t in L_t, h_t) P_trigram(w_t in L_t | h_t); otherwise it is P_trigram(w_t = i | h_t). Here P_NN(w_t = i | w_t in L_t, h_t) are the normalized scores of the words computed by the neural network, where the "softmax" is only normalized over the words in the short list L_t, and P_trigram(w_t in L_t | h_t) = sum_{i in L_t} P_trigram(i | h_t), with P_trigram(i | h_t) standing for the next-word probabilities computed by the smoothed trigram. Note that both L_t and P_trigram(w_t in L_t | h_t) can be pre-computed (and stored in a hash table indexed by the last two words).

Table look-up for recognition.
To speed up application of the trained model, one can pre-compute in a hash table the output of the neural network, at least for the most frequent input contexts. In that case, the neural network will only rarely be called upon, and the average computation time will be very small. Note that in a speech recognition system, one need only compute the relative probabilities of the acoustically ambiguous words in each context, also drastically reducing the amount of computation.

Stochastic gradient descent. Since we have millions of examples, it is important to converge within only a few passes through the data. For very large data sets, stochastic gradient descent convergence time seems to increase sub-linearly with the size of the data set (see the experiments on Brown vs Hansard below). To speed up training using stochastic gradient descent, we have found it useful to break the corpus into paragraphs and to randomly permute them. In this way, some of the non-stationarity in the word stream is eliminated, yielding faster convergence.

Capacity control. For the "smaller" corpora like Brown (1.2 million examples), we have found early stopping and weight decay useful to avoid over-fitting. For the larger corpora, our networks still under-fit. For the larger corpora, we have found double-precision computation to be very important to obtain good results.

Mixture of models. We have found improved performance by combining the probability predictions of the neural network with those of the smoothed trigram, with weights that were conditional on the frequency of the context (the same procedure used to combine trigram, bigram, and unigram in the smoothed trigram).

Initialization of word feature vectors. We have tried both random initialization (uniform between -.01 and .01) and a "smarter" method based on a Singular Value Decomposition (SVD) of a very large matrix of "context features".
These context features are formed by counting the frequency of occurrence of each word in each one of the most frequent contexts (word sequences) in the corpus. The idea is that "similar" words should occur with similar frequency in the same contexts. We used about 9000 most frequent contexts, and compressed these to 30 features with the SVD.

Out-of-vocabulary words. For an out-of-vocabulary word w_t we need to come up with a feature vector in order to predict the words that follow, or to predict its probability (the latter is only possible with the cycling architecture). We used as feature vector the weighted average feature vector of all the words in the short list, with the weights being the relative probabilities of those words: E[C(w_t) | h_t] = sum_i C(i) P(w_t = i | h_t).

4 Experimental Results

Comparative experiments were performed on the Brown and Hansard corpora. The Brown corpus is a stream of 1,181,041 words (from a large variety of English texts and books). The first 800,000 words were used for training, the following 200,000 for validation (model selection, weight decay, early stopping) and the remaining 181,041 for testing. The number of different words is 47,578 (including punctuation, distinguishing between upper and lower case, and including the syntactical marks used to separate texts and paragraphs). Rare words with frequency <= 3 were merged into a single token, reducing the vocabulary size to |V| = 16,383. The Hansard corpus (Canadian parliament proceedings, French version) is a stream of about 34 million words, of which 32 million (set A) were used for training, 1.1 million (set B) for validation, and 1.2 million (set C) for out-of-sample tests. The original data has 106,936 different words, and those with frequency <= 10 were merged into a single token, yielding |V| = 30,959 different words. The benchmark against which the neural network was compared is an interpolated or smoothed trigram model [6].
Let q_t = l(freq(w_{t-1}, w_{t-2})) represent the discretized frequency of occurrence of the context (w_{t-1}, w_{t-2}) (we used l(x) = ceil(-log((1 + x)/T)), where x is the frequency of occurrence of the context and T is the size of the training corpus). A conditional mixture of the trigram, bigram, unigram and zero-gram was learned on the validation set, with mixture weights conditional on discretized frequency. Below are measures of test set perplexity (geometric average of 1/P(w_t | w_1^{t-1})) for different models P. Apparent convergence of the stochastic gradient descent procedure was obtained after around 10 epochs for Hansard and after about 50 epochs for Brown, with a learning rate gradually decreased from approximately 10^-3 to 10^-5. Weight decay of 10^-4 or 10^-5 was used in all the experiments (based on a few experiments compared on the validation set). The main result is that the neural network performs much better than the smoothed trigram. On Brown, the best neural network system according to validation perplexity (among the different architectures tried, see below) yielded a test perplexity of 258, while the smoothed trigram yields a perplexity of 348, which is about 35% worse. This is obtained using a network with the direct architecture mixed with the trigram (conditional mixture), with 30 word features initialized with the SVD method, 40 hidden units, and n = 5 words of context. On Hansard, the corresponding figures are 44.8 for the neural network and 54.1 for the smoothed trigram, which is 20.7% worse. This is obtained with a network with the direct architecture, 100 randomly initialized word features, 120 hidden units, and n = 8 words of context.

More context is useful. Experiments with the cycling architecture on Brown, with 30 word features and 30 hidden units, varying the number of context words: n = 1 (like the bigram) yields a test perplexity of 302, n = 3 yields 291, n = 5 yields 281, n = 8 yields 279 (N.B. the smoothed trigram yields 348).

Hidden units help.
Experiments with the direct architecture on Brown (with direct input-to-output connections), with 30 word features and 5 words of context, varying the number of hidden units: 0 yields a test perplexity of 275, 10 yields 267, 20 yields 266, 40 yields 265, 80 yields 265.

Learning the word features jointly is important. Experiments with the direct architecture on Brown (40 hidden units, 5 words of context), in which the word features initialized with the SVD method are kept fixed during training, yield a test perplexity of 345.8, whereas if the word features are trained jointly with the rest of the parameters, the perplexity is 265.

Initialization not so useful. Experiments on Brown with both architectures reveal that the SVD initialization of the word features does not bring much improvement with respect to random initialization: it speeds up initial convergence (saving about 2 epochs), and yields a perplexity improvement of less than 0.3%.

Direct architecture works a bit better. The direct architecture was found about 2% better than the cycling architecture.

Conditional mixture helps, but even without it the neural net is better. On Brown, the best neural net without the mixture yields a test perplexity of 265, the smoothed trigram yields 348, and their conditional mixture yields 258 (i.e., better than both). On Hansard the improvement is smaller: a neural network yielding 46.7 perplexity, mixed with the trigram (54.1), yields a mixture with perplexity 45.1.

5 Conclusions and Proposed Extensions

The experiments on two corpora, a medium one (1.2 million words) and a large one (34 million words), have shown that the proposed approach yields much better perplexity than a state-of-the-art method, the smoothed trigram, with differences on the order of 20% to 35%.
We believe that the main reason for these improvements is that the proposed approach allows the model to take advantage of the learned distributed representation to fight the curse of dimensionality with its own weapons: each training sentence informs the model about a combinatorial number of other sentences. Note that if we had a separate feature vector for each "context" (short sequence of words), the model would have much more capacity (which could grow like that of n-grams) but it would not naturally generalize between the many different ways a word can be used. A more reasonable alternative would be to explore language units other than words (e.g. some short word sequences, or alternatively some sub-word morphemic units). There is probably much more to be done to improve the model, at the level of architecture, computational efficiency, and taking advantage of prior knowledge. An important priority of future research should be to evaluate and improve the speeding-up tricks proposed here, and to find ways to increase capacity without increasing training time too much (to deal with corpora of hundreds of millions of words). A simple idea to take advantage of temporal structure and extend the size of the input window to possibly a whole paragraph, without increasing the number of parameters too much, is to use a time-delay and possibly recurrent neural network. In such a multi-layered network the computation that has been performed for small groups of consecutive words does not need to be redone when the network input window is shifted. Similarly, one could use a recurrent network to capture potentially even longer-term information about the subject of the text. A very important area in which the proposed model could be improved is in the use of prior linguistic knowledge: semantic (e.g. WordNet), syntactic (e.g. a tagger), and morphological (radices and morphemes). Looking at the word features learned by the model should help understand it and improve it.
Finally, future research should establish how useful the proposed approach will be in applications to speech recognition, language translation, and information retrieval.

Acknowledgments

The authors would like to thank Léon Bottou and Yann Le Cun for useful discussions. This research was made possible by funding from the NSERC granting agency.

References

[1] D. Baker and A. McCallum. Distributional clustering of words for text classification. In SIGIR'98, 1998.
[2] S. Bengio and Y. Bengio. Taking on the curse of dimensionality in joint distributions using neural networks. IEEE Transactions on Neural Networks, special issue on Data Mining and Knowledge Discovery, 11(3):550-557, 2000.
[3] Yoshua Bengio and Samy Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 400-406. MIT Press, 2000.
[4] S. Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391-407, 1990.
[5] G. E. Hinton. Learning distributed representations of concepts. In Proceedings of the Eighth Annual Conference of the Cognitive Science Society, pages 1-12, Amherst, 1986. Lawrence Erlbaum, Hillsdale.
[6] F. Jelinek and R. L. Mercer. Interpolated estimation of Markov source parameters from sparse data. In E. S. Gelsema and L. N. Kanal, editors, Pattern Recognition in Practice. North-Holland, Amsterdam, 1980.
[7] Slava M. Katz. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-35(3):400-401, March 1987.
[8] R. Miikkulainen and M. G. Dyer. Natural language processing with modular neural networks and distributed lexicon. Cognitive Science, 15:343-399, 1991.
[9] A. Paccanaro and G. E. Hinton.
Extracting distributed representations of concepts and relations from positive and negative propositions. In Proceedings of the International Joint Conference on Neural Networks, IJCNN'2000, Como, Italy, 2000. IEEE, New York.
[10] F. Pereira, N. Tishby, and L. Lee. Distributional clustering of English words. In 30th Annual Meeting of the Association for Computational Linguistics, pages 183-190, Columbus, Ohio, 1993.
[11] Jürgen Schmidhuber. Sequential neural text compression. IEEE Transactions on Neural Networks, 7(1):142-146, 1996.
[12] H. Schütze. Word space. In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, Advances in Neural Information Processing Systems 5, pages 895-902, San Mateo, CA, 1993. Morgan Kaufmann.
Support Vector Novelty Detection Applied to Jet Engine Vibration Spectra

Paul Hayton, Department of Engineering Science, University of Oxford, UK, pmh@robots.ox.ac.uk
Lionel Tarassenko, Department of Engineering Science, University of Oxford, UK, lionel@robots.ox.ac.uk
Bernhard Schölkopf, Microsoft Research, 1 Guildhall Street, Cambridge, UK, bsc@scientist.com
Paul Anuzis, Rolls-Royce Civil Aero-Engines, Derby, UK

Abstract

A system has been developed to extract diagnostic information from jet engine carcass vibration data. Support Vector Machines applied to novelty detection provide a measure of how unusual the shape of a vibration signature is, by learning a representation of normality. We describe a novel method for Support Vector Machines of including information from a second class for novelty detection, and give results from its application to jet engine vibration analysis.

1 Introduction

Jet engines undergo a number of rigorous pass-off tests before they can be delivered to the customer. The main test is a vibration test over the full range of operating speeds. Vibration gauges are attached to the casing of the engine and the speed of each shaft is measured using a tachometer. The engine on the test bed is slowly accelerated from idle to full speed and then gradually decelerated back to idle. As the engine accelerates, the rotation frequency of the two (or three) shafts increases and so does the frequency of the vibrations caused by the shafts. A tracked order is the amplitude of the vibration signal in a narrow frequency band centered on a harmonic of the rotation frequency of a shaft, measured as a function of engine speed. It tracks the frequency response of the engine to the energy injected by the rotating shaft. Although there are usually some harmonics present, most of the energy in the vibration spectrum is concentrated in the fundamental tracked orders. These therefore constitute the "vibration signature" of the jet engine under test.
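As a rough illustration (not Rolls-Royce's actual processing), a tracked order can be sketched as the peak spectral amplitude in a narrow band around a harmonic of the shaft frequency, taken at each engine speed; the band width, peak-picking, and synthetic spectra below are all assumptions:

```python
import numpy as np

def tracked_order(speeds_hz, spectra, freqs, harmonic=1, bandwidth=2.0):
    """Toy tracked-order extraction: for each shaft speed, take the peak
    amplitude in a narrow band around a harmonic of the rotation frequency."""
    amps = []
    for f0, spec in zip(speeds_hz, spectra):
        centre = harmonic * f0
        band = (freqs > centre - bandwidth) & (freqs < centre + bandwidth)
        amps.append(spec[band].max())
    return np.array(amps)

freqs = np.linspace(0.0, 200.0, 2001)    # spectral bins, Hz
speeds = np.linspace(20.0, 100.0, 50)    # shaft speeds during the test, Hz
# Synthetic spectra with energy concentrated at the fundamental:
spectra = [np.exp(-((freqs - f0) ** 2)) for f0 in speeds]
t1 = tracked_order(speeds, spectra, freqs)
```

A weighted average of such a curve over a few speed ranges, as described later, then compresses it into a fixed-length shape vector.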
It is very important to detect departures from the normal or expected shapes of these tracked orders, as this provides very useful diagnostic information (for example, for the identification of out-of-balance conditions). The detection of such abnormalities is ideally suited to the novelty detection paradigm for several reasons. Usually, there are far fewer examples of abnormal shapes than normal ones, and often there may only be a single example of a particular type of abnormality in the available database. More importantly, the engine under test may show up a type of abnormality which has never been seen before but which should not be missed. This is especially important in our current work, where we are adapting the techniques developed for pass-off tests to in-flight monitoring. With novelty detection, we first of all learn a description of normal vibration shapes by including only examples of normal tracked orders in the training data. Abnormal shapes in test engines are subsequently identified by testing for novelty against the description of normality. In our previous work [2], we investigated the vibration spectra of a two-shaft jet engine, the Rolls-Royce Pegasus. In the available database, there were vibration spectra recorded from 52 normal engines (the training data) and from 33 engines with one or more unusual vibration features (the test data). The shape of the tracked orders was encoded as a low-dimensional vector by calculating a weighted average of the vibration amplitude over six different speed ranges (giving an 18-D vector for three tracked orders). With so few engines available, the K-means clustering algorithm (with K = 4) was used to construct a very simple model of normality, following component-wise normalisation of the 18-D vectors.
The novelty of the vibration signature for a test engine was assessed as the shortest distance to one of the kernel centres in the clustering model of normality (each distance being normalised by the width associated with that kernel). When cumulative distributions of novelty scores were plotted both for normal (training) engines and test engines, there was little overlap found between the two distributions [2]. A significant shortcoming of the method, however, is the inability to rank engines according to novelty, since the shortest normalised distance is evaluated with respect to different cluster centres for different engines. In this paper, we revisit the problem, but for a new engine, the RB211-535. We argue that the SVM paradigm is ideal for novelty detection, as it provides an elegant description of normality, a direct indication of the patterns on the boundary of normality (the support vectors) and, perhaps most importantly, a ranking of "abnormality" according to distance to the separating hyperplane in feature space.

2 Support Vector Machines for Novelty Detection

Suppose we are given a set of "normal" data points X = {x_1, ..., x_ℓ}. In most novelty detection problems, this is all we have; however, in the following we shall develop an algorithm that is slightly more general in that it can also take into account some examples of abnormality, Z = {z_1, ..., z_t}. Our goal is to construct a real-valued function which, given a previously unseen test point x, characterizes the "X-ness" of the point x, i.e. which takes large values for points similar to those in X. The algorithm that we shall present below will return such a function, along with a threshold value, such that a prespecified fraction of X will lead to function values above threshold. In this sense we are estimating a region which captures a certain probability mass.
The present approach employs two ideas from support vector machines [6] which are crucial for their fine generalization performance even in high-dimensional tasks: maximizing a margin, and nonlinearly mapping the data into some feature space F endowed with a dot product. The latter need not be the case for the input domain X, which may be a general set. The connection between the input domain and the feature space is established by a feature map Phi : X -> F, i.e. a map such that some simple kernel [1, 6]

k(x, y) = (Phi(x) · Phi(y)),   (1)

such as the Gaussian

k(x, y) = exp(-||x - y||^2 / c),   (2)

provides a dot product in the image of Phi. In practice, we need not necessarily worry about Phi, as long as a given k satisfies certain positivity conditions [6]. As F is a dot product space, we can use tools of linear algebra and geometry to construct algorithms in F, even if the input domain X is discrete. Below, we derive our results in F, using the shorthands (3) and (4). Indices i and j are understood to range over 1, ..., ℓ (in compact notation: i, j in [ℓ]); similarly, n, p in [t]. Boldface Greek letters denote ℓ-dimensional vectors whose components are labelled using normal typeface. In analogy to an algorithm recently proposed for the estimation of a distribution's support [5], we seek to separate X from the centroid of Z with a large-margin hyperplane committing few training errors. Projections on the normal vector of the hyperplane then characterize the "X-ness" of test points, and the area where the decision function takes the value 1 can serve as an approximation of the support of X. While X is the set of normal examples, the (possibly empty) set Z thus only plays the role of, in some weak and possibly imprecise sense, modeling what the unknown "other" examples might look like.
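The Gaussian kernel of eq. (2) is straightforward to evaluate directly; a minimal sketch (the value of c here is arbitrary, for illustration only):

```python
import numpy as np

def gaussian_kernel(x, y, c=40.0):
    """k(x, y) = exp(-||x - y||^2 / c), as in eq. (2); c is the kernel width."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.exp(-np.dot(d, d) / c))

k_same = gaussian_kernel([1.0, 2.0], [1.0, 2.0])   # identical points -> 1
k_far  = gaussian_kernel([0.0, 0.0], [10.0, 0.0])  # distant points -> close to 0
```

Because k(x, x) = 1 and k decays with distance, the kernel behaves as a similarity measure, which is what makes the dot-product interpretation in F useful.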
The decision function is found by minimizing a weighted sum of a support-vector-type regularizer and an empirical error term depending on an overall margin variable rho and individual errors xi_i:

min_{w in F, xi in R^ℓ, rho in R}  (1/2)||w||^2 + (1/(nu ℓ)) sum_i xi_i - rho   (5)

subject to  (w · (x_i - (1/t) sum_n z_n)) >= rho - xi_i,   xi_i >= 0.   (6)

The precise meaning of the parameter nu governing the trade-off between the regularizer and the training error will become clear later. Since nonzero slack variables xi_i are penalized in the objective function, we can expect that if w and rho solve this problem, then the decision function

f(x) = sgn((w · (x - (1/t) sum_n z_n)) - rho)   (7)

will be positive for many examples x_i contained in X, while the SV-type regularization term ||w|| will still be small. This can be shown to correspond to a large margin of separation from (1/t) sum_n z_n. We next compute a dual form of this optimization problem. The details of the calculation, which uses standard techniques of constrained optimization, can be found in [4]. We introduce a Lagrangian and set the derivatives with respect to w equal to zero, yielding in particular

w = sum_i alpha_i (x_i - (1/t) sum_n z_n).   (8)

All patterns {x_i : i in [ℓ], alpha_i > 0} are called Support Vectors. The expansion (8) turns the decision function (7) into a form which only depends on dot products: f(x) = sgn((sum_i alpha_i (x_i - (1/t) sum_n z_n) · (x - (1/t) sum_n z_n)) - rho). By multiplying out the dot products, we obtain a form that can be written as a nonlinear decision function on the input domain X in terms of a kernel (1) (cf. (3)). A short calculation yields

f(x) = sgn(sum_i alpha_i k(x_i, x) - (1/t) sum_n k(z_n, x) + (1/t^2) sum_{n,p} k(z_n, z_p) - (1/t) sum_{i,n} alpha_i k(z_n, x_i) - rho).

In the argument of the sgn, only the first two terms depend on x; therefore we may absorb the remaining terms into the constant rho, which we have not fixed yet. To compute rho in the final form of the decision function (9), we employ the Karush-Kuhn-Tucker (KKT) conditions of the optimization problem [6, e.g.].
They state that for points x_i where 0 < alpha_i < 1/(nu ℓ), the inequality constraints (6) become equalities (note that in general, alpha_i in [0, 1/(nu ℓ)]), and the argument of the sgn in the decision function should equal 0, i.e. the corresponding x_i sits exactly on the hyperplane of separation. The KKT conditions also imply that only those points x_i can have a nonzero alpha_i for which the first inequality constraint in (6) is precisely met; therefore the support vectors x_i with alpha_i > 0 will often form but a small subset of X. Substituting (8) (the derivative of the Lagrangian by w) and the corresponding conditions for xi and rho into the Lagrangian, we can eliminate the primal variables to get the dual problem. A short calculation shows that it consists of minimizing the quadratic form

W(alpha) = (1/2) sum_{i,j} alpha_i alpha_j (k(x_i, x_j) + q - q_j - q_i),   (10)

where q = (1/t^2) sum_{n,p} k(z_n, z_p) and q_j = (1/t) sum_n k(x_j, z_n), subject to the constraints

0 <= alpha_i <= 1/(nu ℓ),   sum_i alpha_i = 1.   (11)

This convex quadratic program can be solved with standard quadratic programming tools. Alternatively, one can employ the SMO algorithm described in [3], which was found to scale approximately quadratically with the training set size. To illustrate the idea presented in this section, figure 1 shows a 2D example of separating the data from the mean of another data set in feature space.

Figure 1: Separating one class of data from the mean of a second data set. The first class is a mixture of three Gaussians; the SVM algorithm is used to find the hyperplane in feature space that separates the data from the second set (another Gaussian - the black dots). The image intensity represents the SVM output value, which is the measure of novelty.

We next state a few theoretical results, beginning with a characterization of the influence of nu. To this end, first note that the constraints (11) rule out solutions where nu > 1, as in that case the alpha_i cannot sum up to 1.
Negative values of nu are ruled out, too, since they would amount to encouraging (rather than penalizing) training errors in (5). Therefore, in the primal problem (5) only nu in (0, 1] makes sense. We shall now explain that nu actually characterizes how many points of X are allowed to lie outside the region where the decision function is positive. To this end, we introduce the term outlier to denote points x_i that have a nonzero slack variable xi_i, i.e. points that lie outside of the estimated region. By the KKT conditions, all outliers are also support vectors; however, there can be support vectors (sitting exactly on the margin) that are not outliers.

Proposition 1 (nu-property) Assume the solution of (5) satisfies rho != 0. The following statements hold: (i) nu is an upper bound on the fraction of outliers. (ii) nu is a lower bound on the fraction of SVs. (iii) Suppose the data (4) were generated independently from a distribution P(x) which does not contain discrete components. Suppose, moreover, that the kernel is analytic and non-constant. With probability 1, asymptotically, nu equals both the fraction of SVs and the fraction of outliers.

The proof can be found in [4]. We next state another desirable theoretical result:

Proposition 2 (Resistance [3]) Local movements of outliers parallel to w do not change the hyperplane.

Essentially, this result is due to the fact that the errors xi_i enter the objective function only linearly. To determine the hyperplane, we need to find the (constrained) extremum of the objective function, and in finding the extremum, the derivatives are what count. For the linear error term, however, those are constant, so they do not depend on how far away from the hyperplane an error point lies. We conclude this section by noting that if Z is empty, the algorithm is trying to separate the data from the origin in F, and both the decision function and the optimization problem reduce to what is described in [5].
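For the empty-Z special case, the resulting one-class formulation is available in standard libraries, which makes the nu-property easy to check empirically; the data, nu, and kernel width below are arbitrary choices for illustration, not the engine data:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))     # synthetic stand-in for 10-D shape vectors

nu = 0.2
# sklearn's rbf kernel is exp(-gamma * ||x - y||^2), so gamma = 1/c in eq. (2).
clf = OneClassSVM(kernel="rbf", gamma=1.0 / 40.0, nu=nu).fit(X)

outlier_frac = float(np.mean(clf.predict(X) == -1))  # points outside the region
sv_frac = len(clf.support_) / len(X)                 # fraction of support vectors
# Proposition 1: outlier_frac <= nu <= sv_frac (up to solver tolerance).
```

The novelty ranking discussed in the next section corresponds to `clf.decision_function`, the signed distance to the separating hyperplane in feature space.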
3 Application of SVM to Jet Engine Pass-off Tests

The Support Vector Machine algorithm for novelty detection is applied to the pass-off data from a set of 162 Rolls-Royce jet engines. The shape of the tracked order of interest is encoded by calculating a weighted average of the vibration amplitude over ten speed ranges, thereby generating a 10-D shape vector. The available data was split into the following three sets:

• 99 normal engines to be used as training data;
• 40 normal engines to be used as validation data;
• 23 engines labelled as having at least one abnormal aspect in their vibration signature (the "test" data).

Using the training dataset, the SVM algorithm finds the hyperplane that separates the normal data from the origin in feature space with the largest margin. The number of support vectors gives an indication of how well the algorithm is generalising (if all data points were support vectors, the algorithm would have memorized the data). A Gaussian kernel was used with a width c = 40.0 in equation (2), which was chosen by starting with a small kernel width (so that the algorithm memorizes the data), increasing the width, and stopping when similar results are obtained on the training and validation data. Cumulative novelty distributions are plotted for two different values of nu and these are shown in figure 2. The curves show a slight overlap between the normal and test engines. Although it is not given here, a ranking of the engines according to their novelty is also provided to the Rolls-Royce test engineers.

Figure 2: Cumulative novelty distributions (number of engines vs. novelty) for two different values of nu: (a) nu = 0.1, (b) nu = 0.2. The curves show that there is a slight overlap in the data; for nu = 0.1, there are 11 validation engines over the SVM decision boundary and 2 test engines inside the boundary.

Separating the Normal Engines from the Test Engines.
In a retrospective analysis such as described in this paper (for which the test engines with unusual vibration signatures have already been identified as such by the Rolls-Royce experts), the SVM algorithm can be re-run to find the hyperplane that separates the normal data from the mean of the test data in feature space with the largest margin (instead of separating from the origin). The algorithm is trained on the 99 training engines and 22 of the 23 test engines. Each test engine is left out in turn and the algorithm re-trained to compute its novelty. Cumulative distributions are again plotted (see figure 3) and these show an improved separation between the two sets of engines. It should be noted, however, that the improvement is less for the validation engines than for the training engines. Nevertheless, there is an improvement for the validation engines, seen from the higher intersection of the distribution with the axis.

Figure 3: Cumulative novelty distributions (number of engines vs. novelty) showing the variation of novelty with number of engines for (a) the training data versus the test data (each test engine omitted from the training phase in turn to compute its novelty) and (b) the validation data versus the test data.

4 Discussion

This paper has presented a novel application of Support Vector Machines and introduced a method for including information from a second data set when considering novelty detection. The results on the jet engine data show very good separation between normal and test engines. We believe Support Vector Machines are an ideal framework for novelty detection and indeed, we have obtained better results than with our previous clustering-based algorithms for detecting novel jet engine signatures. The present work builds on a previous algorithm for estimating a distribution's support [5].
That algorithm, separating the data from the origin in feature space, suffered from the drawback that the origin played a special role. One way to think of it is as a prior on where, in a novelty detection context, the unknown "other" class lies. The present work alleviates this problem by allowing for the possibility of separating from a point inferred from the data, either from the same class or from some other data. There is a concern that one could put forward about one of the variants of the presently proposed approach, namely the case where X and Z are disjoint, and we are separating X from Z's centroid: why not actually train a full binary classifier separating X from all examples from Z, rather than just from its mean? Indeed, there might be situations where this is appropriate. More specifically, whenever Z is representative of the instances of the other class that we expect to see in the future, then a binary classification is certainly preferable. However, there can be situations where Z is not representative of the other class, for instance due to nonstationarity. Z may even consist only of artificial examples. In this situation, the only real training examples are the positive ones. In this case, separating the data from the mean of some artificial, or non-representative, examples provides a way of taking into account some information from the other class which might work better than simply separating the positive data from the origin. The philosophy behind our approach is the one advocated by [6]: if you are trying to solve a learning problem, do it directly, rather than solving a more general problem along the way. Applied to the estimation of a distribution's support, this means: do not first estimate a density and then threshold it to get an estimate of the support.

Acknowledgments. Thanks to John Platt, John Shawe-Taylor, Alex Smola and Bob Williamson for helpful discussions.

References

[1] B. E. Boser, I. M. Guyon, and V. N. Vapnik.
A training algorithm for optimal margin classifiers. In D. Haussler, editor, Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, pages 144–152, Pittsburgh, PA, July 1992. ACM Press.
[2] A. Nairac, N. Townsend, R. Carr, S. King, P. Cowley, and L. Tarassenko. A system for the analysis of jet engine vibration data. Integrated Computer-Aided Engineering, 6:53–65, 1999.
[3] B. Schölkopf, J. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. Estimating the support of a high-dimensional distribution. TR MSR 99-87, Microsoft Research, Redmond, WA, 1999.
[4] B. Schölkopf, J. Platt, and A. J. Smola. Kernel method for percentile feature extraction. TR MSR 2000-22, Microsoft Research, Redmond, WA, 2000.
[5] B. Schölkopf, R. C. Williamson, A. J. Smola, J. Shawe-Taylor, and J. C. Platt. Support vector method for novelty detection. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 582–588. MIT Press, 2000.
[6] V. Vapnik. The Nature of Statistical Learning Theory. Springer, N.Y., 1995.
'N-Body' Problems in Statistical Learning

Alexander G. Gray
Department of Computer Science
Carnegie Mellon University
agray@cs.cmu.edu

Andrew W. Moore
Robotics Inst. and Dept. of Comp. Sci.
Carnegie Mellon University
awm@cs.cmu.edu

Abstract

We present efficient algorithms for all-point-pairs problems, or 'N-body'-like problems, which are ubiquitous in statistical learning. We focus on six examples, including nearest-neighbor classification, kernel density estimation, outlier detection, and the two-point correlation. These include any problem which abstractly requires a comparison of each of the N points in a dataset with each other point and would naively be solved using N^2 distance computations. In practice N is often large enough to make this infeasible. We present a suite of new geometric techniques which are applicable in principle to any 'N-body' computation, including large-scale mixtures of Gaussians, RBF neural networks, and HMMs. Our algorithms exhibit favorable asymptotic scaling and are empirically several orders of magnitude faster than the naive computation, even for small datasets. We are aware of no exact algorithms for these problems which are more efficient either empirically or theoretically. In addition, our framework yields simple and elegant algorithms. It also permits two important generalizations beyond the standard all-point-pairs problems, which are more difficult. These are represented by our final examples, the multiple two-point correlation and the notorious n-point correlation.

1 Introduction

This paper is about accelerating a wide class of statistical methods that are naively quadratic in the number of datapoints.1 We introduce a family of dual kd-tree traversal algorithms for these problems.
They are the statistical siblings of powerful state-of-the-art N-body simulation algorithms [1, 4] of computational physics, but the computations within statistical learning present new opportunities for acceleration and require techniques more general than those which have been exploited for the special case of potential-based problems involving forces or charges. We describe in detail a dual-tree algorithm for calculating the two-point correlation, the simplest case of the problems we consider; for the five other statistical problems we consider, we show only performance results for lack of space. The last of our examples, the n-point correlation, illustrates a generalization from all-point-pairs problems to all-n-tuples problems, which are much harder (naively O(N^n)). For all the examples, we believe there exist no exact algorithms which are faster either empirically or theoretically, nor any approximate algorithms that are faster while providing guarantees of acceptably high accuracy (as ours do). For n-tuple N-body problems in particular, this type of algorithm design appears to have surpassed the existing computational barriers. In addition, all the algorithms in this paper can be compactly defined and are easy to implement.

1 In the general case, when we are computing distances between two different datasets having sizes N1 and N2, as in nearest-neighbor classification with separate training and test sets, say, the cost is O(N1 N2).

Figure 1: A kd-tree. (a) Nodes at level 3. (b) Nodes at level 5. The dots are the individual data points. The sizes and positions of the disks show the node counts and centroids. The ellipses and rectangles show the covariances and bounding boxes. (c) The rectangles show the nodes pruned during a RangeSearch for one (depicted) query and radius. (d) More pruning is possible using RangeCount instead of RangeSearch.

Statistics and geometry.
We proceed by viewing these statistical problems as geometric problems, exploiting the data's hyperstructure. Each algorithm utilizes multiresolution kd-trees, providing a geometric partitioning of the data space which is used to reason about entire chunks of the data simultaneously.

A review of kd-trees and mrkd-trees. A kd-tree [3] records a d-dimensional data set containing N records. Each node represents a set of data points by their bounding box. Non-leaf nodes have two children, obtained by splitting the widest dimension of the parent's bounding box. For the purposes of this paper, nodes are split until they contain only one point, where they become leaves. An mrkd-tree [2, 6] is a conventional kd-tree decorated, at each node, with extra statistics about the node's data, such as their count, centroid, and covariance. They are an instance of the idea of cached sufficient statistics [8] and are quite efficient in practice.2 See Figure 1.

2 The 2-point correlation function

The two-point correlation is a spatial statistic which is of fundamental importance in many natural sciences, in particular astrophysics and biology. It can be thought of roughly as a measure of the clumpiness of a set of points. It is easily defined as the number of pairs of points in a dataset which lie within a given radius r of each other.

2.1 Previous approaches

Quadratic algorithm. The most naive approach is to simply compare each datum to each other one, incrementing a count if the distance between them is less than r. This has O(N^2) cost, unacceptably high for problems of practical interest.

2 mrkd-trees can be built quickly, in time O(dN log N + d^2 N). Although we have not needed to do so, they can be modified to become disk-resident for data sets with billions of records, and they can be efficiently updated incrementally. They scale poorly to higher dimensions but recent work [7] significantly remedies the dimensionality problem.

Binning and gridding algorithms.
The schemes in widespread use [12, 13] are mainly of this sort. The idea of binning is simply to divide the data space into a fine grid defining a set of bins, perform the quadratic algorithm on the bins as if they were individual data, then multiply by the bin sizes as appropriate to get an estimate of the total count. The idea of gridding is to divide the data space into a coarse grid, perform the quadratic algorithm within each bin, and sum the results over all bins to get an estimate of the total count. These are both, of course, very approximate methods yielding large errors. They are not usable when r is small or r is large, respectively.

Range-searching with a kd-tree. An approach to the two-point correlation computation that has been taken is to treat it as a range-searching problem [5, 10], since kd-trees have been historically almost synonymous with range-searching. The idea is that we will make each datapoint in turn a query point and then execute a range search of the kd-tree to find all other points within distance r of the query. A search is a depth-first traversal of the kd-tree, always checking the minimum possible distance dmin between the query and the hyper-rectangle surrounding the current node. If dmin > r there is no point in visiting the node's children, and computation is saved. We call this exclusion-based pruning. The range searching avoids computing most of the distances between pairs of points further than r apart, which is a considerable saving if r is small. But is it the best we can do? And what if r is large? We now propose several layers of new approaches.

2.2 Better geometric approaches: new algorithms

Single-tree search (Range-Counting Algorithm). A straightforward extension can exploit the fact that, unlike conventional use of range searching, these statistics frequently don't need to retrieve all the points in the radius but merely to count them.
The mrkd-tree has, in each node, the count of the number of data it contains: the simplest kind of cached sufficient statistic. At a given node, if the distance between the query and the farthest point of the bounding box of the data in the node is smaller than the radius r, clearly every datum in the node is within range of the query. We can then simply add the node's stored count to the total count. We call this subsumption.3 (Note that both exclusion and subsumption are simple computations because the geometric regions are always axis-parallel rectangles.) This paper introduces new single-tree algorithms for most of our examples, though it is not our main focus.

Dual-tree search. This is the primary topic of this paper. The idea is to consider the query points in chunks as well, as defined by nodes in a kd-tree. In the general case where the query points are different from the data being queried, a separate kd-tree is built for the query points; otherwise a query node and a data node are simply pointers into the same kd-tree. Dual-tree search can be thought of as a simultaneous traversal of two trees, instead of iterating over the query points in an outer loop and only exploiting single-tree search in the inner loop. Dual-tree search is based on node-node comparisons, while Single-tree search was based on point-node comparisons. Pseudocode for a recursive procedure called TwoPoint() is shown in Figure 2. It counts the number of pairs of points (x_q ∈ QNODE, x_d ∈ DNODE) such that |x_q − x_d| < r. Before doing any real work, the procedure checks whether it can perform an exclusion pruning (in which case the call terminates, returning 0) or subsumption pruning (in which case the call terminates, returning the product of the number of points in the two nodes). If neither of these prunes occurs, then depending on whether QNODE and/or DNODE are leaves, the corresponding recursive calls are made.
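A runnable 1-D transcription of this procedure is sketched below, under simplifying assumptions (points on a line, median splits, nodes stored as (lo, hi, count, left, right) tuples with cached counts): exclusion and subsumption are tested from the interval bounds exactly as described, and the result can be checked against the quadratic algorithm.

```python
import random

def build(xs):
    # (lo, hi, count, left, right) over a sorted 1-D slice; leaves hold one point.
    if len(xs) == 1:
        return (xs[0], xs[0], 1, None, None)
    mid = len(xs) // 2
    l, r = build(xs[:mid]), build(xs[mid:])
    return (l[0], r[1], len(xs), l, r)

def two_point(q, d, r):
    # Sketch of the TwoPoint() procedure: node-node comparisons with
    # exclusion and subsumption pruning, then the four-way recursion.
    dmin = max(d[0] - q[1], q[0] - d[1], 0.0)   # closest node-node distance
    dmax = max(d[1] - q[0], q[1] - d[0])        # farthest node-node distance
    if dmin > r:                                # exclusion: prune, count nothing
        return 0
    if dmax < r:                                # subsumption: count both nodes at once
        return q[2] * d[2]
    qleaf, dleaf = q[3] is None, d[3] is None
    if qleaf and dleaf:
        return 1 if abs(q[0] - d[0]) < r else 0
    if qleaf:
        return two_point(q, d[3], r) + two_point(q, d[4], r)
    if dleaf:
        return two_point(q[3], d, r) + two_point(q[4], d, r)
    return (two_point(q[3], d[3], r) + two_point(q[3], d[4], r) +
            two_point(q[4], d[3], r) + two_point(q[4], d[4], r))

random.seed(4)
xs = sorted(random.random() for _ in range(300))
root = build(xs)
pairs = two_point(root, root, 0.05)
brute = sum(1 for a in xs for b in xs if abs(a - b) < 0.05)
```

As in the text, query and data nodes here are pointers into the same tree, and the count includes ordered pairs (the quadratic check counts them the same way), so the two totals agree exactly.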
3 Subsumption can also be exploited when other aggregate statistics, such as centroids or covariances of sets of points in a range, are required [2, 14, 9].

TwoPoint(QNODE, DNODE, r)
  if excludes(QNODE, DNODE, r), return
  if subsumes(QNODE, DNODE, r)
    total = total + (count(QNODE) × count(DNODE)); return
  if leaf(QNODE) and leaf(DNODE)
    if distance(QNODE, DNODE) < r, total = total + 1
  if leaf(QNODE) and notleaf(DNODE)
    TwoPoint(QNODE, leftchild(DNODE), r)
    TwoPoint(QNODE, rightchild(DNODE), r)
  if notleaf(QNODE) and leaf(DNODE)
    TwoPoint(leftchild(QNODE), DNODE, r)
    TwoPoint(rightchild(QNODE), DNODE, r)
  if notleaf(QNODE) and notleaf(DNODE)
    TwoPoint(leftchild(QNODE), leftchild(DNODE), r)
    TwoPoint(leftchild(QNODE), rightchild(DNODE), r)
    TwoPoint(rightchild(QNODE), leftchild(DNODE), r)
    TwoPoint(rightchild(QNODE), rightchild(DNODE), r)

Figure 2: A recursive Dual-tree code. All the reported algorithms have a similar brevity.

Importantly, both kinds of prunings can now apply to many query points at once, instead of each nearby query point rediscovering the same prune during the Single-tree search. The intuition behind Dual-tree's advantage can be seen by considering two cases. First, if r is so large that all pairs of points are counted, then the Single-tree search will perform O(N) operations, where each query point immediately prunes at the root, while Dual-tree search will perform O(1) operations. Second, if r is so small that no pairs of points are counted, Single-tree search will run to one leaf for each query, meaning total work O(N log N), whereas Dual-tree search will visit each leaf once, meaning O(N) work. Note, however, that in the middle case of a medium-size r, Dual-tree is theoretically only a constant-factor superior to Single-tree.4

Non-redundant dual-tree search. So far, we have discussed two operations which cut short the need to traverse the tree further: exclusion and subsumption.
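For comparison with the dual-tree pseudocode of Figure 2, the same two operations appear in single-tree form as point-node comparisons. The 1-D sketch below is illustrative (the node layout and helper names are assumptions, not the paper's implementation): exclusion prunes when dmin > r, subsumption adds the cached count when dmax < r.

```python
import random

def build(xs):
    # (lo, hi, count, left, right) over a sorted 1-D slice; leaves hold one point.
    if len(xs) == 1:
        return (xs[0], xs[0], 1, None, None)
    mid = len(xs) // 2
    l, r = build(xs[:mid]), build(xs[mid:])
    return (l[0], r[1], len(xs), l, r)

def range_count(node, q, r):
    lo, hi, count, left, right = node
    dmin = max(lo - q, q - hi, 0.0)   # closest possible point in the box
    dmax = max(q - lo, hi - q)        # farthest possible point in the box
    if dmin > r:                      # exclusion: prune, count nothing
        return 0
    if dmax < r:                      # subsumption: add the cached count at once
        return count
    if left is None:                  # leaf on the boundary: test the point
        return 1 if abs(q - lo) < r else 0
    return range_count(left, q, r) + range_count(right, q, r)

random.seed(3)
xs = sorted(random.random() for _ in range(200))
root = build(xs)
q, r = 0.5, 0.1
fast = range_count(root, q, r)
brute = sum(1 for x in xs if abs(x - q) < r)
```

Each nearby query point must rediscover these prunes on its own, which is precisely the redundancy that the dual-tree's node-node comparisons remove.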
Another form of pruning is to eliminate node-node comparisons which have been performed already in the reverse order. This can be done [11] simply by (virtually) ranking the datapoints according to their position in a depth-first traversal of the tree, then recording for each node the minimum and maximum ranks of the points it owns, and pruning whenever QNODE's maximum rank is less than DNODE's minimum rank. This is useful for all-pairs problems, but becomes essential for all-n-tuples problems. This kind of pruning is not practical for Single-tree search. Figure 3 shows the performance of a two-point correlation algorithm using all the aforementioned pruning methods.

Multiple radii simultaneously. Most often in practice, the two-point is computed for many successive radii so that a curve can be plotted, indicating the clumpiness on different scales. Though the method presented so far is fast, it may have to be run once for each of, say, 1,000 radii. It is possible to perform a single, faster computation for all the radii simultaneously, by taking advantage of the nesting structure of the ordered radii, with an algorithm which recursively narrows the radii which still need to be considered based on the current closest and farthest distances between the nodes. The details are omitted for space, regrettably.

4 We'll summarize the asymptotic analysis briefly. If the data is uniformly distributed in d-dimensional space, the cost of computing the n-point correlation function on a dataset with N points using the Dual-tree (n-tree) algorithm is O(N^α_nd), where α_nd is the dimensionality of the manifold of n-tuples that are just on the border between being matched and not matched; it is given by α_nd = n′(1 − (n′d)⁻¹), where n′ = min(n, d). For example, the 2-point correlation function in two dimensions is O(N^{3/2}), considerably better than the O(N^2) naive algorithm. Disappointingly, for 2-point, this performance is asymptotically the same cost as Single-tree. For n > 2 our algorithm is better. Furthermore, if we can accept an approximate answer, the cost is O((α_nd/ε)^{α_nd/(n−α_nd)}), which is independent of N.

Algorithm | # Data  | Quadratic   | Single-tree | Dual-tree | ST Speedup | DT Speedup
twopoint  | 10,000  | 132         | 2.2         | 1.2       | 60         | 110
twopoint  | 50,000  | 3300 est.   | 11.8        | 7.0       | 280        | 471
twopoint  | 150,000 | 30899 est.  | 37          | 20        | 835        | 1545
twopoint  | 300,000 | 123599 est. | 76          | 40        | 1626       | 3090
nearest   | 10,000  | 139         | 2.0         | 1.4       | 70         | 99
nearest   | 20,000  | 556 est.    | 11.6        | 9.8       | 48         | 57
nearest   | 50,000  | 3475 est.   | 30.6        | 26.4      | 114        | 132
outliers  | 10,000  | 141         | 2.3         | 1.2       | 61         | 118
outliers  | 50,000  | 3525 est.   | 12          | 6.5       | 294        | 542
outliers  | 150,000 | 33006 est.  | 36          | 21        | 917        | 1572
outliers  | 300,000 | 132026 est. | 72          | 44        | 1834       | 3001

Figure 3: Our experiments timed our algorithms on large astronomical datasets of current scientific interest, consisting of x-y positions of sky objects from the Sloan Digital Sky Survey. All times are given in seconds, and runs were performed on a Pentium III 500 MHz Linux workstation. The larger runtimes for the quadratic algorithm were estimated based on those for smaller datasets. The dual kd-tree method is about a factor of 2 faster than the single kd-tree method, and both are 3 orders of magnitude faster than the quadratic method for a medium-sized dataset of 300,000 points.

(a) # Data  | 1 radius | 100 radii | 1000 radii | Speedup
    10,000  | 1.2      | 1.8       | 2.4        | 500
    20,000  | 2.8      | 6.4       | 6.6        | 424
    50,000  | 7.0      | 31        | 31         | 226
    150,000 | 20       | 133       | 146        | 137

(b) # Data  | Quadratic   | approx. (larger ε) | approx. (smaller ε) | Speedup
    10,000  | 226         | 1.2                | 3.0                 | 188
    50,000  | 5650 est.   | 10.4               | 16.8                | 543
    150,000 | 50850 est.  | 32                 | 65                  | 1589
    300,000 | 203400 est. | 73                 | 151                 | 2786

Figure 4: (a) Runtimes for multiple 2-point correlation with increasing number of radii, and the speedup factor compared to 1,000 separate Dual-tree 2-point correlations. (b) Runtimes for kernel density estimation with decreasing levels of approximation, controlled by parameter ε, and speedup over quadratic.
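Assuming the exponent formula of footnote 4 is α_nd = n′(1 − (n′d)⁻¹) with n′ = min(n, d) (an assumption; it does reproduce the paper's O(N^{3/2}) example for the 2-point correlation in two dimensions), the worked example is easy to check numerically:

```python
def alpha(n, d):
    # Cost exponent of the n-tree algorithm on uniform d-dimensional data,
    # assuming the formula alpha_nd = n'(1 - 1/(n'd)) with n' = min(n, d).
    n_prime = min(n, d)
    return n_prime * (1 - 1 / (n_prime * d))

# 2-point correlation in two dimensions: O(N**1.5) versus the naive O(N**2).
exponent = alpha(2, 2)
```

The exponent grows toward n′ as the dimension increases, consistent with the remark that the dual-tree advantage over the quadratic algorithm is largest for small d.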
The results in Figure 4 confirm that the algorithm quickly focuses on the radii of relevance: for 150,000 data, computing 1,000 2-point correlations took only 7 times as long as computing one.

3 Kernel density estimation

Approximation accelerations. A fourth major type of pruning opportunity is approximation. This is often needed in all-point-pairs computations which involve computing some real-valued function f(x, y) between every pair of points x and y. An example is kernel density estimation with an infinite-tailed kernel such as a Gaussian, in which every training point has some non-zero (though perhaps infinitesimal) contribution to the density at each test point. For each query point x_q we need to accumulate K Σ_i w(|x_q − x_i|), where K is a normalizing constant and w is a weighting function (which we will need to assume is monotonic). A recursive call of the Dual-tree implementation has the following job: for x_q ∈ QNODE, compute the contribution to x_q's summed weights that are due to all points in DNODE. Once again, before doing any real work we use simple rectangle geometry to compute the shortest and farthest possible distances between any (x_q, x_d) pair. This bounds the minimum and maximum possible values of K w(|x_q − x_d|). If these bounds are tight enough (according to an approximation parameter ε) we prune by simply distributing the midpoint weight to all the points in QNODE.

Figure 5: (a) Runtimes for approximate n-point correlation with ε = 0.02 and 20,000 data. (b) Runtimes for approximate 4-point with ε = 0.02 and increasing data size. (c) Runtimes for exact n-point, run on 2,000 datapoints of galaxies in d-dimensional color space.
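The bound-based pruning for kernel sums can be sketched in 1-D for a single query point (the single-tree special case): the weight is monotone in distance, so the interval bounds give weight bounds, and once those are within ε the node's whole contribution is approximated by its midpoint weight. The tree layout, bandwidth, and ε semantics below are illustrative assumptions, and the normalizing constant K is omitted.

```python
import math, random

def build(xs):
    # (lo, hi, count, left, right) over a sorted 1-D slice; leaves hold one point.
    if len(xs) == 1:
        return (xs[0], xs[0], 1, None, None)
    mid = len(xs) // 2
    l, r = build(xs[:mid]), build(xs[mid:])
    return (l[0], r[1], len(xs), l, r)

def kernel_sum(node, x, h, eps):
    # Sum of Gaussian weights exp(-d^2 / 2h^2) from query x to all points in
    # the node, pruning with the midpoint weight once the bounds are tight.
    lo, hi, count, left, right = node
    dmin = max(lo - x, x - hi, 0.0)
    dmax = max(x - lo, hi - x)
    wmax = math.exp(-dmin * dmin / (2 * h * h))  # weight is monotone in distance
    wmin = math.exp(-dmax * dmax / (2 * h * h))
    if wmax - wmin < eps:                        # bounds tight enough: prune
        return count * (wmax + wmin) / 2
    return kernel_sum(left, x, h, eps) + kernel_sum(right, x, h, eps)

random.seed(5)
xs = sorted(random.random() for _ in range(400))
root = build(xs)
x, h, eps = 0.3, 0.05, 0.001
approx = kernel_sum(root, x, h, eps)
exact = sum(math.exp(-(x - p) ** 2 / (2 * h * h)) for p in xs)
```

Each point's weight is replaced by a value within (wmax − wmin)/2 < ε/2 of the truth, so the total error is bounded by Nε/2; the dual-tree version amortizes the same prune over a whole query node.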
4 The n-point correlation, for n > 2

The n-point correlation is the generalization of the 2-point correlation, which counts the number of n-tuples of points lying within radius r of each other, or more generally, between some r_min and r_max.5 The implementation is entirely analogous to the 2-point case, using n trees in general instead of two, except that there is more benefit in being careful about which of the 2^n possible recursive calls to choose in the cases where you cannot prune, the approximation versions are harder, there is no immediately analogous Single-tree version of the algorithm, and anti-redundancy pruning is much more important. Figure 5 shows the unprecedented efficiency gains, which become more dramatic as n increases.

Approximating 'exact' computations. Even for algorithms, such as 2-point, that return exact counts, bounded approximation is possible. Suppose the true value of the 2-point function is V* but that we can tolerate a fractional error of ε: we'll accept any value V such that |V − V*| < εV*. It is possible to adapt the dual-tree algorithm using a best-first iterative-deepening search strategy to guarantee this result while exploiting permission to approximate, effectively by building the count as much as possible from "easy-win" node pairs while doing approximation at hard deep node-pairs.

5 Outlier detection, nearest neighbors, and other problems

One of the main intents of this paper is to point out the broad applicability of this type of algorithm within statistical learning. Figure 3 shows performance results for our outlier detection and nearest neighbors algorithms. Figure 6 lists many N-body problems which are clear candidates for acceleration in future work.6

5 The n-point correlation is useful for detailed characterizations of mass distributions (including galaxies and biomasses).
Higher-order n-point correlations detect increasingly subtle differences in mass distribution, and are also useful for assessing variance in the lower-order n-point statistics. For example, the three-point correlation, which measures the number of triplets of points meeting the specified geometric constraints, can distinguish between two distributions that have the same 2-point correlations but differ in their degree of "stripiness" versus "spottiness".

6 In our nearest neighbors algorithm we consider the problem of finding, for each query point, its single nearest neighbor among the data points. (This is exactly the all-nearest-neighbors problem of computational geometry.) The methods are easily generalized to the case of finding the k nearest neighbors, as in k-NN classification and locally weighted regression. Outlier detection is one of the most common statistical operations encountered in data analysis. The question of which procedure is most correct is an open and active one. We present here a natural operation which might be used directly for outlier detection, or within another procedure: for each of the points, find the number of other points that are within distance r of it; those having zero neighbors within r are defined as outliers. (This is exactly the all-range-count problem.)

Statistical Operation                        | Results here? | Approximation? | What is N?
2-point function                             | Yes | Optional | # Data
n-point function                             | Yes | Optional | # Data
Multiple 2-point function                    | Yes | Optional | # Data
Batch k-nearest neighbor                     | Yes | Optional | # Data
Non-parametric outlier detection / denoising | Yes | Optional | # Data
Batch kernel density / classify / regression | Yes | Yes      | # Data
Batch locally weighted regression            | No  | Yes      | # Data
Batch kernel PCA                             | No  | Yes      | # Data
Gaussian process learning and prediction     | No  | Yes      | # Data
K-means                                      | No  | Optional | # Data, Clusters
Mixture of Gaussians clustering              | No  | Yes      | # Data, Clusters
Hidden Markov model                          | No  | Yes      | # Data, States
RBF neural network                           | No  | Yes      | # Data, Neurons
Finding pairs of correlated attributes       | No  | Optional | # Attributes
Finding n-tuples of correlated attributes    | No  | Optional | # Attributes
Dependency-tree learning                     | No  | Optional | # Attributes

Figure 6: A very brief sample of applicability of Dual-tree search methods.

References

[1] J. Barnes and P. Hut. A Hierarchical O(N log N) Force-Calculation Algorithm. Nature, 324, 1986.
[2] K. Deng and A. W. Moore. Multiresolution instance-based learning. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, pages 1233–1239, San Francisco, 1995. Morgan Kaufmann.
[3] J. H. Friedman, J. L. Bentley, and R. A. Finkel. An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software, 3(3):209–226, September 1977.
[4] L. Greengard and V. Rokhlin. A Fast Algorithm for Particle Simulations. Journal of Computational Physics, 73, 1987.
[5] D. E. Knuth. Sorting and Searching. Addison Wesley, 1973.
[6] A. W. Moore. Very fast mixture-model-based clustering using multiresolution kd-trees. In M. Kearns and D. Cohn, editors, Advances in Neural Information Processing Systems 10, pages 543–549, San Francisco, April 1999. Morgan Kaufmann.
[7] A. W. Moore. The Anchors Hierarchy: Using the triangle inequality to survive high dimensional data. In Twelfth Conference on Uncertainty in Artificial Intelligence (to appear).
AAAI Press, 2000.
[8] A. W. Moore and M. S. Lee. Cached Sufficient Statistics for Efficient Machine Learning with Large Datasets. Journal of Artificial Intelligence Research, 8, March 1998.
[9] D. Pelleg and A. W. Moore. Accelerating Exact k-means Algorithms with Geometric Reasoning. In Proceedings of the Fifth International Conference on Knowledge Discovery and Data Mining. AAAI Press, 1999.
[10] F. P. Preparata and M. Shamos. Computational Geometry. Springer-Verlag, 1985.
[11] A. Szalay. Personal Communication. 2000.
[12] I. Szapudi. A New Method for Calculating Counts in Cells. The Astrophysical Journal, 1997.
[13] I. Szapudi, S. Colombi, and F. Bernardeau. Cosmic Statistics of Statistics. Monthly Notices of the Royal Astronomical Society, 1999.
[14] T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH: An Efficient Data Clustering Method for Very Large Databases. In Proceedings of the Fifteenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems: PODS 1996. Association for Computing Machinery, 1996.
A silicon primitive for competitive learning

David Hsu    Miguel Figueroa    Chris Diorio
Computer Science and Engineering
The University of Washington
114 Sieg Hall, Box 352350
Seattle, WA 98195-2350 USA
hsud, miguel, diorio@cs.washington.edu

Abstract

Competitive learning is a technique for training classification and clustering networks. We have designed and fabricated an 11-transistor primitive, that we term an automaximizing bump circuit, that implements competitive learning dynamics. The circuit performs a similarity computation, affords nonvolatile storage, and implements simultaneous local adaptation and computation. We show that our primitive is suitable for implementing competitive learning in VLSI, and demonstrate its effectiveness in a standard clustering task.

1 Introduction

Competitive learning is a family of neural learning algorithms that has proved useful for training many classification and clustering networks [1]. In these networks, a neuron's synaptic weight vector typically represents a tight cluster of data points. Upon presentation of a new input to the network, the neuron representing the closest cluster adapts its weight vector, decreasing the difference between the weight vector and the present input. Details of this adaptation vary for different competitive learning rules, but the general functionality of the synapse is preserved across various competitive learning networks. These functions are weight storage, similarity computation, and competitive learning dynamics. Many VLSI implementations of competitive learning have been reported in the literature [2]. These circuits typically use digital registers or capacitors for weight storage. Digital storage is expensive in terms of die area and power consumption; capacitive storage typically requires a refresh scheme to prevent weight decay. In addition, these implementations require separate computation and weight-update phases, increasing complexity.
More importantly, neural networks built with these circuits typically do not adapt during normal operation. Synapse transistors [3][4] address the problems raised in the previous paragraph. These devices use floating-gate technology to provide nonvolatile analog storage and local adaptation in silicon. The adaptation mechanisms do not perturb the operation of the device, thus enabling simultaneous adaptation and computation. Unfortunately, the adaptation mechanisms provide dynamics that are difficult to translate into existing neural-network learning rules. Allen et al. [5] proposed a silicon competitive learning synapse that used floating-gate technology in the early 90's. However, that approach suffers from asymmetric adaptation due to separate mechanisms for increasing and decreasing weight values. In addition, they neither characterized the adaptation dynamics of their device, nor demonstrated competitive learning with their device. We present a new silicon primitive, the automaximizing bump circuit, that uses synapse transistors to implement competitive learning in silicon. This 11-transistor circuit computes a similarity measure, provides nonvolatile storage, implements local adaptation, and performs simultaneous adaptation and computation. In addition, the circuit naturally exhibits competitive learning dynamics. In this paper, we derive the properties of the automaximizing bump circuit directly from the physics of synapse transistors, and corroborate our analysis with data measured from a chip fabricated in a 0.35 µm CMOS process. In addition, experiments on a competitive learning circuit, and software simulations of the learning rule, show that this device provides a suitable primitive for competitive learning.

2 Synapse transistors

The automaximizing bump circuit's behavior depends on the storage and adaptation properties of synapse transistors. Therefore this section briefly reviews these devices.
A synapse transistor comprises a floating-gate MOSFET, with a control gate capacitively coupled to the floating gate, and an associated tunneling implant. The transistor uses floating-gate charge to implement a nonvolatile analog memory, and outputs a source current that varies with both the stored value and the control-gate voltage. The synapse uses two adaptation mechanisms: Fowler-Nordheim tunneling [6] increases the stored charge; impact-ionized hot-electron injection (IHEI) [7] decreases the charge. Because tunneling and IHEI can both be active during normal transistor operation, the synapse enables simultaneous adaptation and computation. A voltage difference between the floating gate and the tunneling implant causes electrons to tunnel from the floating gate, through the gate oxide, to the tunneling implant. We can approximate this current (with respect to fixed tunneling and floating-gate voltages, V_tun0 and V_g0) as [4]:

I_tun = I_tun0 e^((ΔV_tun − ΔV_g)/V_x) (1)

where I_tun0 and V_x are constants that depend on V_tun0 and V_g0, and ΔV_tun and ΔV_g are deviations of the tunneling and floating-gate voltages from these fixed levels. IHEI adds electrons to the floating gate, decreasing its stored charge. The IHEI current increases with the transistor's source current and drain-to-source voltage; over a small drain-voltage range, we model this dependence as [3][4]:

I_inj = I_inj0 I_s e^(−V_d/V_y) (2)

where the constant V_y depends on the VLSI process, and U_t is the thermal voltage.

3 Automaximizing bump circuit

The automaximizing bump circuit (Fig. 1) is an adaptive version of the classic bump-antibump circuit [8]. It uses synapse transistors to implement the three essential functions of a competitive-learning synapse: storage of a weight value µ, computation of a similarity measure between the input and µ, and the ability to move µ closer to the input. Both circuits take two inputs, V1 and V2, and generate three currents.

Figure 1. (a) Automaximizing bump circuit. M1-M5 form the classic bump-antibump circuit; we added M6-M11 and the floating gates. (b) Data showing that the circuit computes a similarity between the input, V_in, and the stored value, µ, for three different stored weights. V_in is represented as V1 = +V_in/2, V2 = −V_in/2.

The two outside currents, I1 and I2, are a measure of the dissimilarity between the two inputs; the center current, I_mid, is a measure of their similarity:

I_mid = I_b (1 + λ cosh²(κΔV))^(−1) (3)

where λ and κ are process- and design-dependent parameters, ΔV is the voltage difference between V1 and V2, and I_b is a bias current. I_mid is symmetric with respect to the difference between V1 and V2, and approximates a Gaussian centered at ΔV = 0. We augment the bump-antibump circuit by adding floating gates and tunneling junctions to M1-M5, turning them into synapse transistors; M1 and M3 share the same floating gate and tunneling junction, as do M2 and M4. We also add transistors M6-M11 to control IHEI. For convenience, we will refer to our new circuit merely as a bump circuit. The charge stored on the bump circuit's floating gates, Q1 and Q2, shifts I_mid's peak away from ΔV = 0 by an amount determined by their difference. We interpret this difference as the weight, µ, stored by the circuit, and interpret I_mid as a similarity measure between the circuit's input and stored weight. Tunneling and IHEI adapt the bump circuit's weight. The circuit is automaximizing because tunneling and IHEI naturally tune the peak of I_mid to coincide with the present input. This high-level behavior coincides with the dynamics of competitive learning; both act to decrease the difference between a stored weight and the applied input. Therefore, no explicit computation of the direction or magnitude of weight updates is necessary; the circuit naturally performs these computations for us. Consequently, we only need to indicate when the circuit should adapt, not how it adapts.
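The shape claims for Eq. 3 are easy to check numerically. The sketch below is not measured chip behavior: the parameter values for I_b, λ, and κ are illustrative assumptions, and the code only verifies the qualitative properties stated in the text (symmetry in ΔV and a single peak at ΔV = 0).

```python
import numpy as np

def i_mid(delta_v, i_b=1e-9, lam=0.25, kappa=14.0):
    """Center current of the bump circuit (Eq. 3): a Gaussian-like
    similarity measure in the differential input delta_v (volts).
    i_b, lam, kappa are illustrative values, not chip measurements."""
    return i_b / (1.0 + lam * np.cosh(kappa * delta_v) ** 2)

dv = np.linspace(-0.5, 0.5, 1001)
out = i_mid(dv)
# Symmetric in delta_v, and maximal at delta_v = 0 (index 500 of 1001):
assert np.allclose(out, out[::-1])
assert np.argmax(out) == 500
```

Because cosh is even and monotonically increasing in |ΔV|, the similarity falls off symmetrically on both sides of the peak, which is the Gaussian-like behavior shown in part (b) of Fig. 1.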
Applying approximately 10 V to V_tun and approximately 0 V to V_inj activates adaptation. Applying less than 8 V to V_tun and more than 2 V to V_inj deactivates adaptation.

3.1 Weight storage

The bump circuit's weight value derives directly from the charge on its floating gates. A synapse transistor's floating-gate charge looks, for all practical purposes, like a voltage source, V_s, applied to the control gate. This voltage source has a value V_s = Q/C_in, where C_in is the control-gate to floating-gate coupling capacitance and Q is the floating-gate charge. We encode the input to the bump circuit, V_in, as a differential signal: V1 = V_in/2 and V2 = −V_in/2 (similar results will follow for any symmetric encoding of V_in). As a result, I_mid computes the similarity between the two floating-gate voltages: V_fg1 = V_s1 + V_in/2 and V_fg2 = V_s2 − V_in/2, where V_s1 and V_s2 are the voltages due to the charge stored on the floating gates. We define the bump circuit's weight, µ, as:

µ = V_s2 − V_s1 (4)

This weight corresponds to the value of V_in that equalizes the two floating-gate voltages (and maximizes I_mid). Part (b) of Fig. 1 shows the bump circuit's I_mid output for three weight values, as a function of the differential input. We see that different stored values change the location of the peak, but do not change the shape of the bump. Because floating-gate charge is nonvolatile, the weight is also nonvolatile. The differential encoding of the input makes the bump circuit's adaptation symmetric with respect to (V_in − µ). Without loss of generality, we can represent V_in as:

V_in = µ + (V_in − µ) (5)

If we apply V_in/2 and −V_in/2 to the two input terminals, we arrive at the following two floating-gate voltages:

V_fg1 = (V_s2 + V_s1 + V_in − µ)/2 (6)

V_fg2 = (V_s2 + V_s1 − V_in + µ)/2 (7)

By reversing the sign of (V_in − µ), we obtain the same floating-gate voltages on the opposite terminals. Because the floating-gate voltages are independent of the sign of (V_in − µ), the bump circuit's learning rule is symmetric with respect to (V_in − µ).
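The algebra behind Eqs. 4-7 can be verified directly. The stored-charge voltages below are arbitrary illustrative values; the check confirms that the floating-gate difference equals V_in − µ, and that reversing the sign of (V_in − µ) swaps the two floating-gate voltages, which is the symmetry argument used above.

```python
def floating_gate_voltages(v_in, v_s1, v_s2):
    """Eqs. 6-7: floating-gate voltages for the differential encoding
    V1 = +v_in/2, V2 = -v_in/2, given stored-charge voltages v_s1, v_s2."""
    v_fg1 = v_s1 + v_in / 2.0
    v_fg2 = v_s2 - v_in / 2.0
    return v_fg1, v_fg2

v_s1, v_s2 = 0.30, 0.75          # illustrative stored-charge voltages
mu = v_s2 - v_s1                 # Eq. 4: the input that equalizes V_fg1, V_fg2
for v_in in (0.1, 0.45, 0.9):
    v_fg1, v_fg2 = floating_gate_voltages(v_in, v_s1, v_s2)
    assert abs((v_fg1 - v_fg2) - (v_in - mu)) < 1e-12   # dV_fg = V_in - mu
    # Reversing the sign of (v_in - mu) swaps the two floating-gate voltages:
    v_fg1r, v_fg2r = floating_gate_voltages(2 * mu - v_in, v_s1, v_s2)
    assert abs(v_fg1r - v_fg2) < 1e-12 and abs(v_fg2r - v_fg1) < 1e-12
```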
3.2 Adaptation

We now explore the bump circuit's adaptation dynamics. We define ΔV_fg = V_fg1 − V_fg2. From Eqs. 4-7, we can see that V_in − µ = ΔV_fg. Consequently, the learning rate, dµ/dt, is equivalent to −dΔV_fg/dt. In our subsequent derivations, we consider only positive ΔV_fg, because adaptation is symmetric (albeit with a change of sign). We show complete derivations of the equations in this section in [9]. Tunneling causes adaptation by decreasing the difference between the floating-gate voltages V_fg1 and V_fg2. Electron tunneling increases the voltage of both floating gates, but, because tunneling increases exponentially with smaller floating-gate voltages (see Eq. 1), tunneling decreases the difference. Assuming that M1's floating-gate voltage is lower than M2's, the change in ΔV_fg due to electron tunneling is:

dΔV_fg/dt = −(I_tun1 − I_tun2)/C_fg (8)

We substitute Eq. 1 into Eq. 8 and solve for the tunneling learning rule:

dΔV_fg/dt = −I_t0 e^((ΔV_tun − ΔV_0)/V_x) sinh((ΔV_fg − φ)/2V_x) (9)

where I_t0 = I_tun0/C_fg, V_x is a model constant, ΔV_0 = (ΔV_fg1 + ΔV_fg2)/2, and φ models the tunneling mismatch between synapse transistors. This rule depends on three factors: a controllable learning rate, ΔV_tun; the difference between V_in and µ, ΔV_fg; and the average floating-gate voltage, ΔV_0.

Figure 2. (a) Measured adaptation rates, due to tunneling and IHEI, along with fits from Eqs. 9 and 11. (b) Composite adaptation rate, along with a fit from (14). We slowed the IHEI adaptation rate (by using a higher V_inj), compared with the data from part (a), to cause better matching between tunneling and IHEI.

The circuit also uses IHEI to decrease ΔV_fg. We bias the bump circuit so that only transistors M1 and M2 exhibit IHEI.
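The restoring behavior of the tunneling rule (Eq. 9) can be sketched as follows. All constants (I_t0, V_x, ΔV_tun, ΔV_0) are illustrative assumptions, and the mismatch term is set to φ = 0; the check only confirms that the update always opposes the sign of ΔV_fg, i.e., that it drives the stored weight toward the present input.

```python
import math

def tunneling_rate(dv_fg, i_t0=1e-3, v_x=1.0, dv_tun=0.5, dv_0=0.0, phi=0.0):
    """Eq. 9: d(dV_fg)/dt due to tunneling, with illustrative constants."""
    return -i_t0 * math.exp((dv_tun - dv_0) / v_x) * math.sinh((dv_fg - phi) / (2 * v_x))

for dv in (-0.4, -0.1, 0.1, 0.4):
    # The update opposes the sign of dV_fg, shrinking |V_in - mu|:
    assert tunneling_rate(dv) * dv < 0
assert tunneling_rate(0.0) == 0.0
```

Because sinh is odd, the rule is symmetric about ΔV_fg = 0, matching the symmetry argument of section 3.1.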
According to Eq. 2, IHEI depends linearly on a transistor's source current, but exponentially on its source-to-drain voltage. Consequently, we decrease ΔV_fg by controlling the drain voltages of M1 and M2. Coupled current mirrors (M6-M7 and M8-M9) at the drains of M1 and M2 simultaneously raise the drain voltage of the transistor that is sourcing a larger current, and lower the drain voltage of the transistor that is sourcing a smaller current. The transistor with the smaller source current will experience a larger V_sd, and thus exponentially more IHEI, causing its source current to rapidly increase. Diodes (M10 and M11) further increase the drain voltage of the transistor with the larger current, further reducing its IHEI. The net effect is that IHEI acts to equalize the currents and, likewise, the floating-gate voltages. Recently, Hasler proposed a similar method for controlling IHEI in a floating-gate differential pair [4]. Assuming I1 > I2, the change in ΔV_fg due to IHEI is:

dΔV_fg/dt = −(I_inj1 − I_inj2)/C_fg (10)

We expand the learning rule by substituting Eq. 2 into Eq. 10. To compute values for the drain voltages of M1 and M2, we assume that all of I1 flows through M11 and all of I2 flows through M7. The IHEI learning rule is given below:

dΔV_fg/dt = −I_j0 e^(θΔV_0) (e^(−γV_inj) Φ1(ΔV_fg) − e^(ηV_inj) Φ2(ΔV_fg)) (11)

where I_j0 = I_inj0/C_fg, and θ, γ, and η = −1/V_y are constants that depend on κ, U_t, and the process parameter V_y. The functions Φ1 and Φ2 (Eqs. 12 and 13) combine I_b, I_mid, and cosh(κΔV_fg/2U_t) with exponentials in ΔV_fg through two further constants, σ = (1 − U_t/V_y)κ/2U_t and ω = κ/2U_t − κ/2V_y − 1/V_y; their complete forms are derived in [9]. Like tunneling, the IHEI rule depends on three factors: a controllable learning rate, V_inj; the difference between V_in and µ, ΔV_fg; and ΔV_0. Part (a) of Fig. 2 shows measurements of dΔV_fg/dt versus ΔV_fg due to tunneling and IHEI, along with fits to Eqs. 9 and 11, respectively.
IHEI and tunneling facilitate adaptation by adding and removing charge from the floating gates, respectively. In isolation, either mechanism will eventually drive the bump circuit out of its operating range. To obtain useful adaptation, we need to activate both mechanisms at the same time. There is an added benefit to combining tunneling and IHEI: part (a) of Fig. 2 shows that tunneling acts more strongly for smaller values of ΔV_fg, while IHEI shows the opposite behavior. The mechanisms complement each other, providing adaptation over more than a 1 V range in ΔV_fg. We combine Eqs. 9 and 11 to derive the bump learning rule:

−dΔV_fg/dt = I_t0 e^((ΔV_tun − ΔV_0)/V_x) sinh((ΔV_fg − φ)/2V_x) + I_j0 e^(θΔV_0) (e^(−γV_inj) Φ1(ΔV_fg) − e^(ηV_inj) Φ2(ΔV_fg)) (14)

Part (b) of Fig. 2 illustrates the composite weight-update dynamics. When ΔV_fg is small, adaptation is primarily driven by IHEI, while tunneling dominates for larger values of ΔV_fg. The bump learning rule is unlike any learning rule that we have found in the literature. Nevertheless, it exhibits several desirable properties. First, it naturally moves the bump circuit's weight towards the present input. Second, the weight update is symmetric with respect to the difference between the stored value and the present input. Third, we can vary the weight-update rate over many orders of magnitude by adjusting V_tun and V_inj. Finally, because the bump circuit uses synapse transistors to perform adaptation, the circuit can adapt during normal operation.

4 Competitive learning with bump circuits

Below we summarize the results of simulations of the bump learning rule, and also results from a competitive-learning circuit fabricated in the TSMC 0.35 µm process. For further details, consult [9]. We first compared the performance of a software neural network on a standard clustering task, using the bump learning rule (fitted to data from Fig.
2), and a basic competitive-learning rule (learning rate ρ = 0.01):

dµ/dt = ρ (V_in − µ) (15)

We trained both networks on data drawn from a mixture of 32 Gaussians, in a 32-dimensional space. The Gaussian means were drawn from the interval [0,1] and the covariance matrix was the diagonal matrix 0.1·I. On each input presentation, the network updated the weight vector of the closest neuron using either the bump learning rule or Eq. 15. We measured the performance of the two learning rules by evaluating the coding error of each trained network, on a test set drawn from the same distribution as the training data. The coding error is the sum of the squared distances between each test point and its closest neuron. Part (a) of Fig. 3 shows that the bump circuit's rule performs comparably with the hard competitive-learning rule. Our VLSI circuit (part (b) of Fig. 3) comprised two neurons with a one-dimensional input (a neuron was a single bump circuit), and a feedback network to control adaptation. The feedback network comprised a winner-take-all (WTA) [10] that detected which bump was closest to the present input, and additional circuitry [9] that generated V_tun and V_inj from the WTA output. We tested this circuit on a clustering task: learning the centers of a mixture of two Gaussians. In part (c) of Fig. 3, we compare the performance of our circuit with a simulated neural network using Eq. 15. The VLSI circuit performed comparably with the neural network, demonstrating that our bump circuit, in conjunction with simple feedback mechanisms, can implement competitive learning in VLSI. We can generalize the circuitry to multiple dimensions (multiple bump circuits per neuron) and multiple neurons; each neuron requires only one V_tun and one V_inj signal.
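The software half of this comparison can be sketched with the standard rule alone (Eq. 15); reproducing the bump rule would require the fitted device constants. The mixture follows the text (32 Gaussians in 32 dimensions, means drawn from [0,1], covariance 0.1·I), while the network initialization and training length are illustrative. The check confirms that training reduces the coding error relative to untrained random weights.

```python
import numpy as np

def train_competitive(data, n_neurons=32, rho=0.01, rng=None):
    """Hard competitive learning with the standard rule of Eq. 15:
    on each presentation, move the closest weight toward the input."""
    rng = rng or np.random.default_rng(0)
    w = rng.uniform(0.0, 1.0, size=(n_neurons, data.shape[1]))
    for x in data:
        winner = np.argmin(((w - x) ** 2).sum(axis=1))
        w[winner] += rho * (x - w[winner])        # Eq. 15 (discrete-time)
    return w

def coding_error(w, test):
    """Sum of squared distances from each test point to its closest neuron."""
    d2 = ((test[:, None, :] - w[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

rng = np.random.default_rng(1)
means = rng.uniform(0.0, 1.0, size=(32, 32))      # 32 Gaussian means in 32-D
def sample(n):
    comps = rng.integers(0, 32, size=n)
    return means[comps] + np.sqrt(0.1) * rng.standard_normal((n, 32))

train, test = sample(6000), sample(1000)
w0 = rng.uniform(0.0, 1.0, size=(32, 32))         # untrained reference weights
trained = train_competitive(train, rng=np.random.default_rng(2))
# Training should reduce the coding error relative to random weights:
assert coding_error(trained, test) < coding_error(w0, test)
```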
Figure 3. (a) Comparison of a neural network using the bump learning rule versus a standard competitive-learning rule. We drew the training data from a mixture of thirty-two Gaussians, and averaged the results over ten trials. (b) A competitive-learning circuit. (c) Performance of a competitive-learning circuit versus a neural network for learning a mixture of two Gaussians; the plot shows the target values, the circuit output, and the neural-network output as a function of the number of training examples.

Acknowledgements

This work was supported by the NSF under grants BES 9720353 and ECS 9733425, and by a Packard Foundation Fellowship.

References

[1] M.A. Arbib (ed.), The Handbook of Brain Theory and Neural Networks, Cambridge, MA: The MIT Press, 1995. [2] H.C. Card, D.K. McNeill, and C.R. Schneider, "Analog VLSI circuits for competitive learning networks", Analog Integrated Circuits and Signal Processing, vol. 15, pp. 291-314, 1998. [3] C. Diorio, "A p-channel MOS synapse transistor with self-convergent memory writes", IEEE Transactions on Electron Devices, vol. 47, no. 2, pp. 464-472, 2000. [4] P. Hasler, "Continuous-time feedback in floating-gate MOS circuits", to appear in IEEE Transactions on Circuits and Systems II, Feb. 2001. [5] T. Allen et al., "Electrically adaptable neural network with post-processing circuitry", U.S. Patent No. 5,331,215, issued July 19, 1994. [6] M. Lenzlinger and E.H. Snow, "Fowler-Nordheim tunneling into thermally grown SiO2", Journal of Applied Physics, vol. 40, no. 1, pp. 278-283, 1969. [7] E. Takeda, C. Yang, and A. Miura-Hamada, Hot Carrier Effects in MOS Devices, San Diego, CA: Academic Press, 1995. [8] T. Delbruck, "Bump circuits for computing similarity and dissimilarity of analog voltages", CNS Memo 26, California Institute of Technology, 1993. [9] D. Hsu, M. Figueroa, and C.
Diorio, "A silicon primitive for competitive learning", UW CSE Technical Report no. 2000-07-01, 2000. [10] J. Lazzaro, S. Ryckebusch, M.A. Mahowald, and C.A. Mead, "Winner-take-all networks of O(n) complexity", in Advances in Neural Information Processing Systems, vol. 1, San Mateo, CA: Morgan Kaufmann, pp. 703-711, 1989.
2000
67
1,869
Automatic choice of dimensionality for PCA Thomas P. Minka MIT Media Lab 20 Ames St, Cambridge, MA 02139 tpminka@media.mit.edu Abstract A central issue in principal component analysis (PCA) is choosing the number of principal components to be retained. By interpreting PCA as density estimation, we show how to use Bayesian model selection to estimate the true dimensionality of the data. The resulting estimate is simple to compute yet guaranteed to pick the correct dimensionality, given enough data. The estimate involves an integral over the Stiefel manifold of k-frames, which is difficult to compute exactly. But after choosing an appropriate parameterization and applying Laplace's method, an accurate and practical estimator is obtained. In simulations, it is convincingly better than cross-validation and other proposed algorithms, plus it runs much faster. 1 Introduction Recovering the intrinsic dimensionality of a data set is a classic and fundamental problem in data analysis. A popular method for doing this is PCA or localized PCA. Modeling the data manifold with localized PCA dates back to [4]. Since then, the problem of spacing and sizing the local regions has been solved via the EM algorithm and split/merge techniques [2, 6, 14, 5]. However, the task of dimensionality selection has not been solved in a satisfactory way. On the one hand we have crude methods based on eigenvalue thresholding [4], which are very fast; on the other, we have iterative methods [1] which require excessive computing time. This paper resolves the situation by deriving a method which is both accurate and fast. It is an application of Bayesian model selection to the probabilistic PCA model developed by [12, 15]. The new method operates exclusively on the eigenvalues of the data covariance matrix. In the local PCA context, these would be the eigenvalues of the local responsibility-weighted covariance matrix, as defined by [14].
The method can be used to fit different PCA models to different classes, for use in Bayesian classification [11].

2 Probabilistic PCA

This section reviews the results of [15]. The PCA model is that a d-dimensional vector x was generated from a smaller k-dimensional vector w by a linear transformation (H, m) plus a noise vector e: x = Hw + m + e. Both the noise and the principal-component vector w are assumed spherical Gaussian:

w ~ N(0, I),  e ~ N(0, vI) (1)

The observation x is therefore Gaussian itself:

p(x|H, m, v) ~ N(m, HH^T + vI) (2)

The goal of PCA is to estimate the basis vectors H and the noise variance v from a data set D = {x_1, ..., x_N}. The probability of the data set is

p(D|H, m, v) = (2π)^(−Nd/2) |HH^T + vI|^(−N/2) exp(−(1/2) tr((HH^T + vI)^(−1) S)) (3)

S = Σ_i (x_i − m)(x_i − m)^T (4)

As shown by [15], the maximum-likelihood estimates are:

m̂ = (1/N) Σ_i x_i,  Ĥ = U(Λ − v̂I)^(1/2) R,  v̂ = (Σ_{j=k+1}^d λ_j)/(d − k) (5)

where the orthogonal matrix U contains the top k eigenvectors of S/N, the diagonal matrix Λ contains the corresponding eigenvalues λ_j, and R is an arbitrary orthogonal matrix.

3 Bayesian model selection

Bayesian model selection scores models according to the probability they assign the observed data [9, 8]. It is completely analogous to Bayesian classification. It automatically encodes a preference for simpler, more constrained models, as illustrated in figure 1. Simple models only fit a small fraction of data sets, but they assign correspondingly higher probability to those data sets. Flexible models spread themselves out more thinly. The probability of the data given the model is computed by integrating over the unknown parameter values in that model:

p(D|M) = ∫ p(D|θ) p(θ|M) dθ (6)

Figure 1: Why Bayesian model selection prefers simpler models; the constrained model wins on the data sets it can fit, and the flexible model wins elsewhere.

This quantity is called the evidence for model M.
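The maximum-likelihood estimates of Eq. 5 can be verified on synthetic data. The 5-dimensional Gaussian below is an illustrative assumption (it is not one of the paper's experiments); the check uses the known property of the Tipping-Bishop solution that the fitted model covariance ĤĤ^T + v̂I reproduces the top-k sample eigenvalues exactly.

```python
import numpy as np

def ppca_ml(X, k):
    """Maximum-likelihood probabilistic PCA (Eq. 5), taking R = I."""
    N, d = X.shape
    m = X.mean(axis=0)
    S_over_N = np.cov(X, rowvar=False, bias=True)
    lam, U = np.linalg.eigh(S_over_N)
    order = np.argsort(lam)[::-1]
    lam, U = lam[order], U[:, order]
    v = lam[k:].mean()                               # noise variance estimate
    H = U[:, :k] @ np.diag(np.sqrt(lam[:k] - v))     # basis, with R = I
    return m, H, v

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 5)) * np.sqrt([9.0, 4.0, 1.0, 1.0, 1.0])
m, H, v = ppca_ml(X, k=2)
model_cov = H @ H.T + v * np.eye(5)
lam_model = np.sort(np.linalg.eigvalsh(model_cov))[::-1]
lam_sample = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False, bias=True)))[::-1]
# The model matches the top-k sample eigenvalues exactly:
assert np.allclose(lam_model[:2], lam_sample[:2])
```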
A useful property of Bayesian model selection is that it is guaranteed to select the true model, if it is among the candidates, as the size of the dataset grows to infinity.

3.1 The evidence for probabilistic PCA

For the PCA model, we want to select the subspace dimensionality k. To do this, we compute the probability of the data for each possible dimensionality and pick the maximum. For a given dimensionality, this requires integrating over all PCA parameters (m, H, v). First we need to define a prior density for these parameters. Assuming there is no information other than the data D, the prior should be as noninformative as possible. A noninformative prior for m is uniform, and with such a prior we can integrate out m analytically, leaving

p(D|H, v) = N^(−d/2) (2π)^(−(N−1)d/2) |HH^T + vI|^(−(N−1)/2) exp(−(1/2) tr((HH^T + vI)^(−1) S)) (7)

where S = Σ_i (x_i − m̂)(x_i − m̂)^T (8)

Unlike m, H must have a proper prior since it varies in dimension for different models. Let H be decomposed just as in (5):

H = U(L − vI)^(1/2) R (9)

where L is diagonal with diagonal elements l_i. The orthogonal matrix U is the basis, L is the scaling (corrected for noise), and R is a rotation within the subspace (which will turn out to be irrelevant). A conjugate prior for (U, L, R, v), parameterized by α, is

p(U, L, R, v) ∝ |HH^T + vI|^(−(α+2)/2) exp(−(α/2) tr((HH^T + vI)^(−1))) (10)

This distribution happens to factor into p(U)p(L)p(R)p(v), which means the variables are a priori independent:

p(L) ∝ |L|^(−(α+2)/2) exp(−(α/2) tr(L^(−1))) (11)

p(v) ∝ v^(−(α+2)(d−k)/2) exp(−α(d − k)/(2v)) (12)

p(U)p(R) = constant (defined in (20)) (13)

The hyperparameter α controls the sharpness of the prior. For a noninformative prior, α should be small, making the prior diffuse. Besides providing a convenient prior, the decomposition (9) is important for removing redundant degrees of freedom (R) and for separating H into independent components, as described in the next section.
Combining the likelihood with the prior gives

p(D|k) = c_k ∫ |HH^T + vI|^(−n/2) exp(−(1/2) tr((HH^T + vI)^(−1)(S + αI))) dU dL dv (14)

n = N + 1 + α (15)

The constant c_k includes N^(−d/2) and the normalizing terms for p(U), p(L), and p(v) (given in [10]); only p(U) will matter in the end. In this formula R has already been integrated out; the likelihood does not involve R, so we just get a multiplicative factor of ∫ p(R) dR = 1.

3.2 Laplace approximation

Laplace's method is a powerful method for approximating integrals in Bayesian statistics [8]:

∫ f(θ) dθ ≈ f(θ̂) (2π)^(rows(A)/2) |A|^(−1/2) (16)

A = −(d² log f(θ)/dθ²)|_{θ=θ̂} (17)

The key to getting a good approximation is choosing a good parameterization for θ = (U, L, v). Since l_i and v are positive scale parameters, it is best to use l'_i = log(l_i) and v' = log(v). This results in

l̂_i = (Nλ_i + α)/(N − 1 + α),  v̂ = N Σ_{j=k+1}^d λ_j / (n(d − k) − 2) (18)

(d² log f(θ)/(dl'_i)²)|_{θ=θ̂} = −(N − 1 + α)/2,  (d² log f(θ)/(dv')²)|_{θ=θ̂} = −(n(d − k) − 2)/2 (19)

The matrix U is an orthogonal k-frame and therefore lives on the Stiefel manifold [7], which is defined by condition (9). The dimension of the manifold is m = dk − k(k + 1)/2, since we are imposing k(k + 1)/2 constraints on a d × k matrix. The prior density for U is the reciprocal of the area of the manifold [7]:

p(U) = 2^(−k) Π_{i=1}^k Γ((d − i + 1)/2) π^(−(d−i+1)/2) (20)

A useful parameterization of this manifold is given by the Euler vector representation:

U = U_d e^Z (restricted to its first k columns) (21)

where U_d is a fixed orthogonal matrix and Z is a skew-symmetric matrix of parameters, such as

Z = [[0, z_12, z_13], [−z_12, 0, z_23], [−z_13, −z_23, 0]] (22)

The first k rows of Z determine the first k columns of exp(Z), so the free parameters are z_ij with i < j and i ≤ k; the others are constant. This gives d(d−1)/2 − (d−k)(d−k−1)/2 = m parameters, as desired. For example, in the case (d = 3, k = 1) the free parameters are z_12 and z_13, which define a coordinate system for the sphere.
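The Euler-vector parameterization can be checked numerically. The z values below are arbitrary, and the matrix exponential is a simple truncated series (an illustrative stand-in, adequate for small ‖Z‖); the check confirms that e^Z is orthogonal when Z is skew-symmetric, and that the free-parameter count matches the manifold dimension m for (d = 3, k = 1).

```python
import numpy as np

def expm(Z, terms=30):
    """Matrix exponential by truncated Taylor series (fine for small ||Z||)."""
    out = np.eye(Z.shape[0])
    term = np.eye(Z.shape[0])
    for n in range(1, terms):
        term = term @ Z / n
        out = out + term
    return out

d, k = 3, 1
# Skew-symmetric Z as in Eq. 22; z_12 and z_13 are the free parameters for k = 1.
z12, z13, z23 = 0.3, -0.2, 0.5
Z = np.array([[0.0,  z12,  z13],
              [-z12, 0.0,  z23],
              [-z13, -z23, 0.0]])
U = expm(Z)                      # e^Z is orthogonal when Z is skew-symmetric
assert np.allclose(U.T @ U, np.eye(d), atol=1e-10)
m = d * k - k * (k + 1) // 2     # Stiefel manifold dimension: 2 for (d=3, k=1)
assert m == 2
```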
As a function of U, the integrand is simply

p(U|D, L, v) ∝ exp(−(1/2) tr((L^(−1) − v^(−1)I) U^T S U)) (23)

The density is maximized when U contains the top k eigenvectors of S. However, the density is unchanged if we negate any column of U. This means that there are actually 2^k different maxima, and we need to apply Laplace's method to each. Fortunately, these maxima are identical, so we can simply multiply (16) by 2^k to get the integral over the whole manifold. If we set U_d to the eigenvectors of S:

U_d^T S U_d = N Λ (24)

then we just need to apply Laplace's method at Z = 0. As shown in [10], if we define the estimated eigenvalue matrix

Λ̂ = [[L̂, 0], [0, v̂I]] (25)

then the second differential at Z = 0 simplifies to

d² log f(θ)|_{Z=0} = −Σ_{i=1}^k Σ_{j=i+1}^d (λ̂_j^(−1) − λ̂_i^(−1))(λ_i − λ_j) N dz_ij² (26)

There are no cross derivatives; the Hessian matrix A_Z is diagonal, so its determinant is the product of these second derivatives:

|A_Z| = Π_{i=1}^k Π_{j=i+1}^d (λ̂_j^(−1) − λ̂_i^(−1))(λ_i − λ_j) N (27)

Laplace's method requires this to be nonsingular, so we must have k < N. The cross-derivatives between the parameters are all zero:

(d² log f(θ)/dl_i dZ)|_{θ=θ̂} = (d² log f(θ)/dv dZ)|_{θ=θ̂} = (d² log f(θ)/dl_i dv)|_{θ=θ̂} = 0 (28)

so A is block diagonal and |A| = |A_Z||A_L||A_v|. We know A_L and A_v from (19), and A_Z from (27). We now have all of the terms needed in (16), and so the evidence approximation is

p(D|k) ≈ 2^k c_k (Π_{i=1}^k l̂_i)^(−n/2) v̂^(−n(d−k)/2) e^(−nd/2) (2π)^((m+k+1)/2) |A_Z|^(−1/2) |A_L|^(−1/2) |A_v|^(−1/2) (29)

For model selection, the only terms that matter are those that strongly depend on k, and since α is small and N reasonably large, we can simplify this to

p(D|k) ≈ p(U) (Π_{j=1}^k λ_j)^(−N/2) v̂^(−N(d−k)/2) (2π)^((m+k)/2) |A_Z|^(−1/2) N^(−k/2) (30)

v̂ = (Σ_{j=k+1}^d λ_j)/(d − k) (31)

which is the recommended formula. Given the eigenvalues, the cost of computing p(D|k) is O(min(d, N)k), which is less than one loop over the data matrix. A simplification of Laplace's method is the BIC approximation [8].
This approximation drops all terms which do not grow with N, which in this case leaves only

p(D|k) ≈ (Π_{j=1}^k λ_j)^(−N/2) v̂^(−N(d−k)/2) N^(−(m+k)/2) (32)

BIC is compared to Laplace in section 4.

4 Results

To test the performance of various algorithms for model selection, we sample data from a known model and see how often the correct dimensionality is recovered. The seven estimators implemented and tested in this study are Laplace's method (30), BIC (32), the two methods of [13] (called RR-N and RR-U), the algorithm in [3] (ER), the ARD algorithm of [1], and 5-fold cross-validation (CV). For cross-validation, the log-probability assigned to the held-out data is the scoring function. ER is the most similar to this paper, since it performs Bayesian model selection on the same model, but uses a different kind of approximation combined with explicit numerical integration. RR-N and RR-U are maximum-likelihood techniques on models slightly different from probabilistic PCA; the details are in [10]. ARD is an iterative estimation algorithm for H which sets columns to zero unless they are supported by the data. The number of nonzero columns at convergence is the estimate of dimensionality. Most of these estimators work exclusively from the eigenvalues of the sample covariance matrix. The exceptions are RR-U, cross-validation, and ARD; the latter two require diagonalizing a series of different matrices constructed from the data. In our implementation, the algorithms are ordered from fastest to slowest as RR-N, BIC, Laplace, cross-validation, RR-U, ARD, and ER (ER is slowest because of the numerical integrations required). The first experiment tests the data-rich case where N >> d. The data is generated from a 10-dimensional Gaussian distribution with 5 "signal" dimensions and 5 noise dimensions. The eigenvalues of the true covariance matrix are: Signal: 10 8 6 4 2; Noise: 1 (x5); with N = 100. The number of times the correct dimensionality (k = 5) was chosen over 60 replications is shown at right.
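Both eigenvalue-based scores are cheap to implement once the eigenvalues of S/N are in hand. The sketch below is our own reading of Eqs. 30-32 under the large-N simplification used in the text (α → 0, so l̂_i ≈ λ_i and λ̂ is built from λ_1..λ_k and v̂); the synthetic test case (3 signal dimensions over a unit noise floor) is illustrative, not one of the paper's experiments.

```python
import numpy as np
from math import lgamma, log, pi

def stiefel_log_pU(d, k):
    """Log reciprocal area of the Stiefel manifold of k-frames (Eq. 20)."""
    out = -k * log(2.0)
    for i in range(1, k + 1):
        out += lgamma((d - i + 1) / 2.0) - ((d - i + 1) / 2.0) * log(pi)
    return out

def laplace_log_evidence(lam, N, k):
    """Log of the recommended formula (Eq. 30), from eigenvalues of S/N."""
    lam = np.asarray(lam, dtype=float)
    d = len(lam)
    v = lam[k:].mean()                       # Eq. 31 noise estimate
    m = d * k - k * (k + 1) // 2             # Stiefel manifold dimension
    lam_hat = np.concatenate([lam[:k], np.full(d - k, v)])
    log_det_Az = 0.0                         # Eq. 27, evaluated in log space
    for i in range(k):
        for j in range(i + 1, d):
            log_det_Az += log((1 / lam_hat[j] - 1 / lam_hat[i]) * (lam[i] - lam[j]) * N)
    return (stiefel_log_pU(d, k)
            - (N / 2) * np.log(lam[:k]).sum()
            - (N * (d - k) / 2) * log(v)
            + ((m + k) / 2) * log(2 * pi)
            - 0.5 * log_det_Az
            - (k / 2) * log(N))

def bic_log_evidence(lam, N, k):
    """Log of the BIC approximation (Eq. 32)."""
    lam = np.asarray(lam, dtype=float)
    d = len(lam)
    v = lam[k:].mean()
    m = d * k - k * (k + 1) // 2
    return (-(N / 2) * np.log(lam[:k]).sum()
            - (N * (d - k) / 2) * log(v)
            - ((m + k) / 2) * log(N))

# Synthetic check: 3 signal dimensions well above a unit noise floor.
rng = np.random.default_rng(0)
N, d = 1000, 10
X = rng.standard_normal((N, d)) * np.sqrt([100, 50, 20] + [1.0] * 7)
lam = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False, bias=True)))[::-1]
best = {f.__name__: max(range(1, d), key=lambda k: f(lam, N, k))
        for f in (laplace_log_evidence, bic_log_evidence)}
```

With this strong signal and large N, both scores should select the true dimensionality; the experiments in the text probe the harder regimes where they disagree.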
The differences between ER, Laplace, and CV are not statistically significant. Results below the dashed line are worse than Laplace with a significance level of 95%. The second experiment tests the case of sparse data and low noise: Signal: 10 8 6 4 2; Noise: 0.1 (x10); with N = 10. The results over 60 replications are shown at right. BIC and ER, which are derived from large-N approximations, do poorly. Cross-validation also fails, because it doesn't have enough data to work with. The third experiment tests the case of high noise dimensionality: Signal: 10 8 6 4 2; Noise: 0.25 (x95); with N = 60. The ER algorithm was not run in this case because of its excessive computation time for large d. The final experiment tests the robustness to having a non-Gaussian data distribution within the subspace. We start with four sound fragments of 100 samples each. To make things especially non-Gaussian, the values in the third fragment are squared and the values in the fourth fragment are cubed. All fragments are standardized to zero mean and unit variance. Gaussian noise in 20 dimensions is added to get: Signal: 4 sounds; Noise: 0.5 (x20); with N = 100. The results over 60 replications of the noise (the signals were constant) are reported at right. (The accompanying charts rank the estimators in the orders: ER, Laplace, CV, BIC, ARD, RR-N, RR-U; Laplace, CV, ARD, RR-U, BIC, RR-N; and Laplace, ARD, CV, BIC, RR-N, RR-U, ER.)

5 Discussion

Bayesian model selection has been shown to provide excellent performance when the assumed model is correct or partially correct. The evaluation criterion was the number of times the correct dimensionality was chosen. It would also be useful to evaluate the trained model with respect to its performance on new data within an applied setting. In this case, Bayesian model averaging is more appropriate, and it is conceivable that a method like ARD, which encompasses a soft blend between different dimensionalities, might perform better by this criterion than selecting one dimensionality. It is important to remember that these estimators are for density estimation, i.e.
accurate representation of the data, and are not necessarily appropriate for other purposes like reducing computation or extracting salient features. For example, on a database of 301 face images the Laplace evidence picked 120 dimensions, which is far more than one would use for feature extraction. (This result also suggests that probabilistic PCA is not a good generative model for face images.)

References

[1] C. Bishop. Bayesian PCA. In Neural Information Processing Systems 11, pages 382-388, 1998. [2] C. Bregler and S. M. Omohundro. Surface learning with applications to lipreading. In NIPS, pages 43-50, 1994. [3] R. Everson and S. Roberts. Inferring the eigenvalues of covariance matrices from limited, noisy data. IEEE Trans Signal Processing, 48(7):2083-2091, 2000. http://www.robots.ox.ac.uk/~sjrob/Pubs/spectrum.ps.gz. [4] K. Fukunaga and D. Olsen. An algorithm for finding intrinsic dimensionality of data. IEEE Trans Computers, 20(2):176-183, 1971. [5] Z. Ghahramani and M. Beal. Variational inference for Bayesian mixtures of factor analysers. In Neural Information Processing Systems 12, 1999. [6] Z. Ghahramani and G. Hinton. The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, University of Toronto, 1996. http://www.gatsby.ucl.ac.uk/~zoubin/papers.html. [7] A. James. Normal multivariate analysis and the orthogonal group. Annals of Mathematical Statistics, 25(1):40-75, 1954. [8] R. E. Kass and A. E. Raftery. Bayes factors and model uncertainty. Technical Report 254, University of Washington, 1993. http://www.stat.washington.edu/tech.reports/tr254.ps. [9] D. J. C. MacKay. Probable networks and plausible predictions: a review of practical Bayesian methods for supervised neural networks. Network: Computation in Neural Systems, 6:469-505, 1995. http://wol.ra.phy.cam.ac.uk/mackay/abstracts/network.html. [10] T. Minka. Automatic choice of dimensionality for PCA.
Technical Report 514, MIT Media Lab Vision and Modeling Group, 1999. ftp://whitechapel.media.mit.edu/pub/tech-reports/TR-514ABSTRACT.html. [11] B. Moghaddam, T. Jebara, and A. Pentland. Bayesian modeling of facial similarity. In Neural Information Processing Systems 11, pages 910-916, 1998. [12] B. Moghaddam and A. Pentland. Probabilistic visual learning for object representation. IEEE Trans Pattern Analysis and Machine Intelligence, 19(7):696-710, 1997. [13] J. J. Rajan and P. J. W. Rayner. Model order selection for the singular value decomposition and the discrete Karhunen-Loeve transform using a Bayesian approach. IEE Vision, Image and Signal Processing, 144(2):116-123, 1997. [14] M. E. Tipping and C. M. Bishop. Mixtures of probabilistic principal component analysers. Neural Computation, 11(2):443-482, 1999. http://citeseer.nj.nec.com/362314.html. [15] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. J Royal Statistical Society B, 61(3), 1999.
2000
68
1,870
Propagation Algorithms for Variational Bayesian Learning Zoubin Ghahramani and Matthew J. Beal Gatsby Computational Neuroscience Unit University College London 17 Queen Square, London WC1N 3AR, England {zoubin,m.beal}@gatsby.ucl.ac.uk Abstract Variational approximations are becoming a widespread tool for Bayesian learning of graphical models. We provide some theoretical results for the variational updates in a very general family of conjugate-exponential graphical models. We show how the belief propagation and the junction tree algorithms can be used in the inference step of variational Bayesian learning. Applying these results to the Bayesian analysis of linear-Gaussian state-space models, we obtain a learning procedure that exploits the Kalman smoothing propagation, while integrating over all model parameters. We demonstrate how this can be used to infer the hidden state dimensionality of the state-space model in a variety of synthetic problems and one real high-dimensional data set. 1 Introduction Bayesian approaches to machine learning have several desirable properties. Bayesian integration does not suffer overfitting (since nothing is fit to the data). Prior knowledge can be incorporated naturally and all uncertainty is manipulated in a consistent manner. Moreover, it is possible to learn model structures and readily compare between model classes. Unfortunately, for most models of interest a full Bayesian analysis is computationally intractable. Until recently, approximate approaches to the intractable Bayesian learning problem had relied either on Markov chain Monte Carlo (MCMC) sampling, the Laplace approximation (Gaussian integration), or asymptotic penalties like BIC. The recent introduction of variational methods for Bayesian learning has resulted in a series of papers showing that these methods can be used to rapidly learn the model structure and approximate the evidence in a wide variety of models.
In this paper we will not motivate the advantages of the variational Bayesian approach as this is done in previous papers [1, 5]. Rather we focus on deriving variational Bayesian (VB) learning in a very general form, relating it to EM, motivating parameter-hidden variable factorisations, and the use of conjugate priors (section 3). We then present several theoretical results relating VB learning to the belief propagation and junction tree algorithms for inference in belief networks and Markov networks (section 4). Finally, we show how these results can be applied to learning the dimensionality of the hidden state space of linear dynamical systems (section 5). 2 Variational Bayesian Learning The basic idea of variational Bayesian learning is to simultaneously approximate the intractable joint distribution over both hidden states and parameters with a simpler distribution, usually by assuming the hidden states and parameters are independent; the log evidence is lower bounded by applying Jensen's inequality twice:
$$\ln P(y|\mathcal{M}) \;\ge\; \int d\theta\, Q_\theta(\theta) \left[ \int dx\, Q_x(x) \ln \frac{P(x,y|\theta,\mathcal{M})}{Q_x(x)} + \ln \frac{P(\theta|\mathcal{M})}{Q_\theta(\theta)} \right] = \mathcal{F}(Q_\theta(\theta), Q_x(x), y) \qquad (1)$$
where y, x, θ and M are observed data, hidden variables, parameters and model class, respectively; P(θ|M) is a parameter prior under model class M. The lower bound F is iteratively maximised as a functional of the two free distributions, Q_x(x) and Q_θ(θ). From (1) we can see that this maximisation is equivalent to minimising the KL divergence between Q_x(x)Q_θ(θ) and the joint posterior over hidden states and parameters P(x, θ|y, M). This approach was first proposed for one-hidden layer neural networks [6] under the restriction that Q_θ(θ) is Gaussian. It has since been extended to models with hidden variables, and the restrictions on Q_θ(θ) and Q_x(x) have been removed in certain models to allow arbitrary distributions [11, 8, 3, 1, 5].
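The inequality (1) holds for *any* pair of factorised distributions Q_θ, Q_x. The following sketch checks this numerically on a tiny fully enumerable model (the model, its grid of parameter values, and all probabilities are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny discrete model: theta on a 3-point grid, one binary hidden x,
# one binary observation y = 1. All numbers are illustrative.
thetas = np.array([0.2, 0.5, 0.8])       # parameter grid
p_theta = np.array([0.3, 0.4, 0.3])      # prior P(theta)
y_obs = 1

def p_xy_given_theta(x, y, th):
    # P(x|theta) is Bernoulli(theta); y then copies x with probability 0.9
    px = th if x == 1 else 1 - th
    py = 0.9 if y == x else 0.1
    return px * py

# Exact log evidence ln P(y) by full enumeration
p_y = sum(p_theta[i] * sum(p_xy_given_theta(x, y_obs, thetas[i]) for x in (0, 1))
          for i in range(3))
log_evidence = np.log(p_y)

def free_energy(q_theta, q_x):
    """The lower bound F(Q_theta, Q_x, y) of equation (1)."""
    F = 0.0
    for i, th in enumerate(thetas):
        inner = sum(q_x[x] * np.log(p_xy_given_theta(x, y_obs, th) / q_x[x])
                    for x in (0, 1))
        F += q_theta[i] * (inner + np.log(p_theta[i] / q_theta[i]))
    return F

# Jensen's inequality guarantees F <= ln P(y) for any factorised Q.
for _ in range(100):
    q_theta = rng.dirichlet(np.ones(3))
    q_x = rng.dirichlet(np.ones(2))
    assert free_energy(q_theta, q_x) <= log_evidence + 1e-12
```

Maximising F over the two free distributions tightens the bound; the gap is exactly the KL divergence mentioned in the text.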
Free-form optimisation with respect to the distributions Q_θ(θ) and Q_x(x) is done using calculus of variations, often resulting in algorithms that appear closely related to the corresponding EM algorithm. We formalise this relationship and others in the following sections. 3 Conjugate-Exponential Models We consider variational Bayesian learning in models that satisfy two conditions: Condition (1). The complete data likelihood is in the exponential family:
$$P(x, y|\theta) = f(x, y)\, g(\theta) \exp\{\phi(\theta)^\top u(x, y)\}$$
where φ(θ) is the vector of natural parameters, and u, f and g are the functions that define the exponential family. The list of latent-variable models of practical interest with complete-data likelihoods in the exponential family is very long. We mention a few: Gaussian mixtures, factor analysis, hidden Markov models and extensions, switching state-space models, Boltzmann machines, and discrete-variable belief networks.¹ Of course, there are also many as yet undreamed-of models combining Gaussian, Gamma, Poisson, Dirichlet, Wishart, Multinomial, and other distributions. Condition (2). The parameter prior is conjugate to the complete data likelihood:
$$P(\theta|\eta, \nu) = h(\eta, \nu)\, g(\theta)^{\eta} \exp\{\phi(\theta)^\top \nu\}$$
where η and ν are hyperparameters of the prior. Condition (2) in fact usually implies condition (1). Apart from some irregular cases, it has been shown that the exponential families are the only classes of distributions with a fixed number of sufficient statistics, hence allowing them to have natural conjugate priors. From the definition of conjugacy it is easy to see that the hyperparameters of a conjugate prior can be interpreted as the number (η) and values (ν) of pseudo-observations under the corresponding likelihood. We call models that satisfy conditions (1) and (2) conjugate-exponential.
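Conditions (1) and (2) can be made concrete with the Bernoulli likelihood, whose conjugate prior in the natural-parameter form above is a Beta density. The sketch below (hyperparameter values and data are illustrative) verifies numerically that the additive natural-parameter update matches the familiar Beta-Bernoulli posterior:

```python
import numpy as np

# Bernoulli likelihood in exponential-family form (Condition 1):
#   P(y|theta) = g(theta) exp{phi(theta) u(y)},
#   u(y) = y, phi(theta) = ln(theta/(1-theta)), g(theta) = 1 - theta.
# Conjugate prior (Condition 2): P(theta|eta,nu) ∝ g(theta)^eta exp{phi(theta) nu},
# a Beta(nu+1, eta-nu+1) density; the posterior update is purely additive:
#   eta -> eta + n,  nu -> nu + sum_i u(y_i).
eta, nu = 5.0, 2.0
y = np.array([1, 1, 0, 1, 0, 1, 1])
n, s = len(y), int(y.sum())
eta_post, nu_post = eta + n, nu + s

theta = np.linspace(1e-6, 1 - 1e-6, 100_000)

def conj_density(e, v):
    """Normalised conjugate density on the theta grid."""
    logp = e * np.log(1 - theta) + v * np.log(theta / (1 - theta))
    p = np.exp(logp - logp.max())
    return p / p.sum()

# Direct Beta-Bernoulli update for comparison: Beta(a, b) -> Beta(a + s, b + n - s)
a, b = nu + 1, eta - nu + 1
beta_logp = (a + s - 1) * np.log(theta) + (b + n - s - 1) * np.log(1 - theta)
beta_p = np.exp(beta_logp - beta_logp.max())
beta_p /= beta_p.sum()

assert np.allclose(conj_density(eta_post, nu_post), beta_p)
```

The additive update is exactly the pseudo-observation interpretation of (η, ν) stated in the text: n new observations increment the count η, and their sufficient statistics increment ν.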
¹Models whose complete-data likelihood is not in the exponential family (such as ICA with the logistic nonlinearity, or sigmoid belief networks) can often be approximated by models in the exponential family with additional hidden variables. In Bayesian inference we want to determine the posterior over parameters and hidden variables P(x, θ|y, η, ν). In general this posterior is neither conjugate nor in the exponential family. We therefore approximate the true posterior by the following factorised distribution: P(x, θ|y, η, ν) ≈ Q(x, θ) = Q_x(x)Q_θ(θ), and minimise
$$\mathrm{KL}(Q\|P) = \int dx\, d\theta\, Q(x, \theta) \ln \frac{Q(x, \theta)}{P(x, \theta|y, \eta, \nu)}$$
which is equivalent to maximising F(Q_θ(θ), Q_x(x), y). We provide several general results with no proof (the proofs follow from the definitions and Gibbs inequality). Theorem 1 Given an iid data set y = (y_1, ..., y_n), if the model satisfies conditions (1) and (2), then at the maxima of F(Q, y) (minima of KL(Q‖P)): (a) Q_θ(θ) is conjugate and of the form:
$$Q_\theta(\theta) = h(\tilde{\eta}, \tilde{\nu})\, g(\theta)^{\tilde{\eta}} \exp\{\phi(\theta)^\top \tilde{\nu}\}$$
where η̃ = η + n, ν̃ = ν + Σ_{i=1}^n ū(y_i), and ū(y_i) = ⟨u(x_i, y_i)⟩_Q, using ⟨·⟩_Q to denote expectation under Q. (b) Q_x(x) = ∏_{i=1}^n Q_{x_i}(x_i) and Q_{x_i}(x_i) is of the same form as the known-parameter posterior:
$$Q_{x_i}(x_i) \propto f(x_i, y_i) \exp\{\bar{\phi}(\theta)^\top u(x_i, y_i)\} = P(x_i|y_i, \bar{\phi}(\theta))$$
where φ̄(θ) = ⟨φ(θ)⟩_Q. Since Q_θ(θ) and Q_{x_i}(x_i) are coupled, (a) and (b) do not provide an analytic solution to the minimisation problem. We therefore solve the optimisation problem numerically by iterating between the fixed point equations given by (a) and (b), and we obtain the following variational Bayesian generalisation of the EM algorithm: VE Step: Compute the expected sufficient statistics ū(y) = Σ_i ū(y_i) under the hidden variable distributions Q_{x_i}(x_i). VM Step: Compute the expected natural parameters φ̄(θ) under the parameter distribution given by η̃ and ν̃. This reduces to the EM algorithm if we restrict the parameter density to a point estimate (i.e.
Dirac delta function), Q_θ(θ) = δ(θ − θ*), in which case the M step involves re-estimating θ*. Note that unless we make the assumption that the parameters and hidden variables factorise, we will not generally obtain the further hidden variable factorisation over n in (b). In that case, the distributions of x_i and x_j will be coupled for all cases i, j in the data set, greatly increasing the overall computational complexity of inference. 4 Belief Networks and Markov Networks The above result can be used to derive variational Bayesian learning algorithms for exponential family distributions that fall into two important special classes.² Corollary 1: Conjugate-Exponential Belief Networks. Let M be a conjugate-exponential model with hidden and visible variables z = (x, y) that satisfy a belief network factorisation. That is, each variable z_j has parents z_{p_j} and P(z|θ) = ∏_j P(z_j|z_{p_j}, θ). Then the approximating joint distribution for M satisfies the same belief network factorisation:
$$Q_z(z) = \prod_j Q(z_j|z_{p_j}, \tilde{\theta})$$
²A tutorial on belief networks and Markov networks can be found in [9]. where the conditional distributions have exactly the same form as those in the original model but with natural parameters φ(θ̃) = φ̄(θ). Furthermore, with the modified parameters θ̃, the expectations under the approximating posterior Q_x(x) ∝ Q_z(z) required for the VE Step can be obtained by applying the belief propagation algorithm if the network is singly connected and the junction tree algorithm if the network is multiply connected. This result is somewhat surprising as it shows that it is possible to infer the hidden states tractably while integrating over an ensemble of model parameters. This result generalises the derivation of variational learning for HMMs in [8], which uses the forward-backward algorithm as a subroutine. Theorem 2: Markov Networks. Let M be a model with hidden and visible variables z = (x, y) that satisfy a Markov network factorisation.
That is, the joint density can be written as a product of clique potentials ψ_j, P(z|θ) = g(θ) ∏_j ψ_j(C_j, θ), where each clique C_j is a subset of the variables in z. Then the approximating joint distribution for M satisfies the same Markov network factorisation:
$$Q_z(z) = \tilde{g} \prod_j \bar{\psi}_j(C_j)$$
where ψ̄_j(C_j) = exp{⟨ln ψ_j(C_j, θ)⟩_Q} are new clique potentials obtained by averaging over Q_θ(θ), and g̃ is a normalisation constant. Furthermore, the expectations under the approximating posterior Q_x(x) required for the VE Step can be obtained by applying the junction tree algorithm. Corollary 2: Conjugate-Exponential Markov Networks. Let M be a conjugate-exponential Markov network over the variables in z. Then the approximating joint distribution for M is given by Q_z(z) = g̃ ∏_j ψ_j(C_j, θ̃), where the clique potentials have exactly the same form as those in the original model but with natural parameters φ(θ̃) = φ̄(θ). For conjugate-exponential models in which belief propagation and the junction tree algorithm over hidden variables are intractable, further applications of Jensen's inequality can yield tractable factorisations in the usual way [7]. In the following section we derive a variational Bayesian treatment of linear-Gaussian state-space models. This serves two purposes. First, it will illustrate an application of Theorem 1. Second, linear-Gaussian state-space models are the cornerstone of stochastic filtering, prediction and control. A variational Bayesian treatment of these models provides a novel way to learn their structure, i.e. to identify the optimal dimensionality of their state-space. 5 State-space models In state-space models (SSMs), a sequence of D-dimensional real-valued observation vectors {y_1, ..., y_T}, denoted y_{1:T}, is modeled by assuming that at each time step t, y_t was generated from a K-dimensional real-valued hidden state variable x_t, and that the sequence of x's defines a first-order Markov process.
The joint probability of a sequence of states and observations is therefore given by (Figure 1):
$$P(x_{1:T}, y_{1:T}) = P(x_1)P(y_1|x_1) \prod_{t=2}^{T} P(x_t|x_{t-1})P(y_t|x_t).$$
We focus on the case where both the transition and output functions are linear and time-invariant and the distribution of the state and observation noise variables is Gaussian. This model is the linear-Gaussian state-space model:
$$x_t = A x_{t-1} + w_t, \qquad y_t = C x_t + v_t$$
[Figure 1: Belief network representation of a state-space model.] where A and C are the state transition and emission matrices and w_t and v_t are state and output noise. It is straightforward to generalise this to a linear system driven by some observed inputs, u_t. A Bayesian analysis of state-space models using MCMC methods can be found in [4]. The complete data likelihood for state-space models is Gaussian, which falls within the class of exponential family distributions. In order to derive a variational Bayesian algorithm by applying the results in the previous section we now turn to defining conjugate priors over the parameters. Priors. Without loss of generality we can assume that w_t has covariance equal to the unit matrix. The remaining parameters of a linear-Gaussian state-space model are the matrices A and C and the covariance matrix of the output noise, v_t, which we will call R and assume to be diagonal, R = diag(ρ)^{-1}, where ρ_i are the precisions (inverse variances) associated with each output. Each row vector of the A matrix, denoted a_k^⊤, is given a zero-mean Gaussian prior with inverse covariance matrix equal to diag(α). Each row vector of C, c_i^⊤, is given a zero-mean Gaussian prior with precision matrix equal to diag(ρ_i β). The dependence of the precision of c_i^⊤ on the noise output precision ρ_i is motivated by conjugacy. Intuitively, this prior links the scale of the signal and noise.
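The generative model above can be sketched directly. In the sample below all dimensions and parameter values are illustrative choices (including drawing the initial state from N(0, I)), not the settings used in the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample from a linear-Gaussian state-space model:
#   x_t = A x_{t-1} + w_t,  w_t ~ N(0, I)
#   y_t = C x_t + v_t,      v_t ~ N(0, R),  R = diag(r) (diagonal, as in the text)
def sample_ssm(A, C, r_diag, T, rng):
    K, D = A.shape[0], C.shape[0]
    x = np.zeros((T, K))
    y = np.zeros((T, D))
    x[0] = rng.standard_normal(K)          # illustrative initial-state choice
    for t in range(T):
        if t > 0:
            x[t] = A @ x[t - 1] + rng.standard_normal(K)
        y[t] = C @ x[t] + np.sqrt(r_diag) * rng.standard_normal(D)
    return x, y

A = np.array([[0.9, 0.1], [0.0, 0.8]])     # stable: eigenvalues inside unit circle
C = rng.standard_normal((10, 2))           # D = 10 outputs from K = 2 hidden states
x, y = sample_ssm(A, C, r_diag=np.ones(10), T=200, rng=rng)
assert x.shape == (200, 2) and y.shape == (200, 10)
```

Choosing A with spectral radius below one keeps the state process stationary, mirroring the eigenvalue range used for the synthetic experiments later in the paper.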
The prior over the output noise covariance matrix, R, is defined through the precision vector, ρ, which for conjugacy is assumed to be Gamma distributed³ with hyperparameters a and b:
$$P(\rho|a, b) = \prod_{i=1}^{D} \frac{b^{a}}{\Gamma(a)} \rho_i^{a-1} \exp\{-b\rho_i\}.$$
Here, α, β are hyperparameters that we can optimise to do automatic relevance determination (ARD) of hidden states, thus inferring the structure of the SSM. Variational Bayesian learning for SSMs Since A, C, ρ and x_{1:T} are all unknown, given a sequence of observations y_{1:T}, an exact Bayesian treatment of SSMs would require computing marginals of the posterior P(A, C, ρ, x_{1:T}|y_{1:T}). This posterior contains interaction terms up to fifth order (for example, between elements of C, x and ρ), and is not analytically manageable. However, since the model is conjugate-exponential we can apply Theorem 1 to derive a variational EM algorithm for state-space models analogous to the maximum-likelihood EM algorithm [10]. Moreover, since SSMs are singly connected belief networks, Corollary 1 tells us that we can make use of belief propagation, which in the case of SSMs is known as the Kalman smoother. Writing out the expression for log P(A, C, ρ, x_{1:T}, y_{1:T}), one sees that it contains interaction terms between ρ and C, but none between A and either ρ or C. This observation implies a further factorisation, Q(A, C, ρ) = Q(A)Q(C, ρ), which falls out of the initial factorisation and the conditional independencies of the model. Starting from some arbitrary distribution over the hidden variables, the VM step obtained by applying Theorem 1 computes the expected natural parameters of Q_θ(θ), where θ = (A, C, ρ). ³More generally, if we let R be a full covariance matrix, for conjugacy we would give its inverse V = R^{-1} a Wishart distribution: P(V|ν, S) ∝ |V|^{(ν−D−1)/2} exp{−½ tr VS^{-1}}, where tr is the matrix trace operator. We proceed to solve for Q(A).
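As derived in the next paragraph, the mean of Q(A) is S^⊤(diag(α) + W)^{-1}, built from the smoothed sufficient statistics S and W. The following hedged sketch checks that this posterior mean recovers a known transition matrix; for illustration only, the true simulated states stand in for the Kalman-smoothed expectations ⟨·⟩ (all sizes and values are assumptions, not the paper's experimental settings):

```python
import numpy as np

rng = np.random.default_rng(2)

# Posterior mean of the transition matrix A with rows a_k ~ N(0, diag(alpha)^{-1})
# and unit state noise:
#   A_bar = S^T (diag(alpha) + W)^{-1},
#   S = sum_{t=2}^{T} <x_{t-1} x_t^T>,  W = sum_{t=1}^{T-1} <x_t x_t^T>.
# Here the TRUE simulated states replace the expectations <.> -- an
# illustrative shortcut, not the full variational algorithm.
K, T = 3, 5000
A_true = 0.5 * np.eye(K) + 0.1 * rng.standard_normal((K, K))
x = np.zeros((T, K))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + rng.standard_normal(K)

alpha = 1e-3 * np.ones(K)                  # weak ARD precisions
S = x[:-1].T @ x[1:]                       # sum of x_{t-1} x_t^T
W = x[:-1].T @ x[:-1]                      # sum of x_t x_t^T over t = 1..T-1
A_bar = S.T @ np.linalg.inv(np.diag(alpha) + W)

assert np.abs(A_bar - A_true).max() < 0.1
```

With the weak prior this reduces to ridge regression of x_t on x_{t-1}, which is exactly the point-estimate (EM) limit of the update.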
We know from Theorem 1 that Q(A) is multivariate Gaussian, like the prior, so we only need to compute its mean and covariance. A has mean S^⊤(diag(α) + W)^{-1} and each row of A has covariance (diag(α) + W)^{-1}, where S = Σ_{t=2}^{T} ⟨x_{t-1} x_t^⊤⟩, W = Σ_{t=1}^{T-1} ⟨x_t x_t^⊤⟩, and ⟨·⟩ denotes averaging w.r.t. the Q(x_{1:T}) distribution. Q(C, ρ) is also of the same form as the prior. Q(ρ) is a product of Gamma densities, Q(ρ_i) = G(ρ_i; ã, b̃_i), where ã = a + T/2, b̃_i = b + g_i/2, g_i = Σ_{t=1}^{T} y_{ti}² − Ū_i(diag(β) + W')^{-1}Ū_i^⊤, Ū_i = Σ_{t=1}^{T} y_{ti}⟨x_t^⊤⟩ and W' = W + ⟨x_T x_T^⊤⟩. Given ρ, each row of C is Gaussian with covariance cov(c_i) = (diag(β) + W')^{-1}/ρ_i and mean c̄_i = ρ_i Ū_i cov(c_i). Note that S, W and Ū_i are the expected complete data sufficient statistics ū mentioned in Theorem 1(a). Using the parameter distributions the hyperparameters can also be optimised.⁴ We now turn to the VE step: computing Q(x_{1:T}). Since the model is a conjugate-exponential singly-connected belief network, we can use belief propagation (Corollary 1). For SSMs this corresponds to the Kalman smoothing algorithm, where every appearance of the natural parameters of the model is replaced with the following corresponding expectations under the Q distribution: ⟨ρ_i c_i⟩, ⟨ρ_i c_i c_i^⊤⟩, ⟨A⟩, ⟨A^⊤A⟩. Details can be found in [2]. As for PCA [3], independent components analysis [1], and mixtures of factor analysers [5], the variational Bayesian algorithm for state-space models can be used to learn the structure of the model as well as average over parameters. Specifically, using F it is possible to compare models with different state-space sizes and optimise the dimensionality of the state-space, as we demonstrate in the following section. 6 Results Experiment 1: The goal of this experiment was to see if the variational method could infer the structure of a variety of state space models by optimising over α and β. We generated a 200-step time series of 10-dimensional data from three models:⁵ (a) a factor analyser (i.e.
an SSM with A = 0) with 3 factors (static state variables); (b) an SSM with 3 dynamical interacting state variables, i.e. A ≠ 0; (c) an SSM with 3 interacting dynamical and 1 static state variables. The variational Bayesian method correctly inferred the structure of each model in 2-3 minutes of CPU time on a 500 MHz Pentium III (Fig. 2(a)-(c)). Experiment 2: We explored the effect of data set size on the complexity of the recovered structure. 10-dimensional time series were generated from a 6 state-variable SSM. On reducing the length of the time series from 400 to 10 steps the recovered structure became progressively less complex (Fig. 2(d)-(j)), down to a 1-variable static model (j). This result agrees with the Bayesian perspective that the complexity of the model should reflect the data support. Experiment 3 (Steel plant): 38 sensors (temperatures, pressures, etc.) were sampled at 2 Hz from a continuous casting process for 150 seconds. These sensors covaried and were temporally correlated, suggesting a state-space model could capture some of its structure. The variational algorithm inferred that 16 state variables were required, of which 14 emitted outputs. While we do not know whether this is reasonable structure, we plan to explore this as well as other real data sets. ⁴The ARD hyperparameters become α_k = K/⟨A^⊤A⟩_{kk} and β_k = D/⟨C^⊤ diag(ρ) C⟩_{kk}. The hyperparameters a and b solve the fixed point equations ψ(a) = ln b + (1/D) Σ_{i=1}^{D} ⟨ln ρ_i⟩ and a/b = (1/D) Σ_{i=1}^{D} ⟨ρ_i⟩, where ψ(w) = ∂/∂w ln Γ(w) is the digamma function. ⁵Parameters were chosen as follows: R = I, elements of C sampled from Unif(−5, 5), and A chosen with eigenvalues in [0.5, 0.9]. [Figure 2: The elements of the A and C matrices after learning are displayed graphically. A link is drawn from node k in x_{t−1} to node l in x_t iff 1/α_k > ε, and either 1/β_l > ε or 1/α_l > ε, for a small threshold ε.
Similarly, links are drawn from node k of x_t to y_t if 1/β_k > ε. The graph therefore shows the links that take part in the dynamics and the output.] 7 Conclusions We have derived a general variational Bayesian learning algorithm for models in the conjugate-exponential family. There are a large number of interesting models that fall in this family, and the results in this paper should allow an almost automated protocol for implementing a variational Bayesian treatment of these models. We have given one example of such an implementation, state-space models, and shown that the VB algorithm can be used to rapidly infer the hidden state dimensionality. Using the theory laid out in this paper it is straightforward to generalise the algorithm to mixtures of SSMs, switching SSMs, etc. For conjugate-exponential models, integrating both belief propagation and the junction tree algorithm into the variational Bayesian framework simply amounts to computing expectations of the natural parameters. Moreover, the variational Bayesian algorithm contains EM as a special case. We believe this paper provides the foundations for a general algorithm for variational Bayesian learning in graphical models. References [1] H. Attias. A variational Bayesian framework for graphical models. In Advances in Neural Information Processing Systems 12. MIT Press, Cambridge, MA, 2000. [2] M. J. Beal and Z. Ghahramani. The variational Kalman smoother. Technical report, Gatsby Computational Neuroscience Unit, University College London, 2000. [3] C. M. Bishop. Variational PCA. In Proc. Ninth ICANN, 1999. [4] S. Frühwirth-Schnatter. Bayesian model discrimination and Bayes factors for linear Gaussian state space models. J. Royal Stat. Soc. B, 57:237-246, 1995. [5] Z. Ghahramani and M. J. Beal. Variational inference for Bayesian mixtures of factor analysers. In Adv. Neur. Inf. Proc. Sys. 12. MIT Press, Cambridge, MA, 2000. [6] G. E. Hinton and D. van Camp.
Keeping neural networks simple by minimizing the description length of the weights. In Sixth ACM Conference on Computational Learning Theory, Santa Cruz, 1993. [7] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods in graphical models. Machine Learning, 37:183-233, 1999. [8] D. J. C. MacKay. Ensemble learning for hidden Markov models. Technical report, Cavendish Laboratory, University of Cambridge, 1997. [9] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, CA, 1988. [10] R. H. Shumway and D. S. Stoffer. An approach to time series smoothing and forecasting using the EM algorithm. J. Time Series Analysis, 3(4):253-264, 1982. [11] S. Waterhouse, D. J. C. MacKay, and T. Robinson. Bayesian methods for mixtures of experts. In Adv. Neur. Inf. Proc. Sys. 7. MIT Press, 1995.
A PAC-Bayesian Margin Bound for Linear Classifiers: Why SVMs work Ralf Herbrich Statistics Research Group Computer Science Department Technical University of Berlin ralfh@cs.tu-berlin.de Thore Graepel Statistics Research Group Computer Science Department Technical University of Berlin guru@cs.tu-berlin.de Abstract We present a bound on the generalisation error of linear classifiers in terms of a refined margin quantity on the training set. The result is obtained in a PAC-Bayesian framework and is based on geometrical arguments in the space of linear classifiers. The new bound constitutes an exponential improvement of the so far tightest margin bound by Shawe-Taylor et al. [8] and scales logarithmically in the inverse margin. Even in the case of fewer training examples than input dimensions, sufficiently large margins lead to non-trivial bound values, and for maximum margins to a vanishing complexity term. Furthermore, the classical margin is too coarse a measure for the essential quantity that controls the generalisation error: the volume ratio between the whole hypothesis space and the subset of consistent hypotheses. The practical relevance of the result lies in the fact that the well-known support vector machine is optimal w.r.t. the new bound only if the feature vectors are all of the same length. As a consequence we recommend to use SVMs on normalised feature vectors only, a recommendation that is well supported by our numerical experiments on two benchmark data sets. 1 Introduction Linear classifiers are exceedingly popular in the machine learning community due to their straightforward applicability and high flexibility, which has recently been boosted by the so-called kernel methods [13]. A natural and popular framework for the theoretical analysis of classifiers is the PAC (probably approximately correct) framework [11], which is closely related to Vapnik's work on the generalisation error [12].
For binary classifiers it turned out that the growth function is an appropriate measure of "complexity" and can be tightly upper bounded by the VC (Vapnik-Chervonenkis) dimension [14]. Later, structural risk minimisation [12] was suggested for directly minimising the VC dimension based on a training set and an a priori structuring of the hypothesis space. In practice, e.g. in the case of linear classifiers, often a thresholded real-valued function is used for classification. In 1993, Kearns [4] demonstrated that considerably tighter bounds can be obtained by considering a scale-sensitive complexity measure known as the fat shattering dimension. Further results [1] provided bounds on the growth function similar to those proved by Vapnik and others [14, 6]. The popularity of the theory was boosted by the invention of the support vector machine (SVM) [13], which aims at directly minimising the complexity as suggested by theory. Until recently, however, the success of the SVM remained somewhat obscure because in PAC/VC theory the structuring of the hypothesis space must be independent of the training data, in contrast to the data-dependence of the canonical hyperplane. As a consequence, Shawe-Taylor et al. [8] developed the luckiness framework, where luckiness refers to a complexity measure that is a function of both hypothesis and training sample. Recently, David McAllester presented some PAC-Bayesian theorems [5] that bound the generalisation error of Bayesian classifiers independently of the correctness of the prior and regardless of the underlying data distribution, thus fulfilling the basic desiderata of PAC theory. In [3] McAllester's bounds on the Gibbs classifier were extended to the Bayes (optimal) classifier. The PAC-Bayesian framework provides a posteriori bounds and is thus closely related in spirit to the luckiness framework.¹ In this paper we give a tight margin bound for linear classifiers in the PAC-Bayesian framework.
The main idea is to identify the generalisation error of the classifier h of interest with that of the Bayes (optimal) classifier of a (point-symmetric) subset Q that is summarised by h. We show that for a uniform prior the normalised margin of h is directly related to the volume of a large subset Q summarised by h. In particular, the result suggests that a learning algorithm for linear classifiers should aim at maximising the normalised margin instead of the classical margin. In Sections 2 and 3 we review the basic PAC-Bayesian theorem and show how it can be applied to single classifiers. In Section 4 we give our main result and outline its proof. In Section 5 we discuss the consequences of the new result for the application of SVMs and demonstrate experimentally that in fact a normalisation of the feature vectors leads to considerably superior generalisation performance. We denote n-tuples by italic bold letters (e.g. x = (x_1, ..., x_n)), vectors by roman bold letters (e.g. x), random variables by sans serif font (e.g. X) and vector spaces by calligraphic capitalised letters (e.g. X). The symbols P, E, I and ℓ₂ⁿ denote a probability measure, the expectation of a random variable, the indicator function and the normed space (2-norm) of sequences of length n, respectively. 2 A PAC Margin Bound We consider learning in the PAC framework. Let X be the input space, and let Y = {−1, +1}. Let a labelled training sample z = (x, y) ∈ (X × Y)^m = Z^m be drawn iid according to some unknown probability measure P_Z = P_{Y|X}P_X. Furthermore, for a given hypothesis space H ⊆ Y^X we assume the existence of a "true" hypothesis h* ∈ H that labelled the data: P_{Y|X=x}(y) = I_{y=h*(x)}. (1) We consider linear hypotheses
$$\mathcal{H} = \{h_{\mathbf{w}} : x \mapsto \mathrm{sign}(\langle \mathbf{w}, \phi(x) \rangle_{\mathcal{K}}) \mid \mathbf{w} \in \mathcal{W}\}, \qquad \mathcal{W} = \{\mathbf{w} \in \mathcal{K} \mid \|\mathbf{w}\|_{\mathcal{K}} = 1\}, \qquad (2)$$
¹In fact, even Shawe-Taylor et al. concede that "... a Bayesian might say that luckiness is just a complicated way of encoding a prior.
The sole justification for our particular way of encoding is that it allows us to get the PAC like results we sought ..." [9, p. 4]. where the mapping φ : X → K ⊆ ℓ₂ⁿ maps² the input data to some feature space K, and ‖w‖_K = 1 leads to a one-to-one correspondence of hypotheses h_w to their parameters w. From the existence of h* we know that there exists a version space V(z) ⊆ W, V(z) = {w ∈ W | ∀(x, y) ∈ z : h_w(x) = y}. Our analysis aims at bounding the true risk R[w] of consistent hypotheses h_w, R[w] = P_{XY}(h_w(X) ≠ Y). Since all classifiers w ∈ V(z) are indistinguishable in terms of the number of errors committed on the given training set z, let us introduce the concept of the margin γ_z(w) of a classifier w, i.e.
$$\gamma_z(\mathbf{w}) = \min_{(x_i, y_i) \in z} \frac{y_i \langle \mathbf{w}, \mathbf{x}_i \rangle_{\mathcal{K}}}{\|\mathbf{w}\|_{\mathcal{K}}} \qquad (3)$$
The following theorem due to Shawe-Taylor et al. [8] bounds the generalisation errors R[w] of all classifiers w ∈ V(z) in terms of the margin γ_z(w). Theorem 1 (PAC margin bound). For all probability measures P_Z such that P_X(‖φ(X)‖_K ≤ ς) = 1, for any δ > 0, with probability at least 1 − δ over the random draw of the training set z, if we succeed in correctly classifying m samples z with a linear classifier w achieving a positive margin γ_z(w) > √(32ς²/m), then the generalisation error R[w] of w is bounded from above by the expression given in (4). As the bound on R[w] depends linearly on γ_z^{-2}(w), we see that Theorem 1 provides a theoretical foundation for all algorithms that aim at maximising γ_z(w), e.g. SVMs and Boosting [13, 7]. 3 PAC-Bayesian Analysis We first present a result [5] that bounds the risk of the generalised Gibbs classification strategy Gibbs_{W(z)} by the measure P_W(W(z)) of a consistent subset W(z) ⊆ V(z). This average risk is then related via the Bayes-Gibbs lemma to the risk of the Bayes classification strategy Bayes_{W(z)} on W(z). For a single consistent hypothesis w ∈ W it is then necessary to identify a consistent subset Q(w) such that the Bayes strategy Bayes_{Q(w)} on Q(w) always agrees with w.
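The classical margin (3) is straightforward to compute for a linear classifier on a finite sample. The sketch below uses a toy, linearly separable data set (all values are illustrative) and checks that a consistent classifier has a positive margin:

```python
import numpy as np

# Classical margin (3) of a linear classifier w on a sample z = {(x_i, y_i)}:
#   gamma_z(w) = min_i  y_i <w, x_i> / ||w||
def margin(w, X, y):
    return np.min(y * (X @ w)) / np.linalg.norm(w)

# Toy linearly separable sample (illustrative values)
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -1.0], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w = np.array([1.0, 1.0])

g = margin(w, X, y)
assert g > 0   # w is consistent, i.e. it lies in the version space V(z)
```

A positive value certifies membership of w in V(z); a zero or negative value means at least one training point is on or beyond the decision boundary.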
Let us define the Gibbs classification strategy Gibbs_{W(z)} w.r.t. the subset W(z) ⊆ V(z) by Gibbs_{W(z)}(x) = h_w(x), w ~ P_{W|W∈W(z)}. (5) Then the following theorem [5] holds for the risk of Gibbs_{W(z)}. Theorem 2 (PAC-Bayesian bound for subsets of classifiers). For any measure P_W and any measure P_Z, for any δ > 0, with probability at least 1 − δ over the random draw of the training set z, for all subsets W(z) ⊆ V(z) such that P_W(W(z)) > 0, the generalisation error of the associated Gibbs classification strategy Gibbs_{W(z)} is bounded from above by
$$R[\mathrm{Gibbs}_{W(z)}] \le \frac{1}{m}\left(\ln\left(\frac{1}{P_{\mathbf{W}}(W(z))}\right) + 2\ln(m) + \ln\left(\frac{1}{\delta}\right) + 1\right). \qquad (6)$$
²For notational simplicity we sometimes abbreviate φ(x) by x, which should not be confused with the sample x of training objects. Now consider the Bayes classifier Bayes_{W(z)}, Bayes_{W(z)}(x) = sign(E_{W|W∈W(z)}[h_W(x)]), where the expectation E_{W|W∈W(z)} is taken over a cut-off posterior given by combining the PAC-likelihood (1) and the prior P_W. Lemma 1 (Bayes-Gibbs Lemma). For any two measures P_W and P_{XY} and any set W ⊆ W,
$$P_{XY}(\mathrm{Bayes}_W(X) \ne Y) \le 2 \cdot P_{XY}(\mathrm{Gibbs}_W(X) \ne Y). \qquad (7)$$
Proof. (Sketch) Consider only the simple PAC setting we need. At all those points x ∈ X at which Bayes_W is wrong, by definition at least half of the classifiers w ∈ W under consideration make a mistake as well. □ The combination of Lemma 1 with Theorem 2 yields a bound on the risk of Bayes_{W(z)}.
For uniform measure Pw over W each ball Q (w) = {v E W Illw - vlliC :s; r} is Bayes admissible w.r.t. its centre w . Please note that by considering a ball Q (w) rather than just w we make use of the fact that w summarises all its neighbouring classifiers v E Q (w). Now using a uniform prior Pw the normalised margin (8) quantifies the relative volume of classifiers summarised by wand thus allows us to bound its risk. Note that in contrast to the classical margin '"Yz (see 3) this normalised margin is a dimensionless quantity and constitutes a measure for the relative size of the version space invariant under rescaling of both weight vectors w and feature vectors Xi. 4 A PAC-Bayesian Margin Bound Combining the ideas outlined in the previous section allows us to derive a generalisation error bound for linear classifiers w E V (z) in terms of their normalised margin r z (w). Figure 1: Illustration of the volume ratio for the classifier at the north pole. Four training points shown as grand circles make up version space the polyhedron on top of the sphere. The radius of the "cap" of the sphere is proportional to the margin r %, which only for constant Ilxill.~: is maximised by the SVM. Theorem 3 (PAC-Bayesian margin bound). Suppose K ~ f~ is a given feature space of dimensionality n. For all probability measures Pz, for any 8 > 0 with probability at least 1 - 8 over the random draw of the training set z, if we succeed in correctly classifying m samples z with a linear classifier w achieving a positive margin r % (w) > 0 then the generalisation error R [w] of w is bounded from above by ~(dln( 1 )+2In(m)+ln(~)+2) (9) m 1 - VI - r~ (w) u where d = min (m, n). Proof. Geometrically the hypothesis space W is the unit sphere in ~n (see Figure 1). Let us assume that Pw is uniform on the unit sphere as suggested by symmetry. Given the training set z and a classifier wall classifiers v E Q (w) Q (w) = {v E W I (w, v)K > Vl- r~ (w) } (10) are within V (z) (For a proof see [2]). 
Such a set Q(w) is Bayes-admissible by Lemma 2, and hence we can use P_W(Q(w)) to bound the generalisation error of w. Since P_W is uniform, the value −ln(P_W(Q(w))) is simply the logarithm of the volume ratio between the surface of the unit sphere and the surface of all v fulfilling equation (10). In [2] it is shown that this ratio is exactly given by
$$\ln\left(\frac{\int_0^{\pi} \sin^{n-2}(\theta)\, d\theta}{\int_0^{\arccos\left(\sqrt{1 - \Gamma_z^2(\mathbf{w})}\right)} \sin^{n-2}(\theta)\, d\theta}\right).$$
It can be shown that this ratio is tightly bounded from above by
$$n \ln\left(\frac{1}{1 - \sqrt{1 - \Gamma_z^2(\mathbf{w})}}\right) + \ln(2).$$
[Figure 2: Generalisation errors of classifiers learned by an SVM with (dashed line) and without (solid line) normalisation of the feature vectors x_i. The error bars indicate one standard deviation over 100 random splits of the data sets. The two plots are obtained on the (a) thyroid and (b) sonar data sets.] With ln(2) < 1 we obtain the desired result. Note that m points maximally span an m-dimensional space and thus we can marginalise over the remaining n − m dimensions of feature space K. This gives d = min(m, n). □ An appealing feature of equation (9) is that for Γ_z(w) = 1 the bound reduces to (2/m)(2 ln(m) − ln(δ) + 2), with a rapid decay to zero as m increases. In the case of margins Γ_z(w) > 0.91 the troublesome situation of d = m, which occurs e.g. for RBF kernels, is compensated for. Furthermore, upper bounding 1/(1 − √(1 − Γ²)) by 2/Γ², we see that Theorem 3 is an exponential improvement of Theorem 1 in terms of the attained margins. It should be noted, however, that the new bound depends on the dimensionality of the input space via d = min(m, n). 5 Experimental Study Theorem 3 suggests the following learning algorithm: given a version space V(z) (through a given training set z), find the classifier w that maximises Γ_z(w). This algorithm, however, is given by the SVM only if the training data in feature space K are normalised.
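The behaviour of the bound (9) is easy to inspect numerically. The sketch below (sample size, dimension, and confidence level are illustrative choices) evaluates the right-hand side and checks two properties stated in the text: the bound shrinks as the normalised margin grows, and the complexity term vanishes at Γ_z(w) = 1:

```python
import numpy as np

# Right-hand side of the PAC-Bayesian margin bound (9):
#   (2/m) ( d ln(1/(1 - sqrt(1 - G^2))) + 2 ln m + ln(1/delta) + 2 ),
# where G = Gamma_z(w) is the normalised margin and d = min(m, n).
def pac_bayes_bound(G, m, n, delta=0.05):
    d = min(m, n)
    return (2.0 / m) * (d * np.log(1.0 / (1.0 - np.sqrt(1.0 - G ** 2)))
                        + 2.0 * np.log(m) + np.log(1.0 / delta) + 2.0)

m, n = 1000, 20

# Larger normalised margins give smaller bound values
assert pac_bayes_bound(0.9, m, n) < pac_bayes_bound(0.5, m, n)

# For G = 1 the complexity term vanishes: (2/m)(2 ln m + ln(1/delta) + 2)
b1 = pac_bayes_bound(1.0, m, n)
assert np.isclose(b1, (2.0 / m) * (2.0 * np.log(m) + np.log(1.0 / 0.05) + 2.0))
```

Because the complexity term grows only logarithmically in 1/Γ², even moderate normalised margins keep the bound non-trivial when d = min(m, n) is large.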
We investigate the influence of such a normalisation on the generalisation error in the feature space $\mathcal{K}$ of all monomials up to the $p$-th degree (well known from handwritten digit recognition, see [13]). Since the SVM learning algorithm as well as the resulting classifier only refer to inner products in $\mathcal{K}$, it suffices to use an easy-to-calculate kernel function $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ such that for all $x, x' \in \mathcal{X}$, $k(x, x') = \langle \phi(x), \phi(x') \rangle_{\mathcal{K}}$, given in our case by the polynomial kernel

$$\forall p \in \mathbb{N}: \quad k(x, x') = \left( \langle x, x' \rangle_{\mathcal{X}} + 1 \right)^p.$$

Earlier experiments have shown [13] that without normalisation too large values of $p$ may lead to "overfitting". We used the UCI [10] data sets thyroid ($d = 5$, $m = 140$, $m_{\text{test}} = 75$) and sonar ($d = 60$, $m = 124$, $m_{\text{test}} = 60$) and plotted the generalisation error of SVM solutions (estimated over 100 different splits of the data set) as a function of $p$ (see Figure 2). As suggested by Theorem 3, in almost all cases the normalisation improved the performance of the support vector machine solution at a statistically significant level. As a consequence, we recommend: when training an SVM, always normalise your data in feature space. Intuitively, it is only the spatial direction of both the weight vector and the feature vectors that determines the classification. Hence the different lengths of feature vectors in the training set should not enter the SVM optimisation problem.

6 Conclusion

The PAC-Bayesian framework together with simple geometrical arguments yields the so far tightest margin bound for linear classifiers. The role of the normalised margin $\Gamma_z$ in the new bound suggests that the SVM is theoretically justified only for input vectors of constant length.
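Since the learning algorithm touches $\mathcal{K}$ only through inner products, the recommended unit-length normalisation $\phi(x)/\|\phi(x)\|_{\mathcal{K}}$ can itself be carried out at the kernel level via $\tilde{k}(x, x') = k(x, x') / \sqrt{k(x, x)\, k(x', x')}$. A minimal sketch (helper names ours, not from the paper):

```python
import math

def poly_kernel(x, xp, p):
    """Polynomial kernel k(x, x') = (<x, x'> + 1)^p."""
    return (sum(a * b for a, b in zip(x, xp)) + 1.0) ** p

def normalised_kernel(k, x, xp):
    """Kernel of the unit-length feature vectors phi(x)/||phi(x)||_K:
    k~(x, x') = k(x, x') / sqrt(k(x, x) * k(x', x'))."""
    return k(x, xp) / math.sqrt(k(x, x) * k(xp, xp))
```

After this rescaling every point satisfies $\|\phi(x)\|_{\mathcal{K}} = 1$, so (by Cauchy-Schwarz) $|\tilde{k}| \le 1$, and maximising the classical margin on the normalised data also maximises the normalised margin $\Gamma_z$, as Theorem 3 suggests it should.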
We hope that this result is recognised as a useful bridge between theory and practice, in the spirit of Vapnik's famous statement: "Nothing is more practical than a good theory."

Acknowledgements

We would like to thank David McAllester, John Shawe-Taylor, Bob Williamson, Olivier Chapelle, John Langford, Alex Smola and Bernhard Schölkopf for interesting discussions and useful suggestions on earlier drafts.

References

[1] N. Alon, S. Ben-David, N. Cesa-Bianchi, and D. Haussler. Scale-sensitive dimensions, uniform convergence and learnability. Journal of the ACM, 44(4):615-631, 1997.
[2] R. Herbrich. Learning Linear Classifiers - Theory and Algorithms. PhD thesis, Technische Universität Berlin, 2000. Accepted for publication by MIT Press.
[3] R. Herbrich, T. Graepel, and C. Campbell. Bayesian learning in reproducing kernel Hilbert spaces. Technical Report TR 99-11, Technical University of Berlin, 1999.
[4] M. J. Kearns and R. Schapire. Efficient distribution-free learning of probabilistic concepts. Journal of Computer and System Sciences, 48(2):464-497, 1993.
[5] D. A. McAllester. Some PAC-Bayesian theorems. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 230-234, Madison, Wisconsin, 1998.
[6] N. Sauer. On the density of families of sets. Journal of Combinatorial Theory, Series A, 13:145-147, 1972.
[7] R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. In Proceedings of the 14th International Conference on Machine Learning, 1997.
[8] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44(5):1926-1940, 1998.
[9] J. Shawe-Taylor and R. C. Williamson. A PAC analysis of a Bayesian estimator. Technical Report NC2-TR-1997-013, Royal Holloway, University of London, 1997.
[10] UCI. University of California, Irvine: Machine Learning Repository, 1990.
[11] L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, 1984.
[12] V. Vapnik. Estimation of Dependences Based on Empirical Data. Springer, 1982.
[13] V. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
[14] V. Vapnik and A. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264-281, 1971.
An Adaptive Metric Machine for Pattern Classification

Carlotta Domeniconi, Jing Peng+, Dimitrios Gunopulos
Dept. of Computer Science, University of California, Riverside, CA 92521
+ Dept. of Computer Science, Oklahoma State University, Stillwater, OK 74078
{carlotta, dg}@cs.ucr.edu, jpeng@cs.okstate.edu

Abstract

Nearest neighbor classification assumes locally constant class conditional probabilities. This assumption becomes invalid in high dimensions with finite samples due to the curse of dimensionality. Severe bias can be introduced under these conditions when using the nearest neighbor rule. We propose a locally adaptive nearest neighbor classification method to try to minimize bias. We use a Chi-squared distance analysis to compute a flexible metric for producing neighborhoods that are elongated along less relevant feature dimensions and constricted along most influential ones. As a result, the class conditional probabilities tend to be smoother in the modified neighborhoods, whereby better classification performance can be achieved. The efficacy of our method is validated and compared against other techniques using a variety of real world data.

1 Introduction

In classification, a feature vector $x = (x_1, \ldots, x_q)^T \in \mathbb{R}^q$, representing an object, is assumed to be in one of $J$ classes $\{j\}_{j=1}^{J}$, and the objective is to build classifier machines that assign $x$ to the correct class from a given set of $N$ training samples. The $K$ nearest neighbor (NN) classification method [3, 5, 7, 8, 9] is a simple and appealing approach to this problem. Such a method produces continuous and overlapping, rather than fixed, neighborhoods and uses a different neighborhood for each individual query, so that all points in the neighborhood are close to the query, to the extent possible. In addition, it has been shown [4, 6] that the one NN rule has an asymptotic error rate that is at most twice the Bayes error rate, independent of the distance metric used.
The NN rule becomes less appealing with finite training samples, however. This is due to the curse of dimensionality [2]. Severe bias can be introduced in the NN rule in a high-dimensional input feature space with finite samples. As such, the choice of a distance measure becomes crucial in determining the outcome of nearest neighbor classification. The commonly used Euclidean distance measure, while simple computationally, implies that the input space is isotropic or homogeneous. However, the assumption of isotropy is often invalid and generally undesirable in many practical applications. In general, distance computation does not vary with equal strength or in the same proportion in all directions in the feature space emanating from the input query. Capturing such information, therefore, is of great importance to any classification procedure in high-dimensional settings. In this paper we propose an adaptive metric classification method to try to minimize bias in high dimensions. We estimate a flexible metric for computing neighborhoods based on Chi-squared distance analysis. The resulting neighborhoods are highly adaptive to query locations. Moreover, the neighborhoods are elongated along less relevant feature dimensions and constricted along most influential ones. As a result, the class conditional probabilities tend to be constant in the modified neighborhoods, whereby better classification performance can be obtained.

2 Local Feature Relevance Measure

Our technique is motivated as follows. Let $x_0$ be the test point whose class membership we are predicting. In the one NN classification rule, a single nearest neighbor $x$ is found according to a distance metric $D(x, x_0)$. Let $\Pr(j|x)$ be the class conditional probability at point $x$. Consider the weighted Chi-squared distance [8, 11]

$$D(x, x_0) = \sum_{j=1}^{J} \frac{\left[\Pr(j|x) - \Pr(j|x_0)\right]^2}{\Pr(j|x_0)}, \qquad (1)$$

which measures the distance between $x_0$ and the point $x$ in terms of the difference between the class posterior probabilities at the two points. Small $D(x, x_0)$ indicates that the classification error rate will be close to the asymptotic error rate for one nearest neighbor. In general, this can be achieved when $\Pr(j|x) = \Pr(j|x_0)$, which states that if $\Pr(j|x)$ can be sufficiently well approximated at $x_0$, the asymptotic 1-NN error rate might result in finite sample settings. Equation (1) computes the distance between the true and estimated posteriors. Now, imagine we replace $\Pr(j|x_0)$ with a quantity that attempts to predict $\Pr(j|x)$ under the constraint that the quantity is conditioned at a location along a particular feature dimension. Then, the Chi-squared distance (1) tells us the extent to which that dimension can be relied on to predict $\Pr(j|x)$. Thus, Equation (1) provides us with a foundation upon which to develop a theory of feature relevance in the context of pattern classification. Based on the above discussion, our proposal is the following. We first notice that $\Pr(j|x)$ is a function of $x$. Therefore, we can compute the conditional expectation of $\Pr(j|x)$, denoted by $\Pr(j|x_i = z)$, given that $x_i$ assumes value $z$, where $x_i$ represents the $i$th component of $x$. That is,

$$\Pr(j|x_i = z) = E[\Pr(j|x) \mid x_i = z] = \int \Pr(j|x)\, p(x|x_i = z)\, dx.$$

Here $p(x|x_i = z)$ is the conditional density of the other input variables. Let

$$r_i(x) = \sum_{j=1}^{J} \frac{\left[\Pr(j|x) - \Pr(j|x_i = z_i)\right]^2}{\Pr(j|x_i = z_i)}. \qquad (2)$$

$r_i(x)$ represents the ability of feature $i$ to predict the $\Pr(j|x)$s at $x_i = z_i$. The closer $\Pr(j|x_i = z_i)$ is to $\Pr(j|x)$, the more information feature $i$ carries for predicting the class posterior probabilities locally at $x$. We can now define a measure of feature relevance for $x_0$ as

$$\bar{r}_i(x_0) = \frac{1}{K} \sum_{z \in N(x_0)} r_i(z), \qquad (3)$$

where $N(x_0)$ denotes the neighborhood of $x_0$ containing the $K$ nearest training points, according to a given metric.
$\bar{r}_i$ measures how well on average the class posterior probabilities can be approximated along input feature $i$ within a local neighborhood of $x_0$. Small $\bar{r}_i$ implies that the class posterior probabilities will be well captured along dimension $i$ in the vicinity of $x_0$. Note that $\bar{r}_i(x_0)$ is a function of both the test point $x_0$ and the dimension $i$, thereby making $\bar{r}_i(x_0)$ a local relevance measure. The relative relevance, as a weighting scheme, can then be given by the following exponential weighting scheme:

$$w_i(x_0) = \exp(c\, R_i(x_0)) \Big/ \sum_{l=1}^{q} \exp(c\, R_l(x_0)), \qquad (4)$$

where $c$ is a parameter that can be chosen to maximize (minimize) the influence of $\bar{r}_i$ on $w_i$, and $R_i(x) = \max_j \bar{r}_j(x) - \bar{r}_i(x)$. When $c = 0$ we have $w_i = 1/q$, thereby ignoring any difference between the $\bar{r}_i$'s. On the other hand, when $c$ is large a change in $\bar{r}_i$ will be exponentially reflected in $w_i$. In this case, $w_i$ is said to follow the Boltzmann distribution. The exponential weighting is more sensitive to changes in local feature relevance (3) and gives rise to better performance improvement. Thus, (4) can be used as weights associated with features for the weighted distance computation

$$D(x, y) = \sqrt{\sum_{i=1}^{q} w_i (x_i - y_i)^2}.$$

These weights enable the neighborhood to elongate along less important feature dimensions and, at the same time, to constrict along the most influential ones. Note that the technique is query-based because the weightings depend on the query [1].

3 Estimation

Since both $\Pr(j|x)$ and $\Pr(j|x_i = z_i)$ in (3) are unknown, we must estimate them using the training data $\{x_n, y_n\}_{n=1}^{N}$ in order for the relevance measure (3) to be useful in practice. Here $y_n \in \{1, \ldots, J\}$. The quantity $\Pr(j|x)$ is estimated by considering a neighborhood $N_1(x)$ centered at $x$:

$$\hat{\Pr}(j|x) = \frac{\sum_{x_n \in N_1(x)} 1(y_n = j)}{\sum_{x_n \in N_1(x)} 1}, \qquad (5)$$

where $1(\cdot)$ is an indicator function that returns 1 when its argument is true, and 0 otherwise. To compute $\Pr(j|x_i = z) = E[\Pr(j|x) \mid x_i = z]$, we introduce a dummy variable $g_j$ such that $g_j|x = 1$ if $y = j$, and $g_j|x = 0$ otherwise, where $j = 1, \ldots, J$.
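To make the weighting concrete, the following sketch (function names ours) implements the exponential scheme (4) and the resulting weighted distance; `rbar` holds the local relevance values $\bar{r}_i(x_0)$ of (3), so smaller values, i.e., more relevant features, receive larger weights:

```python
import math

def exp_weights(rbar, c):
    """Exponential weighting scheme (4): w_i proportional to exp(c * R_i)
    with R_i = max_j rbar_j - rbar_i; the weights sum to one."""
    rmax = max(rbar)
    e = [math.exp(c * (rmax - r)) for r in rbar]
    s = sum(e)
    return [v / s for v in e]

def weighted_dist(w, x, y):
    """Weighted Euclidean distance D(x, y) = sqrt(sum_i w_i (x_i - y_i)^2)."""
    return math.sqrt(sum(wi * (xi - yi) ** 2
                         for wi, xi, yi in zip(w, x, y)))
```

Setting c = 0 recovers uniform weights 1/q; a large c concentrates almost all weight on the most relevant dimension, which elongates the neighborhood along the remaining ones.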
We then have $\Pr(j|x) = E[g_j|x]$, from which it is not hard to show that $\Pr(j|x_i = z) = E[g_j|x_i = z]$. However, since there may not be any data at $x_i = z$, the data from the neighborhood of $x$ along dimension $i$ are used to estimate $E[g_j|x_i = z]$, a strategy suggested in [7]. In detail, by noticing $g_j = 1(y = j)$, the estimate can be computed from

$$\hat{\Pr}(j|x_i = z_i) = \frac{\sum_{x_n \in N_2(x)} 1(|x_{ni} - x_i| \le \Delta_i)\, 1(y_n = j)}{\sum_{x_n \in N_2(x)} 1(|x_{ni} - x_i| \le \Delta_i)}, \qquad (6)$$

where $N_2(x)$ is a neighborhood centered at $x$ (larger than $N_1(x)$), and the value of $\Delta_i$ is chosen so that the interval contains a fixed number $L$ of points: $\sum_{n=1}^{N} 1(|x_{ni} - x_i| \le \Delta_i)\, 1(x_n \in N_2(x)) = L$. Using the estimates in (5) and (6), we obtain an empirical measure of the relevance (3) for each input variable $i$.

4 Empirical Results

In the following we compare several classification methods using real data: (1) the adaptive metric nearest neighbor (ADAMENN) method (one iteration) described above, coupled with the exponential weighting scheme (4); (2) i-ADAMENN, ADAMENN with five iterations; (3) the simple K-NN method using the Euclidean distance measure; (4) the C4.5 decision tree method [12]; (5) Machete [7], an adaptive NN procedure in which the input variable used for splitting at each step is the one that maximizes the estimated local relevance (7); (6) Scythe [7], a generalization of the Machete algorithm in which the input variables influence each split in proportion to their estimated local relevance, rather than the winner-take-all strategy of Machete; (7) DANN, discriminant adaptive nearest neighbor classification [8]; and (8) i-DANN, DANN with five iterations [8]. In all the experiments, the features are first normalized over the training data to have zero mean and unit variance, and the test data are normalized using the corresponding training mean and variance. Procedural parameters for each method were determined empirically through cross-validation.

Table 1: Average classification error rates.
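The two estimators can be sketched as follows (function names ours; for simplicity, $\Delta_i$ is passed in directly rather than chosen to capture exactly $L$ points):

```python
def est_posterior(j, labels):
    """Eq. (5): fraction of the points in N1(x) carrying class label j."""
    return sum(1 for y in labels if y == j) / len(labels)

def est_posterior_cond(j, i, x, data, delta_i):
    """Eq. (6): estimate Pr(j | x_i) from the points of N2(x) whose i-th
    coordinate lies within delta_i of x_i.
    data is a list of (point, label) pairs drawn from N2(x)."""
    hits = [yn for xn, yn in data if abs(xn[i] - x[i]) <= delta_i]
    if not hits:
        return 0.0  # no data in the interval; the paper's choice of delta_i avoids this
    return sum(1 for yn in hits if yn == j) / len(hits)
```

Plugging these two estimates into (2) and (3) yields the empirical relevance values that drive the weighting scheme (4).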
Method      Iris  Sonar  Vowel  Glass  Image  Seg  Letter  Liver  Lung
ADAMENN      3.0    9.1   10.7   24.8    5.2  2.4     5.1   30.7  40.6
i-ADAMENN    5.0    9.6   10.9   24.8    5.2  2.5     5.3   30.4  40.6
K-NN         6.0   12.5   11.8   28.0    6.1  3.6     6.9   32.5  50.0
C4.5         8.0   23.1   36.7   31.8   21.6  3.7    16.4   38.3  59.4
Machete      5.0   21.2   20.2   28.0   12.3  3.2     9.1   27.5  50.0
Scythe       4.0   16.3   15.5   27.1    5.0  3.3     7.2   27.5  50.0
DANN         6.0    7.7   12.5   27.1   12.9  2.5     3.1   30.1  46.9
i-DANN       6.0    9.1   21.8   26.6   18.1  3.7     6.1   27.8  40.6

Classification Data Sets. The data sets used were taken from the UCI Machine Learning Database Repository [10], except for the unreleased image data set. They are:

1. Iris data. This data set consists of q = 4 measurements made on each of N = 100 iris plants of J = 2 species;
2. Sonar data. This data set consists of q = 60 frequency measurements made on each of N = 208 data of J = 2 classes ("mines" and "rocks");
3. Vowel data. This example has q = 10 measurements and 11 classes. There are a total of N = 528 samples in this example;
4. Glass data. This data set consists of q = 9 chemical attributes measured for each of N = 214 data of J = 6 classes;
5. Image data. This data set consists of 40 texture images that are manually classified into 15 classes. The number of images in each class varies from 16 to 80. The images in this database are represented by q = 16 dimensional feature vectors;
6. Seg data. This data set consists of images that were drawn randomly from a database of 7 outdoor images. There are J = 7 classes, each of which has 330 instances. Thus, there are N = 2,310 images in the database. These images are represented by q = 19 real-valued attributes;
7. Letter data. This data set consists of q = 16 numerical attributes and J = 26 classes;
8. Liver data. This data set consists of 345 instances, represented by q = 6 numerical attributes, and J = 2 classes; and
9. Lung data. This example has 32 instances having q = 56 numerical features and J = 3 classes.
Results: Table 1 shows the (cross-validated) error rates for the eight methods under consideration on the nine real data sets. Note that the average error rates for the Iris, Sonar, Glass, Liver and Lung data sets were based on leave-one-out cross-validation, whereas the error rates for the Vowel and Image data were based on ten two-fold cross-validations, and two ten-fold cross-validations for the Seg and Letter data, since larger data sets are available in these four cases. Table 1 shows clearly that ADAMENN achieved the best or near-best performance over the nine real data sets, followed by i-ADAMENN.

Figure 1: Performance distributions.

It seems natural to ask the question of robustness. That is, how well does a particular method $m$ perform on average in situations that are most favorable to other procedures? Following Friedman [7], we capture robustness by computing the ratio $b_m$ of its error rate $e_m$ and the smallest error rate over all methods being compared in a particular example:

$$b_m = e_m \Big/ \min_{1 \le k \le 8} e_k.$$

Thus, the best method $m^*$ for that example has $b_{m^*} = 1$, and all other methods have larger values $b_m \ge 1$, for $m \ne m^*$. The larger the value of $b_m$, the worse the performance of the $m$th method is in relation to the best one for that example, among the methods being compared. The distribution of the $b_m$ values for each method $m$ over all the examples, therefore, seems to be a good indicator of robustness. Fig. 1 plots the distribution of $b_m$ for each method over the nine data sets. The dark area represents the lower and upper quartiles of the distribution, which are separated by the median. The outer vertical lines show the entire range of values for the distribution. It is clear that the most robust method over the data sets is ADAMENN. In 5/9 of the data sets its error rate was the best (median = 1.0).
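The robustness statistic $b_m$ is straightforward to recompute from any aligned table of error rates; a sketch (function name ours):

```python
def robustness_ratios(error_rates):
    """Friedman's robustness measure: for each data set s and method m,
    b_m = e_m(s) / min_k e_k(s).  error_rates maps a method name to a
    list of error rates, one per data set, aligned across methods."""
    n_sets = len(next(iter(error_rates.values())))
    best = [min(errs[s] for errs in error_rates.values())
            for s in range(n_sets)]
    return {m: [errs[s] / best[s] for s in range(n_sets)]
            for m, errs in error_rates.items()}
```

The best method on a data set gets $b_m = 1$; plotting the per-method distribution of these ratios reproduces the quartile summary of Fig. 1.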
In 8/9 of them it was no worse than 18% higher than the best error rate. In the worst case it was 65%. In contrast, C4.5 has the worst distribution, where the corresponding numbers are 267%, 432% and 529%.

Bias and Variance Calculations: For a two-class problem with $\Pr(y = 1|x) = p(x)$, we compute a nearest neighborhood at a query $x_0$ and find the nearest neighbor $X$ having class label $Y(X)$ (a random variable). The estimate of $p(x_0)$ is $Y(X)$. The bias and variance of $Y(X)$ are $\text{Bias} = E[p(X)] - p(x_0)$ and $\text{Var} = E[p(X)]\,(1 - E[p(X)])$, where the expectation is computed over the distribution of the nearest neighbor $X$ [8]. We performed simulations to estimate the bias and variance of ADAMENN, K-NN, DANN and Machete on the following two-class problem. There are $q = 2$ input features and 180 training data. Each class contains three spherical bivariate normal subclasses, having standard deviation 0.75. The means of the 6 subclasses are chosen at random without replacement from the integers $[1, 2, \ldots, 8] \times [1, 2, \ldots, 8]$. For each class, data are evenly drawn from each of the normal subclasses. Fig. 2 shows the bias and variance estimates from each method at locations $(5, 5, 0, \ldots, 0)$ and $(2.3, 7, 0, \ldots, 0)$, as a function of the number of noise variables, over five independently generated training sets. Here the noise variables have independent standard Gaussian distributions. The true probabilities of class 1 at $(5, 5, 0, \ldots, 0)$ and $(2.3, 7, 0, \ldots, 0)$ are 0.943 and 0.747, respectively. The four methods have similar variance, since they all use three neighbors for classification. While the bias of K-NN and DANN increases with an increasing number of noise variables, ADAMENN retains a low bias by averaging out noise.

5 Related Work

Friedman [7] describes an approach to learning local feature relevance that recursively homes in on a query along the most (locally) relevant dimension, where local relevance is computed from a reduction in prediction error given the query's value along that dimension.
This method performs well on a number of classification tasks. In our notation, local relevance can be described by

$$I_i^2(x) = \sum_{j=1}^{J} \left( \Pr(j) - \Pr(j|x_i = z_i) \right)^2, \qquad (7)$$

where $\Pr(j)$ represents the expected value of $\Pr(j|x)$. In this case, the most informative dimension is the one that deviates the most from $\Pr(j)$.

Figure 2: Bias and variance estimates for ADAMENN, DANN and Machete, plotted against the number of noise variables; panels (a)-(c) are for the test point (5, 5) and panels (d)-(f) for the test point (2.3, 7).

The main difference, however, between our relevance measure (3) and Friedman's (7) is the first term in the squared difference. While the class conditional probability is used in our relevance measure, its expectation is used in Friedman's. As a result, a feature dimension is more relevant than others when it minimizes (2) in the case of our relevance measure, whereas it maximizes (7) in the case of Friedman's. Furthermore, we take into account not only the test point $x_0$ itself but also its $K$ nearest neighbors, resulting in a relevance measure (3) that is often more robust. In [8], Hastie and Tibshirani propose an adaptive nearest neighbor classification method based on linear discriminant analysis. The method computes a distance metric as a product of properly weighted within and between sum-of-squares matrices. They show that the resulting metric approximates the Chi-squared distance (1) by a Taylor series expansion. While sound in theory, the method has limitations.
The main concern is that in high dimensions we may never have sufficient data to fill in the $q \times q$ matrices. It is interesting to note that our work can serve as a potential bridge between Friedman's work and that of Hastie and Tibshirani.

6 Summary and Conclusions

This paper presents an adaptive metric method for effective pattern classification. This method estimates a flexible metric for producing neighborhoods that are elongated along less relevant feature dimensions and constricted along most influential ones. As a result, the class conditional probabilities tend to be more homogeneous in the modified neighborhoods. The experimental results show clearly that the ADAMENN algorithm can potentially improve the performance of K-NN and recursive partitioning methods in some classification problems, especially when the relative influence of input features changes with the location of the query to be classified in the input feature space. The results are also in favor of ADAMENN over similar competing methods such as Machete and DANN.

References

[1] Atkeson, C., Moore, A.W., and Schaal, S. (1997). "Locally Weighted Learning," AI Review, 11:11-73.
[2] Bellman, R.E. (1961). Adaptive Control Processes. Princeton Univ. Press.
[3] Cleveland, W.S. and Devlin, S.J. (1988). "Locally Weighted Regression: An Approach to Regression Analysis by Local Fitting," J. Amer. Statist. Assoc., 83:596-610.
[4] Cover, T.M. and Hart, P.E. (1967). "Nearest Neighbor Pattern Classification," IEEE Trans. on Information Theory, pp. 21-27.
[5] Domeniconi, C., Peng, J., and Gunopulos, D. (2000). "Adaptive Metric Nearest Neighbor Classification," Proc. of IEEE Conf. on CVPR, pp. 517-522, Hilton Head Island, South Carolina.
[6] Duda, R.O. and Hart, P.E. (1973). Pattern Classification and Scene Analysis. John Wiley & Sons, Inc.
[7] Friedman, J.H. (1994). "Flexible Metric Nearest Neighbor Classification," Tech. Report, Dept. of Statistics, Stanford University.
[8] Hastie, T. and Tibshirani, R. (1996).
"Discriminant Adaptive Nearest Neighbor Classification," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 18, No. 6, pp. 607-615.
[9] Lowe, D.G. (1995). "Similarity Metric Learning for a Variable-Kernel Classifier," Neural Computation, 7(1):72-85.
[10] Merz, C. and Murphy, P. (1996). UCI Repository of Machine Learning Databases. http://www.ics.uci.edu/mlearn/MLRepository.html.
[11] Myles, J.P. and Hand, D.J. (1990). "The Multi-Class Metric Problem in Nearest Neighbour Discrimination Rules," Pattern Recognition, Vol. 23, pp. 1291-1297.
[12] Quinlan, J.R. (1993). C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, Inc.
On iterative Krylov-dogleg trust-region steps for solving neural networks nonlinear least squares problems

Eiji Mizutani, Department of Computer Science, National Tsing Hua University, Hsinchu, 30043 Taiwan R.O.C., eiji@wayne.cs.nthu.edu.tw
James W. Demmel, Mathematics and Computer Science, University of California at Berkeley, Berkeley, CA 94720 USA, demmel@cs.berkeley.edu

Abstract

This paper describes a method of dogleg trust-region steps, or restricted Levenberg-Marquardt steps, based on a projection process onto the Krylov subspaces for neural networks nonlinear least squares problems. In particular, the linear conjugate gradient (CG) method works as the inner iterative algorithm for solving the linearized Gauss-Newton normal equation, whereas the outer nonlinear algorithm repeatedly takes so-called "Krylov-dogleg" steps, relying only on matrix-vector multiplication without explicitly forming the Jacobian matrix or the Gauss-Newton model Hessian. That is, our iterative dogleg algorithm can reduce both operation counts and memory space by a factor of $O(n)$ (the number of parameters) in comparison with a direct linear-equation solver. This memory-less property is useful for large-scale problems.

1 Introduction

We consider the so-called neural networks nonlinear least squares problem,1 wherein the objective is to optimize the $n$ weight parameters of a neural network (NN) [e.g., a multilayer perceptron (MLP)], denoted by an $n$-dimensional vector $\theta$, by minimizing the following:

$$E(\theta) = \frac{1}{2} \sum_{p=1}^{m} \left( a_p(\theta) - t_p \right)^2 = \frac{1}{2} \| r(\theta) \|^2, \qquad (1)$$

where $a_p(\theta)$ is the MLP output for the $p$th training data pattern and $t_p$ is the desired output. (Of course, these become vectors for a multiple-output MLP.) Here $r(\theta)$ denotes the $m$-dimensional residual vector composed of $r_i(\theta)$, $i = 1, \ldots, m$, for all $m$ training data.

1 The posed problem can be viewed as an implicitly constrained optimization problem as long as hidden-node outputs are produced by sigmoidal "squashing" functions [1].
Our algorithm exploits the special structure of the sum-of-squared-error measure in Equation (1); hence, other objective functions are outside the scope of this paper. The gradient vector and Hessian matrix are given by $g = g(\theta) \equiv J^T r$ and $H = H(\theta) \equiv J^T J + S$, where $J$ is the $m \times n$ Jacobian matrix of $r$, and $S$ denotes the matrix of second-derivative terms. If $S$ is simply omitted based on the "small residual" assumption, then the Hessian matrix reduces to the Gauss-Newton model Hessian, i.e., $J^T J$. Furthermore, a family of quasi-Newton methods can be applied to approximate the term $S$ alone, leading to the augmented Gauss-Newton model Hessian (see, for example, Mizutani [2] and references therein). With any form of the aforementioned Hessian matrices, we can collectively write the following Newton formula to determine the next step $\delta$ in the course of the Newton iteration for $\theta_{\text{next}} = \theta_{\text{now}} + \delta$:

$$H\delta = -g. \qquad (2)$$

This linear system can be solved by a direct solver in conjunction with a suitable matrix factorization. However, typical criticisms of the direct algorithm are:

- It is expensive to form and solve the linear equation (2), which requires $O(mn^2)$ operations when $m > n$;
- It is expensive to store the (symmetric) Hessian matrix $H$, which requires $\frac{n(n+1)}{2}$ memory storage.

These issues may become much more serious for a large-scale problem. In light of the vast literature on nonlinear optimization, this paper describes how to alleviate these concerns, attempting to solve the Newton formula (2) approximately by iterative methods, which form a family of inexact (or truncated) Newton methods (see Dembo & Steihaug [3], for instance). An important subclass of the inexact Newton methods are Newton-Krylov methods.
In particular, this paper focuses on a Newton-CG-type algorithm, wherein the linear Gauss-Newton normal equation,

$$J^T J \delta = -J^T r, \qquad (3)$$

is solved iteratively by the linear conjugate gradient method (known as CGNR) for a dogleg trust-region implementation of the well-known Levenberg-Marquardt algorithm; hence the name "dogleg trust-region Gauss-Newton-CGNR" algorithm, or "iterative Krylov-dogleg" method (similar to Steihaug [4]; Toint [5]).

2 Direct Dogleg Trust-Region Algorithms

In the NN literature, several variants of the Levenberg-Marquardt algorithm equipped with a direct linear-equation solver, particularly Marquardt's original method, have been recognized as instrumental and promising techniques; see, for example, Demuth & Beale [6]; Masters [7]; Shepherd [8]. They are based on a simple direct control of the Levenberg-Marquardt parameter $\mu$ in $(H + \mu I)\delta = -g$, although such a simple $\mu$-control can cause a number of problems because of the complicated relation between the parameter $\mu$ and its associated step length (see Mizutani [9]). Alternatively, a more efficient dogleg algorithm [10] can be employed that takes, depending on the size of the trust region $R$, the Newton step $\delta_{\text{Newton}}$ [i.e., the solution of Eq. (2)], the (restricted) Cauchy step $\delta_{\text{Cauchy}}$, or an intermediate dogleg step:

$$\delta_{\text{dogleg}} \stackrel{\text{def}}{=} \delta_{\text{Cauchy}} + h\,(\delta_{\text{Newton}} - \delta_{\text{Cauchy}}), \qquad (4)$$

which achieves a piecewise-linear approximation to a trust-region step, or a restricted Levenberg-Marquardt step. Note that $\delta_{\text{Cauchy}}$ is the step that minimizes the local quadratic model in the steepest descent direction [i.e., Eq. (8) with $k = 1$]. For details on Equation (4), refer to Powell [10]; Mizutani [9, 2].
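The dogleg step selection of Eq. (4) can be sketched as follows (function names ours): take the full Newton step when it fits inside the trust region, the restricted Cauchy step when even the Cauchy step does not, and otherwise solve the scalar quadratic $\|\delta_{\text{Cauchy}} + h(\delta_{\text{Newton}} - \delta_{\text{Cauchy}})\| = R$ for $h \in (0, 1)$:

```python
import math

def _norm(v):
    return math.sqrt(sum(x * x for x in v))

def dogleg_step(dc, dn, radius):
    """Dogleg step of Eq. (4) given the Cauchy step dc, the Newton step dn,
    and the trust-region radius."""
    if _norm(dn) <= radius:
        return list(dn)                       # full Newton step fits
    if _norm(dc) >= radius:
        s = radius / _norm(dc)
        return [s * x for x in dc]            # restricted Cauchy step
    # intermediate case: solve ||dc + h (dn - dc)||^2 = radius^2 for h
    diff = [b - a for a, b in zip(dc, dn)]
    aa = sum(x * x for x in diff)
    bb = 2.0 * sum(a * x for a, x in zip(dc, diff))
    cc = sum(a * a for a in dc) - radius ** 2
    h = (-bb + math.sqrt(bb * bb - 4.0 * aa * cc)) / (2.0 * aa)
    return [a + h * x for a, x in zip(dc, diff)]
```

Since cc < 0 in the intermediate branch, the discriminant is positive and the positive root always lands the step exactly on the trust-region boundary.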
When we consider the Gauss-Newton step for $\delta_{\text{Newton}}$ in Equation (4), we must solve the overdetermined linear least squares problem $\text{minimize}_{\delta}\, \|r + J\delta\|_2$, for which three principal direct linear-equation solvers are: (1) the normal equation approach (typically with Cholesky decomposition); (2) the QR decomposition approach to $J\delta = -r$; and (3) the singular value decomposition (SVD) approach to $J\delta = -r$ (only recommended when $J$ is nearly rank-deficient). Among these three direct solvers, approach (1) applied to Equation (3) is fastest. (For more details, refer to Demmel [11], Chapters 2 and 3.) In a highly overdetermined case (with a large data set; i.e., $m \gg n$), the dominant cost in approach (1) is the $mn^2$ operations to form the Gauss-Newton model Hessian by

$$J^T J = \sum_{i=1}^{m} u_i u_i^T, \qquad (5)$$

where $u_i^T$ is the $i$th row vector of $J$. This cost might be prohibitive even with enough storage for $J^T J$. Therefore, to overcome this limitation of direct solvers for Equation (3), we consider an iterative scheme in the next section.

3 Iterative Krylov-Dogleg Algorithm

The iterative Krylov-dogleg step approximates a trust-region step by iteratively approximating the Levenberg-Marquardt trajectory in the Krylov subspace via linear conjugate gradient iterates until the approximate trajectory hits the trust-region boundary, i.e., a CG iterate falls outside the trust-region boundary. In this context, the linear CGNR method is not intended to approximate the full Gauss-Newton step [i.e., the solution of Eq. (3)]. Therefore, the required number of CGNR iterations might be kept small [see Section 4]. The iterative process for the linear-equation solution sequence $\{\delta_k\}$ is called the inner iteration,2 whereas the solution sequence $\{\theta_k\}$ from the Krylov-dogleg algorithm is generated by the outer iteration (or epoch), as shown in Figure 1.
We now describe the inner iteration algorithm, which is identical to the standard linear CG algorithm (see Demmel [11], pages 311-312) except for steps 2, 4, and 5:

Algorithm 3.1: The inner iteration of the Krylov-dogleg algorithm (see Figure 1).

1. Initialization:
$$\delta_0 = 0; \quad d_0 = r_0 = -g_{\text{now}}; \quad k = 1. \qquad (6)$$

2. Matrix-vector product [compare Eq. (5) and see Algorithm 3.2]:
$$z = H_{\text{now}} d_k = J_{\text{now}}^T (J_{\text{now}} d_k) = \sum_{i=1}^{m} (u_i^T d_k)\, u_i. \qquad (7)$$

3. Analytical step size:
$$\eta_k = \frac{r_{k-1}^T r_{k-1}}{d_k^T z}.$$

4. Approximate solution:
$$\delta_k = \delta_{k-1} + \eta_k d_k. \qquad (8)$$
If $\|\delta_k\| < R_{\text{now}}$, then go on to step 5; otherwise compute
$$\delta_k \leftarrow R_{\text{now}} \frac{\delta_k}{\|\delta_k\|} \qquad (9)$$
and terminate.

5. Linear-system residual: $r_k = r_{k-1} - \eta_k z$. If $\|r_k\|_2$ is small enough, then set $R_{\text{now}} \leftarrow \|\delta_k\|$ and terminate. Otherwise, continue with step 6.

6. Improvement:
$$\beta_{k+1} = \frac{r_k^T r_k}{r_{k-1}^T r_{k-1}}.$$

7. Search direction: $d_{k+1} = r_k + \beta_{k+1} d_k$. Then set $k = k + 1$ and go back to step 2.

2 Nonlinear conjugate gradient methods, such as Polak-Ribière's CG (see Mizutani and Jang [13]) and Møller's scaled CG [14], are also widely employed for training MLPs, but those nonlinear versions attempt to approximate the entire Hessian matrix by generating the solution sequence $\{\theta_k\}$ directly as the outer nonlinear algorithm. Thus, they ignore the special structure of the nonlinear least squares problem; so does Pearlmutter's method [15] applied to the Newton formula, although its modification may be possible.

Figure 1: The algorithmic flow of an iterative Krylov-dogleg algorithm. For detailed procedures in the three dotted rectangular boxes, refer to Mizutani and Demmel [12] and Algorithm 3.1 in the text.
The first step given by Equation (8) is always the Cauchy step δ_Cauchy, moving θ_now to the Cauchy point θ_Cauchy when R_now > ||δ_Cauchy||. Then, departing from θ_Cauchy, the linear CG constructs a Krylov-dogleg trajectory (by adding CG points one by one) towards the Gauss-Newton point θ_Newton until the constructed trajectory hits the trust-region boundary (i.e., ||δ_k|| ≥ R_now is satisfied in step 4), or till the linear-system residual becomes small in step 5 (unlikely to occur for small forcing terms; e.g., 0.01). In this way, the algorithm computes a vector between the steepest descent direction and the Gauss-Newton direction, resulting in an approximate Levenberg-Marquardt step in the Krylov subspace. In step 2, the matrix-vector multiplication H d_k in Equation (7) can be performed with neither the Jacobian nor Hessian matrices explicitly required, keeping only several n-dimensional vectors in memory at the same time, as shown next:

Algorithm 3.2: Matrix-vector multiplication step.

for i = 1 to m (i.e., one sweep of all training data):
(a) do forward propagation to compute the MLP output a_i(θ) for datum i;
(b) do backpropagation³ to obtain the ith row vector u_i^T of matrix J;
(c) compute (u_i^T d_k) u_i and add it to z;
end for.

For one sweep of all m data, each of steps (a) and (b) costs at least 2mn (plus additional costs that depend on the MLP architecture) and step (c) [i.e., Eq. (7)] costs 4mn. Hence, the overall cost of the inner iteration (Algorithm 3.1) can be kept at O(mn), especially when the number of inner iterations is small owing to our strategy of upper-bounded trust-region radii (e.g., R_upper = 1 for the parity problem). Note for the "Algorithm for local-model check" in Figure 1 that evaluating ν_now (a ratio between the actual error reduction and the reduction predicted by the current local quadratic model) needs a procedure similar to Algorithm 3.2. For more details on the algorithm in Figure 1, refer to Mizutani and Demmel [12].
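The row-wise accumulation at the heart of Algorithm 3.2 can be sketched as follows (a toy illustration; in the paper each row u_i would come from one forward/backward pass of the MLP, abstracted here as a generator of Jacobian rows):

```python
def gauss_newton_matvec(rows, d):
    """Compute z = J^T J d = sum_i (u_i^T d) u_i one row at a time.

    `rows` yields the Jacobian rows u_i; only O(n) vectors are ever held
    in memory, never the m x n Jacobian or the n x n Hessian itself."""
    z = None
    for u in rows:
        s = sum(ui * di for ui, di in zip(u, d))   # scalar u_i^T d
        if z is None:
            z = [s * ui for ui in u]               # first row initializes z
        else:
            z = [zi + s * ui for zi, ui in zip(z, u)]
    return z
```

Each row contributes the 4n operations of step (c), so one sweep costs O(mn), matching the count in the text.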
4 Experiments and Discussions

In the NN literature, numerous algorithmic comparisons are available (see, for example, Moller [14]; Demuth & Beale [6]; Shepherd [8]; Mizutani [2, 9, 16]). Due to space limitations, this section compares typical behaviors of our Krylov-dogleg Gauss-Newton CGNR (or iterative dogleg) algorithm and Powell's dogleg-based algorithm with a direct linear-equation solver (or direct dogleg) for solving highly overdetermined parity problems. In our numerical tests, we used a criterion in which the MLP output for the pth pattern, a_p, can be regarded as "on" (1.0) if a_p ≥ 0.8, or "off" (-1.0) if a_p ≤ -0.8; otherwise, it is "undecided." The initial parameter set was randomly generated in the range [-0.3, 0.3], and the two algorithms started at exactly the same point in parameter space. Figure 2 presents MLP-learning curves in RMSE (root mean squared error) for the 20-bit and 14-bit parity problems. In (b) and (c), the total execution time [roughly (b) 32 days (500 epochs); (c) two hours (450 epochs), both on a 299-MHz UltraSparc] of the direct dogleg algorithm was normalized for comparison purposes.

³The batch-mode MLP backpropagation can be viewed as an efficient matrix-vector multiplication (2mn operations) for computing the gradient J^T r without explicitly forming the m × n Jacobian matrix or the m-dimensional residual vector (with some extra costs).

[Figure 2 plots omitted.] Figure 2: MLP-learning curves of RMSE (root mean squared error) obtained by the "iterative dogleg" (solid line) and the "direct dogleg" (broken line): (a) "epoch" and (b) "normalized execution time" for the 20-bit parity problem with a standard 20 × 19 × 1 MLP with hyperbolic tangent node functions (m = 2^20, n = 419), and (c) "normalized execution time" for the 14-bit parity problem with a 14 × 13 × 1 MLP (m = 2^14, n = 209). In (a), (b), the iterative dogleg reduced the number of incorrect patterns down to 21 (nearly RMSE = 0.009) at epoch 838, whereas the direct dogleg reached the same error level at epoch 388. In (c), the iterative dogleg solved it perfectly at epoch 1,034 and the direct dogleg did so at epoch 401.

Notably, the iterative dogleg converged faster to a small RMSE⁴ than the direct dogleg at an early stage of learning, even with respect to epoch. Moreover, the average number of inner CG iterations per epoch in the iterative dogleg algorithm was quite small: 5.53 for (b) and 4.61 for (c). Thus, the iterative dogleg worked nearly (b) nine times and (c) four times faster than the direct dogleg in terms of the average execution time per epoch. Those speed-up ratios became smaller than n mainly due to the aforementioned cost of Algorithm 3.2. Yet, as n increases, the speed-up ratio can be larger, especially when the number of inner iterations is reasonably small.

5 Conclusion and Future Directions

We have compared two batch-mode MLP-learning algorithms: iterative and direct dogleg trust-region algorithms. Although such a high-dimensional parity problem is very special in the sense that it involves a large data set while the size of the MLP can be kept relatively small, the algorithmic features of the two dogleg methods can be well understood from the obtained experimental results. That is, the iterative dogleg has the great advantage of reducing the cost of an epoch from O(mn²) to O(mn), and the memory requirements from O(n²) to O(n), a factor of O(n) in both cases.
When n is large, this is a very large improvement. It also has the advantage of faster convergence in the early epochs, achieving a lower RMSE after fewer epochs than the direct dogleg. Its disadvantage is that it may need more epochs to converge to a very small RMSE than the direct dogleg (although it might work faster in execution time). Thus, the iterative dogleg is most attractive when attempting to achieve a reasonably small RMSE on very large problems in a short period of time. The iterative dogleg is a matrix-free algorithm that extracts information about the Hessian matrix via matrix-vector multiplication; this algorithm might be characterized as iterative batch-mode learning, an intermediate between direct batch-mode learning and online pattern-by-pattern learning. Furthermore, the algorithm might be implemented in a block-by-block updating mode if a large data set can be split into multiple proper-size data blocks; so it would be of great interest to compare its performance with online-mode learning algorithms for solving large-scale real-world problems with a large-scale NN model.

⁴A standard steepest descent-type online pattern-by-pattern learning (or incremental gradient) algorithm (with or without a momentum term) failed to converge to a small RMSE in those parity problems due to hidden-node saturation [1].

Acknowledgments

We would like to thank Stuart Dreyfus (IEOR, UC Berkeley) and Rich Vuduc (CS, UC Berkeley) for their valuable advice. The work was supported in part by SONY US Research Labs, and in part by the "Program for Promoting Academic Excellence of Universities," grant 89-E-FA04-1-4, Ministry of Education, Taiwan.

References

[1] E. Mizutani, S. E. Dreyfus, and J.-S. R. Jang. On dynamic programming-like recursive gradient formula for alleviating hidden-node saturation in the parity problem. In Proceedings of the International Workshop on Intelligent Systems Resolutions - the 8th Bellman Continuum, pages 100-104, Hsinchu, Taiwan, 2000.
[2] Eiji Mizutani. Powell's dogleg trust-region steps with the quasi-Newton augmented Hessian for neural nonlinear least-squares learning. In Proceedings of the IEEE Int'l Conf. on Neural Networks (vol. 2), pages 1239-1244, Washington, D.C., July 1999.
[3] R. S. Dembo and T. Steihaug. Truncated-Newton algorithms for large-scale unconstrained optimization. Math. Prog., 26:190-212, 1983.
[4] Trond Steihaug. The conjugate gradient method and trust regions in large scale optimization. SIAM J. Numer. Anal., 20(3):626-637, 1983.
[5] P. L. Toint. On large scale nonlinear least squares calculations. SIAM J. Sci. Statist. Comput., 8(3):416-435, 1987.
[6] H. Demuth and M. Beale. Neural Network Toolbox for Use with MATLAB. The MathWorks, Inc., Natick, Massachusetts, 1998. User's Guide (version 3.0).
[7] Timothy Masters. Advanced Algorithms for Neural Networks: A C++ Sourcebook. John Wiley & Sons, New York, 1995.
[8] Adrian J. Shepherd. Second-Order Methods for Neural Networks: Fast and Reliable Training Methods for Multi-Layer Perceptrons. Springer-Verlag, 1997.
[9] Eiji Mizutani. Computing Powell's dogleg steps for solving adaptive networks nonlinear least-squares problems. In Proc. of the 8th Int'l Fuzzy Systems Association World Congress (IFSA '99), vol. 2, pages 959-963, Hsinchu, Taiwan, August 1999.
[10] M. J. D. Powell. A new algorithm for unconstrained optimization. In Nonlinear Programming, pages 31-65. Edited by J. B. Rosen et al., Academic Press, 1970.
[11] James W. Demmel. Applied Numerical Linear Algebra. SIAM, 1997.
[12] Eiji Mizutani and James W. Demmel. On generalized dogleg trust-region steps using the Krylov subspace for solving neural networks nonlinear least squares problems. Technical report, Computer Science Dept., UC Berkeley, 2001. (In preparation.)
[13] E. Mizutani and J.-S. R. Jang. Chapter 6: Derivative-based Optimization. In Neuro-Fuzzy and Soft Computing, pages 129-172. J.-S. R. Jang, C.-T. Sun and E. Mizutani. Prentice Hall, 1997.
[14] Martin Fodslette Moller. A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks, 6:525-533, 1993.
[15] B. A. Pearlmutter. Fast exact multiplication by the Hessian. Neural Computation, 6(1):147-160, 1994.
[16] E. Mizutani, K. Nishio, N. Katoh, and M. Blasgen. Color device characterization of electronic cameras by solving adaptive networks nonlinear least squares problems. In Proc. of the 8th IEEE Int'l Conf. on Fuzzy Systems, vol. 2, pages 858-862, 1999.
2000
71
1,874
Stability and noise in biochemical switches

William Bialek
NEC Research Institute
4 Independence Way
Princeton, New Jersey 08540
bialek@research.nj.nec.com

Abstract

Many processes in biology, from the regulation of gene expression in bacteria to memory in the brain, involve switches constructed from networks of biochemical reactions. Crucial molecules are present in small numbers, raising questions about noise and stability. Analysis of noise in simple reaction schemes indicates that switches stable for years and switchable in milliseconds can be built from fewer than one hundred molecules. Prospects for direct tests of this prediction, as well as implications, are discussed.

1 Introduction

The problem of building a reliable switch arises in several different biological contexts. The classical example is the switching on and off of gene expression during development [1], or in simpler systems such as phage λ [2]. It is likely that the cell cycle should also be viewed as a sequence of switching events among discrete states, rather than as a continuously running clock [3]. The stable switching of a specific class of kinase molecules between active and inactive states is believed to play a role in synaptic plasticity, and by implication in the maintenance of stored memories [4]. Although many details of mechanism remain to be discovered, these systems seem to have several common features. First, the stable states of the switches are dissipative, so that they reflect a balance among competing biochemical reactions. Second, the total number of molecules involved in the construction of the switch is not large. Finally, the switch, once flipped, must be stable for a time long compared to the switching time, perhaps (for development and for memory) even for a time comparable to the life of the organism.
Intuitively we might expect that systems with small numbers of molecules would be subject to noise and instability [5], and while this is true we shall see that extremely stable biochemical switches can in fact be built from a few tens of molecules. This has interesting implications for how we think about several cellular processes, and should be testable directly. Many biological molecules can exist in multiple states, and biochemical switches use this molecular multistability so that the state of the switch can be 'read out' by sampling the states (or enzymatic activities) of individual molecules. Nonetheless, these biochemical switches are based on a network of reactions, with stable states that are collective properties of the network dynamics and not of any individual molecule. Most previous work on the properties of biochemical reaction networks has involved detailed simulation of particular kinetic schemes [6], for example in discussing the kinase switch that is involved in synaptic plasticity [7]. Even the problem of noise has been discussed heuristically in this context [8]. The goal in the present analysis is to separate the problem of noise and stability from other issues, and to see if it is possible to make some general statements about the limits to stability in switches built from a small number of molecules. This effort should be seen as being in the same spirit as recent work on bacterial chemotaxis, where the goal was to understand how certain features of the computations involved in signal processing can emerge robustly from the network of biochemical reactions, independent of kinetic details [9].

2 Stochastic kinetic equations

Imagine that we write down the kinetic equations for some set of biochemical reactions which describe the putative switch. Now let us assume that most of the reactions are fast, so that there is a single molecular species whose concentration varies more slowly than all the others.
Then the dynamics of the switch essentially are one dimensional, and this simplification allows a complete discussion using standard analytical methods. In particular, in this limit there are general bounds on the stability of switches, and these bounds are independent of (incompletely known) details in the biochemical kinetics. It should be possible to make progress on multidimensional versions of the problem, but the point here is to show that there exists a limit in which stable switches can be built from small numbers of molecules. Let the number of molecules of the 'slow species' be n. All the different reactions can be broken into two classes: the synthesis of the slow species at a rate f(n) molecules per second, and its degradation at a rate g(n) molecules per second; the dependencies on n can be complicated because they include the effects of all other species in the system. Then, if we could neglect fluctuations, we would write the effective kinetic equation

dn/dt = f(n) - g(n).   (1)

If the system is to function as a switch, then the stationarity condition f(n) = g(n) must have multiple solutions with appropriate local stability properties. The fact that molecules are discrete units means that we need to give the chemical kinetic Eq. (1) another interpretation. It is the mean field approximation to a stochastic process in which there is a probability per unit time f(n) of making the transition n → n+1, and a probability per unit time g(n) of the opposite transition n → n-1. Thus if we consider the probability P(n, t) for there being n molecules at time t, this distribution obeys the evolution (or 'master') equation

∂P(n, t)/∂t = f(n-1)P(n-1, t) + g(n+1)P(n+1, t) - [f(n) + g(n)]P(n, t),   (2)

with obvious corrections for n = 0, 1. We are interested in the effects of stochasticity for n not too small. Then 1 is small compared with typical values of n, and we can approximate P(n, t) as being a smooth function of n. We can expand Eq.
(2) in derivatives of the distribution, and keep the leading terms:

∂P(n, t)/∂t = ∂/∂n { [g(n) - f(n)] P(n, t) + (1/2) ∂/∂n [f(n) + g(n)] P(n, t) }.   (3)

This is analogous to the diffusion equation for a particle moving in a potential, but this analogy works only if we allow the effective temperature to vary with the position of the particle. As with diffusion or Brownian motion, there is an alternative to the diffusion equation for P(n, t), and this is to write an equation of motion for n(t) which supplements Eq. (1) by the addition of a random or Langevin force ξ(t):

dn/dt = f(n) - g(n) + ξ(t),   (4)
⟨ξ(t)ξ(t')⟩ = [f(n) + g(n)] δ(t - t').   (5)

From the Langevin equation we can also develop the distribution functional for the probability of trajectories n(t). It should be emphasized that all of these approaches are equivalent provided that we are careful to treat the spatial variations of the effective temperature [10].¹ In one dimension this complication does not impede solving the problem. For any particular kinetic scheme we can compute the effective potential and temperature, and kinetic schemes with multiple stable states correspond to potential functions with multiple minima.

3 Noise induced switching rates

We want to know how the noise term destabilizes the distinct stable states of the switch. If the noise is small, then by analogy with thermal noise we expect that there will be some small jitter around the stable states, but also some rate of spontaneous jumping between the states, analogous to thermal activation over an energy barrier as in a chemical reaction. This jumping rate should be the product of an "attempt frequency" (of order the relaxation rate in the neighborhood of one stable state) and a "Boltzmann factor" that expresses the exponentially small probability of going over the barrier. For ordinary chemical reactions this Boltzmann factor is just exp(-F‡/k_B T), where F‡ is the activation free energy.
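The one-step master equation (2) can be simulated exactly with Gillespie's stochastic simulation algorithm; a toy sketch (my own illustration, not from the paper), using constant synthesis f(n) = k and linear degradation g(n) = γn, for which the stationary distribution is Poisson with mean k/γ:

```python
import random

def gillespie_birth_death(f, g, n0, t_end, seed=0):
    """Exact simulation of the master equation (2) for a single slow
    species: n -> n+1 at rate f(n), n -> n-1 at rate g(n).
    Returns the time-averaged copy number over [0, t_end].
    Note g(0) must be 0 (as for g(n) = gamma*n) so n stays non-negative."""
    rng = random.Random(seed)
    n, t, area = n0, 0.0, 0.0
    while t < t_end:
        total = f(n) + g(n)                  # total event rate
        dt = rng.expovariate(total)          # exponential waiting time
        area += n * min(dt, t_end - t)       # accumulate time average
        t += dt
        if t >= t_end:
            break
        n += 1 if rng.random() < f(n) / total else -1
    return area / t_end
```

For rates that make the mean-field equation (1) bistable, the same loop exhibits the noise-induced jumps between stable states analyzed in the following section.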
If we want to build a switch that can be stable for a time much longer than the switching time itself, then the Boltzmann factor has to provide this large ratio of time scales. There are several ways to calculate the analog of the Boltzmann factor for the dynamics in Eq. (4). The first step is to make more explicit the analogy with Brownian motion and thermal activation. Recall that Brownian motion of an overdamped particle is described by the Langevin equation

γ dx/dt = -V'(x) + η(t),   (6)

where γ is the drag coefficient of the particle, V(x) is the potential, and the noise force has correlations ⟨η(t)η(t')⟩ = 2γT δ(t - t'), where T is the absolute temperature measured in energy units so that Boltzmann's constant is equal to one. Comparing with Eq. (4), we see that our problem is equivalent to a particle with γ = 1 in an effective potential V_eff(n) such that V'_eff(n) = g(n) - f(n), at an effective temperature T_eff(n) = [f(n) + g(n)]/2. If the temperature were uniform then the equilibrium distribution of n would be P_eq(n) ∝ exp[-V_eff(n)/T_eff]. With nonuniform temperature the result is (up to

¹In a review written for a biological audience, McAdams and Arkin [11] state that Langevin methods are unsound and can yield invalid predictions precisely for the case of bistable reaction systems which interests us here; this is part of their argument for the necessity of stochastic simulation methods as opposed to analytic approaches. Their reference for the failure of Langevin methods [12], however, seems to consider only Langevin terms with constant spectral density, thus ignoring (in the present language) the spatial variations of effective temperature. For the present problem this would mean replacing the noise correlation function [f(n) + g(n)] δ(t - t') in Eq. (5) by Q δ(t - t'), where Q is a constant. This indeed is wrong, and is not equivalent to the master equation. On the other hand, if the arguments of Refs.
[11, 12] were generally correct, they would imply that Langevin methods could not be used for the description of Brownian motion with a spatially varying temperature, and this would be quite a surprise.

weakly varying prefactors)

P(n) ∝ exp[-U(n)],   (7)
U(n) = ∫_0^n dy V'_eff(y) / T_eff(y).   (8)

One way to identify the Boltzmann factor for spontaneous switching is then to compute the relative equilibrium occupancy of the stable states (n_0 and n_1) and the unstable "transition state" at n*. The result is that the effective activation energy for transitions from a stable state at n = n_0 to the stable state at n = n_1 > n_0 is

F‡(n_0 → n_1) = 2 k_B T ∫_{n_0}^{n*} dn [g(n) - f(n)] / [g(n) + f(n)],   (9)

where n* is the unstable point, and similarly for the reverse transition,

F‡(n_1 → n_0) = 2 k_B T ∫_{n*}^{n_1} dn [f(n) - g(n)] / [g(n) + f(n)].   (10)

An alternative approach is to note that the distribution of trajectories n(t) includes locally optimal paths that carry the system from each stable point up to the transition state; the effective activation free energy can then be written as an integral along these optimal paths. The use of optimal path ideas in chemical kinetics has a long history, going back at least to Onsager. A discussion in the spirit of the present one is Ref. [13]. For equations of the general form

dn/dt = -V'_eff(n) + ξ(t),   (11)

with ⟨ξ(t)ξ(t')⟩ = 2 T_eff(t) δ(t - t'), the probability distribution for trajectories P[n(t)] can be written as [10]

P[n(t)] ∝ exp(-S[n(t)]),   (12)
S[n(t)] = (1/4) ∫ dt (1/T_eff(t)) [ṅ(t) + V'_eff(n(t))]² - (1/2) ∫ dt V''_eff(n(t)).   (13)

If the temperature T_eff is small, then the trajectories that minimize the action should be determined primarily by minimizing the first term in Eq. (13), which is ~1/T_eff. Identifying the effective potential and temperature as above, the relevant term is

(1/2) ∫ dt [ṅ - f(n) + g(n)]² / [f(n) + g(n)]
  = (1/2) ∫ dt ṅ² / [f(n) + g(n)] + (1/2) ∫ dt [f(n) - g(n)]² / [f(n) + g(n)] - ∫ dt ṅ [f(n) - g(n)] / [f(n) + g(n)].   (14)

We are searching for trajectories which take n(t) from a stable point n_0, where f(n_0) = g(n_0), through the unstable point n*, where f and g are again equal but the derivative of their difference (the curvature of the potential) has changed sign. For a discussion of the analogous quantum mechanical problem of tunneling in a double well, see Ref. [14]. First we note that along any trajectory from n_0 to n* we can simplify the third term in Eq. (14):

∫ dt ṅ [f(n) - g(n)] / [f(n) + g(n)] = ∫_{n_0}^{n*} dn [f(n) - g(n)] / [f(n) + g(n)].   (15)

This term thus depends on the endpoints of the trajectory and not on the path, and therefore cannot contribute to the structure of the optimal path. In the analogy to mechanics, the first two terms are equivalent to the (Euclidean) action for a particle with position dependent mass in a potential; this means that along extremal trajectories there is a conserved energy

E = (1/2) ṅ² / [f(n) + g(n)] - (1/2) [f(n) - g(n)]² / [f(n) + g(n)].   (16)

At the endpoints of the trajectory we have ṅ = 0 and f(n) = g(n), and so we are looking for zero energy trajectories, along which

ṅ(t) = ±[f(n(t)) - g(n(t))].   (17)

Substituting back into Eq. (14), and being careful about the signs, we find once again Eqs. (9, 10). Both the 'transition state' and the optimal path method involve approximations, but if the noise is not too large the approximations are good and the results of the two methods agree. Yet another approach is to solve the master equation (2) directly, and again one gets the same answer for the switching rate when the noise is small, as expected since all the different approaches are equivalent if we make consistent approximations. It is much more work to find the prefactors of the rates, but we are concerned here with orders of magnitude, and hence the prefactors aren't so important.

4 Interpretation

The crucial thing to notice in this calculation is that the integrands in Eqs.
(9, 10) are bounded by one, so the activation energy (in units of the thermal energy k_B T) is bounded by twice the change in the number of molecules. Translating back to the spontaneous switching rates, the result is that the noise driven switching time is longer than the relaxation time after switching by a factor that is bounded:

(spontaneous switching time) / (relaxation time) < exp(Δn),   (18)

where Δn is the change in the number of molecules required to go from one stable 'switched' state to the other. Imagine that we have a reaction scheme in which the difference between the two stable states corresponds to roughly 25 molecules. Then it is possible to have a Boltzmann factor of up to exp(25) ~ 10^10. Usually we think of this as a limit to stability: with 25 molecules we can have a Boltzmann factor of no more than ~10^10. But here I want to emphasize the positive statement that there exist kinetic schemes in which just 25 molecules would be sufficient to have this level of stability. This corresponds to years per millisecond: with twenty five molecules, a biochemical switch that can flip in milliseconds can be stable for years. Real chemical reaction schemes will not saturate this bound, but certainly such stability is possible with roughly 100 molecules. The genetic switch in λ phage operates with roughly 100 copies of the repressor molecules, and even in this simple system there is extreme stability: the genetic switch is flipped spontaneously only once in 10^5 generations of the host bacterium [2]. Kinetic schemes with greater cooperativity get closer to the bound, achieving greater stability for the same number of molecules. In electronics, the construction of digital elements provides insulation against fluctuations on a microscopic scale and allows a separation between the logical and physical design of a large system.
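The bound behind Eq. (18) can be checked numerically by evaluating the integral in Eq. (9) for a concrete bistable scheme; the Hill-type rates below are my own toy illustration (not from the paper), chosen only to produce two stable states and one unstable point:

```python
def fixed_points(f, g, n_max=30.0, steps=3000):
    """Zeros of f - g, located by sign changes on a grid (midpoint estimate)."""
    pts, prev_n, prev = [], 0.0, f(0.0) - g(0.0)
    for i in range(1, steps + 1):
        n = i * n_max / steps
        cur = f(n) - g(n)
        if prev * cur < 0.0:
            pts.append(0.5 * (prev_n + n))
        prev_n, prev = n, cur
    return pts

def activation_energy(f, g, n_lo, n_hi, steps=2000):
    """F/(k_B T) = 2 * integral_{n_lo}^{n_hi} dn (g - f)/(g + f), as in Eq. (9),
    evaluated by the trapezoid rule."""
    h = (n_hi - n_lo) / steps
    total = 0.0
    for i in range(steps + 1):
        n = n_lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * (g(n) - f(n)) / (g(n) + f(n))
    return 2.0 * h * total

# hypothetical bistable switch: cooperative (Hill) synthesis, linear degradation
f = lambda n: 1.0 + 20.0 * n ** 4 / (10.0 ** 4 + n ** 4)
g = lambda n: 1.0 * n
```

Because the integrand (g - f)/(g + f) never exceeds one, the computed activation energy is always at most 2(n* - n_0), which is the content of the bound.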
We see that, once a cell has access to several tens of molecules, it is possible to construct 'digital' switch elements with dynamics that are no longer significantly affected by microscopic fluctuations. Furthermore, weak interactions of these molecules with other cellular components cannot change the basic 'states' of the switch, although these interactions can couple state changes to other events. The importance of this 'digitization' on the scale of 10-100 molecules is illustrated by different models for pattern formation in development. In the classical model due to Turing, patterns are expressed by spatial variations in the concentration of different molecules, and patterns arise because uniform concentrations are rendered unstable through the combination of nonlinearities in the kinetics with the different diffusion constants of different substances. In this picture, the spatial structure of the pattern is linked directly to physical properties of the molecules. An alternative is that each spatial location is labelled by a set of discrete possible states, and patterns evolve out of the 'automaton' rules by which each location changes state in relation to the neighboring states. In this picture states and rules are more abstract, and the dynamics of pattern formation is really at a different level of description from the molecular dynamics of chemical reactions and diffusion. Reliable implementations of automaton rules apparently are accessible as soon as the relevant chemical reactions involve a few dozen molecules. Biochemical switches have been reconstituted in vitro, but I am not aware of any attempts to verify that stable switching is possible with small numbers of molecules. It would be most interesting to study model systems in which one could confine and monitor sufficiently few molecules that it becomes possible to observe spontaneous switching, that is, the breakdown of stability.
Although genetic switches have certain advantages, even the simplest systems would require the full enzymatic apparatus for gene expression (but see Ref. [16] for recent progress on controllable in vitro expression systems).² Kinase switches are much simpler, since they can be constructed from just a few proteins and can be triggered by calcium; caged calcium allows for an optical pulse to serve as input. At reasonable protein concentrations, 10-100 molecules are found in a volume of roughly 1 (μm)³. Thus it should be possible to fabricate an array of 'cells' with linear dimensions ranging from 100 nm to 10 μm, such that solutions of kinase and accessory proteins would switch stably in the larger cells but exhibit instability and spontaneous switching in the smaller cells. The state of the switch could be read out by including marker proteins that would serve as substrates of the kinase but have, for example, fluorescence lines that are shifted by phosphorylation, or by having fluorescent probes on the kinase itself; transitions of single enzyme molecules should be observable [15]. A related idea would be to construct vesicles containing ligand-gated ion channels which can conduct calcium, and then have inside the vesicle enzymes for synthesis and degradation of the ligand which are calcium sensitive. The cGMP channels of rod photoreceptors are an example, and in rods the cyclase synthesizing cGMP is calcium sensitive, but the sign is wrong to make a switch [17]; presumably this could be solved by appropriate mixing and matching of protein components from different cells. In such a vesicle the different stable states would be distinguished by different

²Note also that reactions involving polymer synthesis (mRNA from DNA or protein from mRNA) are not 'elementary' reactions in the sense described by Eq. (2).
Synthesis of a single mRNA molecule involves thousands of steps, each of which occurs (conditionally) at constant probability per unit time, and so the noise in the overall synthesis reaction is very different. If the synthesis enzymes are highly processive, so that the polymerization apparatus incorporates many monomers into the polymer before 'backing up' or falling off the template, then synthesis itself involves a delay but relatively little noise; the dominant source of noise becomes the assembly and disassembly of the polymerization complex. Thus there is some subtlety in trying to relate a simple model to the complex sequence of reactions involved in gene expression. On the other hand a detailed simulation is problematic, since there are so many different elementary steps with unknown rates. This combination of circumstances would make experiments on a minimal, in vitro genetic switch especially interesting.

levels of internal calcium (as with adaptation states in the rod), and these could be read out optically using calcium indicators; caged calcium would again provide an optical input to flip the switch. Amusingly, a close packed array of such vesicles with ~100 nm dimension would provide an optically addressable and writable memory with storage density comparable to current RAM, albeit with much slower switching. In summary, it should be possible to build stable biochemical switches from a few tens of molecules, and it seems likely that nature makes use of these. To test our understanding of stability we have to construct systems which cross the threshold for observable instabilities, and this seems accessible experimentally in several systems.

Acknowledgments

Thanks to M. Dykman, J. J. Hopfield, and A. J. Libchaber for helpful discussions.

References

1. J. M. W. Slack, From Egg to Embryo: Determinative Events in Early Development (Cambridge University Press, Cambridge, 1983); P. A.
Lawrence, The Making of a Fly: The Genetics of Animal Design (Blackwell Science, Oxford, 1992).
2. M. Ptashne, A Genetic Switch: Phage λ and Higher Organisms, 2nd Edition (Blackwell, Cambridge MA, 1992); A. D. Johnson, A. R. Poteete, G. Lauer, R. T. Sauer, G. K. Ackers, and M. Ptashne, Nature 294, 217-223 (1981).
3. A. W. Murray, Nature 359, 599-604 (1992).
4. S. G. Miller and M. B. Kennedy, Cell 44, 861-870 (1986); M. B. Kennedy, Ann. Rev. Biochem. 63, 571-600 (1994).
5. E. Schrödinger, What is Life? (Cambridge University Press, Cambridge, 1944).
6. H. H. McAdams and A. Arkin, Ann. Rev. Biophys. Biomol. Struct. 27, 199-224 (1998); U. S. Bhalla and R. Iyengar, Science 283, 381-387 (1999).
7. J. E. Lisman, Proc. Nat. Acad. Sci. (USA) 82, 3055-3057 (1985).
8. J. E. Lisman and M. A. Goldring, Proc. Nat. Acad. Sci. (USA) 85, 5320-5324 (1988).
9. N. Barkai and S. Leibler, Nature 387, 913-917 (1997).
10. J. Zinn-Justin, Quantum Field Theory and Critical Phenomena (Clarendon Press, Oxford, 1989).
11. H. H. McAdams and A. Arkin, Trends Genet. 15, 65-69 (1999).
12. F. Baras, M. Malek Mansour and J. E. Pearson, J. Chem. Phys. 105, 8257-8261 (1996).
13. M. I. Dykman, E. Mori, J. Ross, and P. M. Hunt, J. Chem. Phys. 100, 5735-5750 (1994).
14. S. Coleman, Aspects of Symmetry (Cambridge University Press, Cambridge, 1975).
15. H. P. Lu, L. Xun, and X. S. Xie, Science 282, 1877-1882 (1998); T. Ha, A. Y. Ting, J. Liang, W. B. Caldwell, A. A. Deniz, D. S. Chemla, P. G. Schultz, and S. Weiss, Proc. Nat. Acad. Sci. (USA) 96, 893-898 (1999).
16. G. V. Shivashankar, S. Liu & A. J. Libchaber, Appl. Phys. Lett. 76, 3638-3640 (2000).
17. F. Rieke and D. A. Baylor, Revs. Mod. Phys. 70, 1027-1036 (1998).
2000
Computing with Finite and Infinite Networks Ole Winther* Theoretical Physics, Lund University Sölvegatan 14 A, S-223 62 Lund, Sweden winther@nimis.thep.lu.se Abstract Using statistical mechanics results, I calculate learning curves (average generalization error) for Gaussian processes (GPs) and Bayesian neural networks (NNs) used for regression. Applying the results to learning a teacher defined by a two-layer network, I can directly compare GP and Bayesian NN learning. I find that a GP in general requires O(d^s) training examples to learn input features of order s (d is the input dimension), whereas a NN can learn the task with a number of training examples of the order of the number of adjustable weights. Since a GP can be considered as an infinite NN, the results show that even in the Bayesian approach, it is important to limit the complexity of the learning machine. The theoretical findings are confirmed in simulations with analytical GP learning and a NN mean field algorithm. 1 Introduction Non-parametric kernel methods such as Gaussian Processes (GPs) and Support Vector Machines (SVMs) are closely related to neural networks (NNs). These may be considered as single-layer networks in a possibly infinite-dimensional feature space. Both the Bayesian GP approach and SVMs regularize the learning problem so that only a finite number of the features (dependent on the amount of data) is used. Neal [1] has shown that Bayesian NNs converge to GPs in the limit of an infinite number of hidden units, and furthermore argued that (1) there is no reason to believe that real-world problems should require only a 'small' number of hidden units and (2) there are in the Bayesian approach no reasons (besides computational ones) to limit the size of the network. Williams [2] has derived kernels allowing for efficient computation with both infinite feedforward and radial basis networks.
In this paper, I show that learning with a finite rather than an infinite network can make a profound difference, by studying the case where the task to be learned is defined by a large but finite two-layer NN. A theoretical analysis of the Bayesian approach to learning this task shows that the Bayesian student makes a learning transition from a linear model to a specialized non-linear one when the number of examples is of the order of the number of adjustable weights in the network. This effect, which is also seen in the simulations, is a consequence of the finite complexity of the network. In an infinite network, i.e. a GP, on the other hand, such a transition will not occur. It will eventually learn the task, but it requires O(d^s) training examples to learn features of order s, where d is the input dimension.

* http://www.thep.lu.se/tf2/staff/winther/

Here, I focus entirely on regression. However, the basic conclusions regarding learning with kernel methods and NNs turn out to be valid more generally, e.g. for classification (unpublished results and [3]). I consider the usual Bayesian setup of supervised learning: a training set D_N = {(x_i, y_i) | i = 1, ..., N} (x in R^d and y in R) is known, and the output for a new input x is predicted by the function f(x), which is sampled from the prior distribution of model outputs. I will consider both a Gaussian process prior and the prior implied by a large (but finite) two-layer network. The output noise is taken to be Gaussian, so the likelihood becomes p(y|f(x)) = e^{-(y - f(x))^2/2σ^2}/√(2πσ^2). The error measure is minus the log-likelihood, and the Bayes regressor (which minimizes the expected error) is the posterior mean prediction

⟨f(x)⟩ = E_f [ f(x) ∏_i p(y_i|f(x_i)) ] / E_f [ ∏_i p(y_i|f(x_i)) ],   (1)

where I have introduced E_f, with f = (f(x_1), ..., f(x_N), f(x)), to denote an average with respect to the model output prior. Gaussian processes. In this case, the model output prior is by definition Gaussian,

f ~ N(0, C),   (2)

where C is the covariance matrix.
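For a Gaussian prior, the posterior mean in eq. (1) takes the familiar closed form of GP regression, ⟨f(x)⟩ = c(x)^T (C + σ^2 I)^{-1} y. A minimal sketch (not from the paper; the linear-kernel demo data are invented for illustration):

```python
import numpy as np

def gp_posterior_mean(C_train, c_test, y, sigma2):
    """Posterior mean <f(x)> = c(x)^T (C + sigma^2 I)^{-1} y for a GP prior,
    given the training covariance matrix and test-train covariances."""
    N = C_train.shape[0]
    alpha = np.linalg.solve(C_train + sigma2 * np.eye(N), y)
    return c_test @ alpha

# Demo with a linear kernel C(x, x') = x.x' and a noiseless linear teacher y = 2x;
# with near-zero noise the posterior mean interpolates the teacher.
X = np.array([[1.0], [2.0], [3.0]])
y = 2.0 * X[:, 0]
C = X @ X.T
x_test = np.array([[1.5]])
pred = gp_posterior_mean(C, x_test @ X.T, y, sigma2=1e-8)[0]
```

The `np.linalg.solve` call replaces an explicit matrix inverse, which is the numerically preferred way to apply (C + σ^2 I)^{-1}.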
The covariance matrix is computed from the kernel (covariance function) C(x, x'). Below I give an explicit example corresponding to an infinite two-layer network. Bayesian neural networks. The output of the two-layer NN is given by f(x, w, W) = (1/√K) Σ_k W_k φ(w_k · x), where an especially convenient choice of transfer function in what follows is φ(z) = ∫_{-z}^{z} dt e^{-t^2/2}/√(2π). I consider a Bayesian framework (with fixed known hyperparameters) with a weight prior that factorizes over hidden units, p(w, W) = ∏_k [p(W_k) p(w_k)], and Gaussian input-to-hidden weights w_k ~ N(0, Σ). From Bayesian NNs to GPs. The prior over outputs for the Bayesian neural network is p(f) = ∫ dw dW p(w, W) ∏_i δ(f(x_i) − f(x_i, w, W)). In the infinite hidden unit limit, K → ∞, when p(W_k) has zero mean and finite, say unit, variance, it follows from the central limit theorem (CLT) that the prior distribution converges to a Gaussian process f ~ N(0, C) with kernel [1, 2]

C(x, x') = ∫ dw p(w) φ(w · x) φ(w · x') = (2/π) arcsin( x^T Σ x' / √((1 + x^T Σ x)(1 + x'^T Σ x')) ).   (3)

The rest of the paper deals with a theoretical statistical mechanics analysis and simulations for GPs and Bayesian NNs learning tasks defined by either a NN or a GP. For the simulations, I use analytical GP learning (scaling like O(N^3)) [4] and a TAP mean field algorithm for Bayesian NNs. 2 Statistical mechanics of learning The aim of the average-case statistical mechanics analysis is to derive learning curves, i.e. the expected generalization error as a function of the number of training examples. The generalization error of the Bayes regressor ⟨f(x)⟩ of eq. (1) is

ε_g = ⟨⟨(y − ⟨f(x)⟩)^2⟩⟩,   (4)

where double brackets ⟨⟨...⟩⟩ = ∫ ∏_i [dx_i dy_i p(x_i, y_i)] ... denote an average over both the training examples and the test example (x, y). Rather than using eq. (4) directly, ε_g will, as usually done, be derived from the average of the free energy −⟨⟨ln Z⟩⟩, where the partition function is given by

Z = E_f (2πσ^2)^{−N/2} exp( −(1/2σ^2) Σ_i (y_i − f(x_i))^2 ).   (5)
I will not give many details of the actual calculations here, since they are beyond the scope of the paper, but only outline some of the basic assumptions. 2.1 Gaussian processes The calculation for Gaussian processes is given in another NIPS contribution [5]. The basic assumption made is that y − f(x) becomes Gaussian with zero mean(1) under an average over the training examples: y − f(x) ~ N(0, ⟨⟨(y − f(x))^2⟩⟩). This assumption can be justified by the CLT when f(x) is a sum of many random parts contributing on the same scale. Corrections to the Gaussian assumption may also be calculated [5]. The free energy may be written in terms of a set of order parameters which is found by saddle-point integration. Assuming that the teacher is noisy, y = f*(x) + η with ⟨⟨η^2⟩⟩ = σ*^2, the generalization error is given by the following equation, which depends upon an order parameter v:

ε_g = ( σ*^2 + ⟨⟨f*^2(x)⟩⟩ − ∂_v ( v^2 Ẽ_f ⟨⟨f(x) f*(x)⟩⟩^2 ) ) / ( 1 + (v^2/N) ∂_v Ẽ_f ⟨⟨f^2(x)⟩⟩ )   (6)

v = N ( σ^2 + Ẽ_f ⟨⟨f^2(x)⟩⟩ )^{−1},   (7)

where the new normalized measure Ẽ_f ... ∝ E_f exp( −v ⟨⟨f^2(x)⟩⟩/2 ) ... has been introduced. Kernels in feature space. By performing a Karhunen-Loève expansion, f(x) can be written as a linear perceptron with weights w_p in a possibly infinite feature space,

f(x) = Σ_p w_p √λ_p φ_p(x),   (8)

where the features φ_p(x) are orthonormal eigenvectors of the covariance function with eigenvalues λ_p: ∫ dx p(x) C(x', x) φ_p(x) = λ_p φ_p(x') and ∫ dx p(x) φ_{p'}(x) φ_p(x) = δ_{pp'}. The teacher f*(x) may also be expanded in terms of the features: f*(x) = Σ_p a_p √λ_p φ_p(x). Using the orthonormality, the averages may be found: ⟨⟨f^2(x)⟩⟩ = Σ_p λ_p w_p^2, ⟨⟨f(x) f*(x)⟩⟩ = Σ_p λ_p w_p a_p and ⟨⟨f*^2(x)⟩⟩ = Σ_p λ_p a_p^2. For a Gaussian process prior, the prior over the weights is a spherical Gaussian, w ~ N(0, I). (1) Generalization to non-zero mean is straightforward.
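The identification of the infinite-network kernel in eq. (3) can be checked numerically against the defining Gaussian average over input-to-hidden weights. A sketch (assuming the transfer function φ(z) = erf(z/√2), which is what makes the arcsin form with prefactor 2/π exact; the test vectors and Σ are invented):

```python
import numpy as np
from math import erf, pi, sqrt

def arcsin_kernel(x, xp, Sigma):
    """Eq. (3): the K -> infinity covariance of the two-layer network."""
    num = x @ Sigma @ xp
    den = sqrt((1.0 + x @ Sigma @ x) * (1.0 + xp @ Sigma @ xp))
    return float((2.0 / pi) * np.arcsin(num / den))

def mc_kernel(x, xp, Sigma, n_samples=200_000, seed=0):
    """Monte Carlo estimate of E_w[phi(w.x) phi(w.x')] with w ~ N(0, Sigma)
    and phi(z) = erf(z / sqrt(2)), i.e. the kernel's defining average."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)
    w = rng.standard_normal((n_samples, len(x))) @ L.T  # rows ~ N(0, Sigma)
    phi = lambda z: np.array([erf(zi / sqrt(2.0)) for zi in z])
    return float(np.mean(phi(w @ x) * phi(w @ xp)))

x = np.array([1.0, 0.5, -0.3])
xp = np.array([0.2, 1.0, 0.4])
Sigma = 0.5 * np.eye(3)
exact = arcsin_kernel(x, xp, Sigma)
approx = mc_kernel(x, xp, Sigma)
```

With 200,000 samples the Monte Carlo standard error is on the order of 10^-3, so the two values should agree to roughly two decimal places.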
Averaging over w, the saddle-point equations can be written in terms of the number of examples N, the noise levels σ^2 and σ*^2, the eigenvalues of the covariance function λ_p, and the teacher projections a_p:

ε_g = ( σ*^2 + Σ_p λ_p a_p^2/(1 + vλ_p)^2 ) ( 1 − (v^2/N) Σ_p λ_p^2/(1 + vλ_p)^2 )^{−1}   (9)

v = N ( σ^2 + Σ_p λ_p/(1 + vλ_p) )^{−1}.   (10)

These equations are valid for a fixed teacher. However, eq. (9) may also be averaged over the distribution of teachers. In the Bayes optimal scenario, the teacher is sampled from the same prior as the student and σ^2 = σ*^2. Thus a_p ~ N(0, 1), implying ā_p^2 = 1, where the average over the teacher is denoted by an overline. In this case the equations reduce to the Bayes optimal result first derived by Sollich [6]: ε_g = ε_g^Bayes = N/v. Learning finite nets. Next, I consider the case where the teacher is the two-layer network f*(x) = f(x, w, W) and the GP student uses the infinite-net kernel, eq. (3). The average over the teacher corresponds to an average over the weight prior, and since the teacher average of f*(x) f*(x') equals C(x, x'), I get

ā_p^2 λ_p = ∫ dx dx' p(x) p(x') C(x, x') φ_p(x) φ_p(x') = λ_p,   (11)

where the eigenvalue equation and the orthonormality have been used. The theory therefore predicts that a GP student (with the infinite network kernel) will have the same learning curve irrespective of the number of hidden units of the NN teacher. This result is a direct consequence of the Gaussian assumption made for the average over examples. However, what is more surprising is that it is found to be a very good approximation in simulations down to K = 1, i.e. a simple perceptron with a sigmoid non-linearity. Inner product kernels. I specialize to inner product kernels C(x, x') = c(x · x'/d) and consider large input dimensionality d and input components which are iid with zero mean and unit variance. The eigenvectors are products of the input components, φ_p(x) = ∏_{m∈p} x_m, and are indexed by subsets of input indices, e.g. p = {1, 2, 42} [3].
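The saddle-point equations can be solved by a simple fixed-point iteration in v, and in the Bayes optimal case the result should collapse to ε_g = N/v. A sketch (the equations in the comments are my reading of eqs. (9)-(10), and the eigenvalue spectrum λ_p = 1/p^2 is invented for illustration):

```python
import numpy as np

def solve_v(N, sigma2, lam, tol=1e-10, max_iter=10_000):
    """Fixed-point iteration of eq. (10): v = N / (sigma^2 + sum_p lam_p/(1 + v*lam_p)).
    The right-hand side is monotone in v, so iteration from v = N/sigma^2 converges."""
    v = N / sigma2
    for _ in range(max_iter):
        v_new = N / (sigma2 + np.sum(lam / (1.0 + v * lam)))
        if abs(v_new - v) < tol:
            break
        v = v_new
    return v

def eps_g(N, v, sigma2_star, sigma2, lam, a2):
    """Eq. (9) for a fixed teacher with squared projections a2 = a_p^2."""
    num = sigma2_star + np.sum(lam * a2 / (1.0 + v * lam) ** 2)
    den = 1.0 - (v ** 2 / N) * np.sum(lam ** 2 / (1.0 + v * lam) ** 2)
    return num / den

lam = 1.0 / np.arange(1, 51) ** 2          # hypothetical eigenvalue spectrum
N, sigma2 = 40, 0.1
v = solve_v(N, sigma2, lam)
# Bayes optimal scenario: a_p^2 = 1 and sigma*^2 = sigma^2.
eps_bayes = eps_g(N, v, sigma2, sigma2, lam, np.ones_like(lam))
```

The assertion that eps_bayes equals N/v is a useful internal consistency check: it holds identically once v solves eq. (10).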
The eigenvalues are λ_p = c^{(|p|)}(0)/d^{|p|}, with degeneracy n_{|p|} = (d choose |p|) ≈ d^{|p|}/|p|!, where |p| is the cardinality (in the example above, |p| = 3). Plugging these results into eqs. (9) and (10), it follows that to learn features that are of order s in the inputs, O(d^s) examples are needed. The same behavior has been predicted for learning in SVMs [3]. The infinite net, eq. (3), reduces to an inner product covariance function for Σ = τI/d (τ controls the degree of non-linearity of the rule) and large d, x · x ≈ d:

C(x, x') = c(x · x'/d) = (2/π) arcsin( τ x · x' / (d(1 + τ)) ).   (12)

Figure 1 shows learning curves for GPs with the infinite network kernel. The mismatch between theory and simulations is expected to be due to O(1/d) corrections to the eigenvalues λ_p. The figure clearly shows that learning of the different-order features takes place on different scales. The stars on the ε_g-axis show the theoretical prediction of the asymptotic error for N = O(d), O(d^3), ... (the teacher is an odd function).

Figure 1: Learning curves for Gaussian processes with the infinite network kernel (d = 10, τ = 10 and σ^2 = 0.01) for two scales of training examples (small, N = O(d), and large, N = O(d^3)). The full line is the theoretical prediction for the Bayes optimal GP scenario. The two other curves (almost on top of each other, as predicted by theory) are simulations for the Bayes optimal scenario (dotted line) and for a GP learning a neural network with K = 30 hidden units (dash-dotted line).

2.2 Bayesian neural networks The limit of large but finite NNs allows for efficient computation, since the prior over functions can be approximated by a Gaussian. The hidden-to-output weights are for simplicity set to one, and we introduce the 'fields' h_k(x) = w_k · x and write the output as f(x, w) = f(h(x)) = (1/√K) Σ_k φ(h_k(x)), with h(x) = (h_1(x), ..., h_K(x)).
In the following, I discuss the TAP mean field algorithm used to find an approximation to the Bayes regressor, and briefly the theoretical statistical mechanics analysis for the NN task. Mean field algorithm. The derivation sketched here is a straightforward generalization of previous results for neural networks [7]. The basic cavity assumption [7, 8] is that for large d, K, and for a suitable input distribution, the predictive distribution p(f(x)|D_N) is Gaussian: p(f(x)|D_N) ≈ N(⟨f(x)⟩, ⟨f^2(x)⟩ − ⟨f(x)⟩^2). The predictive distribution for the fields h(x) is also assumed to be Gaussian, p(h(x)|D_N) ≈ N(⟨h(x)⟩, V), where V = ⟨h(x) h(x)^T⟩ − ⟨h(x)⟩⟨h(x)⟩^T. Using these assumptions, I get an approximate Bayes regressor, eq. (13). To make predictions, we therefore need the first two moments of the weights, since ⟨h_k(x)⟩ = ⟨w_k⟩ · x and V_kl = Σ_{mn} x_m x_n (⟨w_mk w_nl⟩ − ⟨w_mk⟩⟨w_nl⟩). We can simplify this in the large-d limit by taking the inputs to be iid with zero mean and unit variance: V_kl ≈ ⟨w_k · w_l⟩ − ⟨w_k⟩ · ⟨w_l⟩. This approximation can be avoided at a substantial computational cost [8]. Furthermore, ⟨w_k · w_l⟩ turns out to be equal to the prior covariance δ_kl τ/d [7]. An exact relation, eq. (14), is obtained for the mean weights, where p(y_i|D_N\(x_i, y_i)) = ∫ dh(x_i) p(y_i|h(x_i)) p(h(x_i)|D_N\(x_i, y_i)).

Figure 2: Learning curves for Bayesian NNs and GPs. The dashed line is simulations for the TAP mean field algorithm (d = 30, K = 5, τ = 1 and σ^2 = 0.01) learning a corresponding NN task, i.e. an approximation to the Bayes optimal scenario. The dash-dotted line is the simulations for GPs learning the NN task. Virtually on top of that curve is the curve for the Bayes optimal GP scenario (dotted line). The full lines are the theoretical prediction. Up to N = N_c = 2.51 dK, the learning curves for Bayesian NNs and GPs coincide.
At N_c, the statistical mechanics theory predicts a first-order transition to a specialized solution for the NN Bayes optimal scenario (lower full line).

p(y_i|h(x_i)) is the likelihood, and p(h(x_i)|D_N\(x_i, y_i)) is the predictive distribution for h(x_i) for a training set where the ith example has been left out. In accordance with the above, I assume p(h(x_i)|D_N\(x_i, y_i)) ≈ N(⟨h(x_i)⟩_{\i}, V). Finally, generalizing the relation found in Refs. [7, 8], I can relate the reduced mean to the full posterior mean, ⟨h_k(x_i)⟩_{\i} = ⟨h_k(x_i)⟩ − Σ_l V_kl α_li, to express everything in terms of ⟨w_k⟩ and α_ki, k = 1, ..., K and i = 1, ..., N. The mean field equations are solved by iteration in α_ki and ⟨w_mk⟩, following the recipe given in Ref. [8]. The algorithm is tested using a teacher sampled from the NN prior, i.e. the Bayes optimal scenario. Two types of solutions are found: a linear symmetric one and a non-linear specialized one. In the symmetric solution, ⟨w_k⟩ = ⟨w_l⟩ and ⟨w_k⟩ · ⟨w_k⟩ = O(τ/dK). This means that the machine is linear (when τ << K). For N = O(dK), a transition to a specialized solution occurs, where each ⟨w_k⟩, k = 1, ..., K, aligns to a distinct weight vector of the teacher and ⟨w_k⟩ · ⟨w_k⟩ = O(τ/d). The Bayesian student thus learns the linear features for N = O(d). However, unlike the GP, it learns all of the remaining non-linear features for N = O(dK). The resulting empirical learning curve, averaged over 25 independent runs, is shown in figure 2. It turned out that setting ⟨h_k(x_i)⟩_{\i} = ⟨h_k(x_i)⟩ was a necessary heuristic in order to find the specialized solution. The transition to the specialized solution, although very abrupt for the individual run, is smeared out because it occurs at a different N for each run. The theoretical learning curve is also shown in figure 2. It has been derived by generalizing the results of Ref. [9] for the Gibbs algorithm to the Bayes optimal scenario. The picture that emerges is in accordance with the empirical findings.
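The mean-field predictor above rests on Gaussian averages of the transfer function. For φ(z) = erf(z/√2), the choice made in Section 1, this average has the closed form ⟨φ(h)⟩ = φ(μ/√(1 + V)) for h ~ N(μ, V), which is a standard Gaussian-integral identity rather than something specific to this paper. A sketch checking it by Monte Carlo (the values of μ and V are arbitrary):

```python
import numpy as np
from math import erf, sqrt

def phi(z):
    """Transfer function assumed throughout: phi(z) = erf(z / sqrt(2))."""
    return erf(z / sqrt(2.0))

def gaussian_avg_phi(mu, V):
    """Closed form for E[phi(h)] with h ~ N(mu, V): phi(mu / sqrt(1 + V))."""
    return phi(mu / sqrt(1.0 + V))

mu, V = 0.7, 2.0
rng = np.random.default_rng(1)
h = mu + sqrt(V) * rng.standard_normal(500_000)  # samples of h ~ N(mu, V)
mc = float(np.mean([phi(x) for x in h]))
exact = gaussian_avg_phi(mu, V)
```

This is the kind of identity that lets a mean-field algorithm propagate Gaussian field statistics through the non-linearity without numerical integration.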
The transition to the specialized solution is predicted to be first order, i.e. with a discontinuous jump in the relevant order parameters at the number of examples N_c(σ^2, τ) where the specialized solution becomes the physical solution (i.e. the lowest free energy solution). The mean field algorithm cannot completely reproduce the theoretical predictions because the solution gets trapped in the meta-stable symmetric solution. This is often observed for first-order transitions and should also be observable in the Monte Carlo approach to Bayesian NNs [1]. 3 Discussion Learning a finite two-layer regression NN using (1) the Bayes optimal algorithm and (2) the Bayes optimal algorithm for an infinite network (implemented by a GP) is compared. It is found that the Bayes optimal algorithm can have a far superior performance. This can be explained as an entropic effect: the infinite network will, although the correct finite network solution is included a priori, have a vanishing probability of finding this solution. The finite network, on the other hand, is much more constrained with respect to the functions it implements. It can thus, even in the Bayesian setting, give a great payoff to limit complexity. For a d-dimensional inner product kernel with an iid input distribution, it is found that in general O(d^s) training examples are required to learn features of order s. Unpublished results and [3] show that these conclusions remain true also for SVM and GP classification. For SVM hand-written digit recognition, fourth-order kernels give good results in practice. Since N = O(10^4)-O(10^5), it can be concluded that the 'effective' dimension d_effective = O(10), against typically d = 400; i.e. some inputs must be very correlated and/or carry very little information. It could therefore be interesting to develop methods to measure the effective dimension and to extract the important lower-dimensional features rather than performing the classification directly from the images.
Acknowledgments I am thankful to Manfred Opper for valuable discussions and for sharing his results with me, and to Klaus-Robert Müller for discussions at NIPS. This research is supported by the Swedish Foundation for Strategic Research. References [1] R. Neal, Bayesian Learning for Neural Networks, Lecture Notes in Statistics, Springer (1996). [2] C. K. I. Williams, Computing with Infinite Networks, in Advances in Neural Information Processing Systems 9, Eds. M. C. Mozer, M. I. Jordan and T. Petsche, 295-301, MIT Press (1997). [3] R. Dietrich, M. Opper and H. Sompolinsky, Statistical Mechanics of Support Vector Networks, Phys. Rev. Lett. 82, 2975-2978 (1999). [4] C. K. I. Williams and C. E. Rasmussen, Gaussian Processes for Regression, in Advances in Neural Information Processing Systems 8 (NIPS'95), Eds. D. S. Touretzky, M. C. Mozer and M. E. Hasselmo, 514-520, MIT Press (1996). [5] D. Malzahn and M. Opper, in this volume. [6] P. Sollich, Learning Curves for Gaussian Processes, in Advances in Neural Information Processing Systems 11 (NIPS'98), Eds. M. S. Kearns, S. A. Solla, and D. A. Cohn, 344-350, MIT Press (1999). [7] M. Opper and O. Winther, Mean Field Approach to Bayes Learning in Feed-Forward Neural Networks, Phys. Rev. Lett. 76, 1964-1967 (1996). [8] M. Opper and O. Winther, Gaussian Processes for Classification: Mean Field Algorithms, Neural Computation 12, 2655-2684 (2000). [9] M. Ahr, M. Biehl and R. Urbanczik, Statistical physics and practical training of soft-committee machines, Eur. Phys. J. B 10, 583 (1999).
2000
Hierarchical Memory-Based Reinforcement Learning Natalia Hernandez-Gardiol Artificial Intelligence Lab Massachusetts Institute of Technology Cambridge, MA 02139 nhg@ai.mit.edu Sridhar Mahadevan Department of Computer Science Michigan State University East Lansing, MI 48824 mahadeva@cse.msu.edu Abstract A key challenge for reinforcement learning is scaling up to large partially observable domains. In this paper, we show how a hierarchy of behaviors can be used to create and select among variable-length short-term memories appropriate for a task. At higher levels in the hierarchy, the agent abstracts over lower-level details and looks back over a variable number of high-level decisions in time. We formalize this idea in a framework called Hierarchical Suffix Memory (HSM). HSM uses a memory-based SMDP learning method to rapidly propagate delayed reward across long decision sequences. We describe a detailed experimental study comparing memory vs. hierarchy using the HSM framework on a realistic corridor navigation task. 1 Introduction Reinforcement learning encompasses a class of machine learning problems in which an agent learns from experience as it interacts with its environment. One fundamental challenge faced by reinforcement learning agents in real-world problems is that the state space can be very large, and consequently there may be a long delay before reward is received. Previous work has addressed this issue by breaking down a large task into a hierarchy of subtasks or abstract behaviors [1, 3, 5]. Another difficult issue is the problem of perceptual aliasing: different real-world states can often generate the same observations. One strategy to deal with perceptual aliasing is to add memory about past percepts. Short-term memory consisting of a linear (or tree-based) sequence of primitive actions and observations has been shown to be a useful strategy [2].
However, considering short-term memory at a flat, uniform resolution of primitive actions would likely scale poorly to tasks with long decision sequences. Thus, just as spatio-temporal abstraction of the state space improves scaling in completely observable environments, for large partially observable environments a similar benefit may result if we consider the space of past experience at variable resolution. Given a task, we want a hierarchical strategy for rapidly bringing to bear past experience that is appropriate to the grain-size of the decisions being considered.

Figure 1 (panels: corner, T-junction, dead end): This figure illustrates memory-based decision making at two levels in the hierarchy of a navigation task. At each level, each decision point (shown with a star) examines its past experience to find states with similar history (shown with shadows). At the abstract (navigation) level, observations and decisions occur at intersections. At the lower (corridor-traversal) level, observations and decisions occur within the corridor.

In this paper, we show that considering past experience at a variable, task-appropriate resolution can speed up learning and greatly improve performance under perceptual aliasing. The resulting approach, which we call Hierarchical Suffix Memory (HSM), is a general technique for solving large, perceptually aliased tasks. 2 Hierarchical Suffix Memory By employing short-term memory over abstract decisions, each of which involves a hierarchy of behaviors, we can apply memory at a more informative level of abstraction.
An important side-effect is that the agent can look at a decision point many steps back in time while ignoring the exact sequence of low-level observations and actions that transpired. Figure 1 illustrates the HSM framework. The problem of learning under perceptual aliasing can be viewed as discovering an informative sequence of past actions and observations (that is, a history suffix) for a given world state that enables an agent to act optimally in the world. We can think of each situation in which an agent must choose an action (a choice point) as being labeled with a pair [σ, l]: l refers to the abstraction level and σ refers to the history suffix. In the completely observable case, σ has a length of one, and decisions are made based on the current observation. In the partially observable case, we must additionally consider past history when making decisions. In this case, the suffix σ is some sequence of past observations and actions that must be learned. This idea of representing memory as a variable-length suffix derives from work on learning approximations of probabilistic suffix automata [2, 4]. Here is the general HSM procedure (including model-free and model-based updates):

1. Given an abstraction level l and choice point s within l: for each potential future decision d, examine the history at level l to find a set of past choice points that have executed d and whose incoming (suffix) history most closely matches that of the current point. Call this set of instances the "voting set" for decision d.

2. Choose d_t as the decision with the highest average discounted sum of reward over the voting set. Occasionally, choose d_t using an exploration strategy. Here, t is the event counter of the current choice point at level l.

3. Execute the decision d_t and record: o_t, the resulting observation; r_t, the reward received; and n_t, the duration of abstract action d_t (measured by the number of primitive environment transitions executed by the abstract action).
Note that for every environment transition from state s_{i-1} to state s_i with reward r_i and discount γ, we accumulate the reward and update the discount factor: r_t ← r_t + γ_t r_i, γ_t ← γ γ_t.

4. Update the Q-value for the current decision point and for each instance in the voting set, using the decision, reward, and duration values recorded along with the instance.

Model-free: use an SMDP Q-learning update rule (β is the learning rate): Q_l(s_t, d_t) ← (1 − β) Q_l(s_t, d_t) + β (r_t + γ_t max_d Q_l(s_{t+n_t}, d)).

Model-based: if a state-transition model is being used, a sweep of value iteration can be executed(1). Let the state corresponding to the decision point at time t be represented by the suffix s: Q_l(s, d_t) ← R_l(s, d_t) + Σ_{s'} P_l(s' | s, d_t) V_l(s') γ^{N_{d_t}}, where R_l(s, d_t) is the estimated immediate reward from executing decision d_t from the choice point [s, l]; P_l(s' | s, d_t) is the estimated probability that the agent arrives in [s', l] given that it executed d_t from [s, l]; V_l(s') is the utility of the situation [s', l]; and N_{d_t} is the average duration of the transition from [s, l] to [s', l] under abstract action d_t.

HSM requires a technique for short-term memory. We implemented the Nearest Sequence Memory (NSM) and Utile Suffix Memory (USM) algorithms proposed by McCallum [2]. NSM records each of its raw experiences as a linear chain. To choose the next action, the agent evaluates the outcomes of the k "nearest" neighbors in the experience chain. NSM evaluates the closeness between two states according to the match length of the suffix chain preceding the states. The chain can either be grown indefinitely, or old experiences can be replaced after the chain reaches a maximum length. With NSM, a model-free learning method, HSM uses an SMDP Q-learning rule as described above. USM also records experience in a linear time chain. However, instead of attempting to choose actions based on a greedy history match, USM tries to explicitly determine how much memory is useful for predicting reward.
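The reward accumulation in step 3, the model-free backup in step 4, and NSM's suffix-match measure can be sketched as follows. This is a hypothetical minimal illustration, not the authors' implementation; the function names, the toy experience chain, and the state labels are invented:

```python
def accumulate(rewards, gamma):
    """Step 3: fold the primitive rewards of one abstract action into (r_t, gamma_t)
    via r_t <- r_t + gamma_t * r_i, gamma_t <- gamma * gamma_t."""
    r_t, gamma_t = 0.0, 1.0
    for r_i in rewards:
        r_t += gamma_t * r_i
        gamma_t *= gamma
    return r_t, gamma_t

def smdp_q_update(Q, s, d, r_t, gamma_t, s_next, decisions, beta=0.1):
    """Step 4 (model-free): Q(s,d) <- (1-beta) Q(s,d) + beta (r_t + gamma_t max_d' Q(s',d')).
    gamma_t = gamma^n already reflects the abstract action's duration n."""
    best_next = max(Q.get((s_next, dp), 0.0) for dp in decisions)
    Q[(s, d)] = (1.0 - beta) * Q.get((s, d), 0.0) + beta * (r_t + gamma_t * best_next)

def match_length(chain, i, j):
    """NSM closeness: length of the common (observation, decision) suffix ending
    at positions i and j of the experience chain."""
    n = 0
    while i - n >= 0 and j - n >= 0 and chain[i - n] == chain[j - n]:
        n += 1
    return n

# Toy usage: accumulate one abstract action's primitive rewards, then back it up.
r_t, gamma_t = accumulate([1.0, 1.0, 1.0], gamma=0.9)
Q = {}
smdp_q_update(Q, s="suffix_A", d="go_east", r_t=r_t, gamma_t=gamma_t,
              s_next="suffix_B", decisions=["go_east", "go_west"])

chain = [("wall", "F"), ("open", "L"), ("wall", "F"), ("open", "L")]
```

In a full NSM implementation, `match_length` would be used to rank all past positions where a candidate decision was taken, and the k best-matching instances would form the voting set of step 1.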
To do this, the agent builds a tree-like structure for state representation online, selectively adding depth to the tree if the additional history distinction helps to predict reward. With USM, which learns a model, HSM updates the Q-values by doing one sweep of value iteration with the leaves of the tree as states. Finally, to implement the hierarchy of behaviors, in principle any hierarchical reinforcement learning method may be used. For our implementation, we used the Hierarchy of Abstract Machines (HAM) framework proposed by Parr and Russell [3]. When executed, an abstract machine executes a partial policy and returns control to the caller upon termination. The HAM architecture uses a Q-learning rule modified for SMDPs.

(1) In this context, "state" is represented by the history suffix. That is, an instance is in a "state" if the instance's incoming history matches the suffix representing the state. In this case, the voting set is exactly the set of instances in the same state as the current choice point s_t.

Figure 2: The corridor environment in the Nomad 200 robot simulator. The goal is the 4-way junction. The robot is shown at the middle T-junction. The robot is equipped with 16 short-range infrared and long-range sonar sensors. The other figures in the environment are obstacles around which the robot must maneuver.

3 The Navigation Task To test the HSM framework, we devised a navigation task in a simulated corridor environment (see Figure 2). The task is for the robot to find its way from the start, the center T-junction, to the goal, the four-way junction.
The robot receives a reward at the goal intersection and a small negative reward for each primitive step taken. Our primary testbed was a simulated agent using a Nomad 200 robot simulator. This simulated robot is equipped with 20 bumper and 16 sonar and infrared sensors, arranged radially. The dynamics of the simulator are not "grid world" dynamics: the Nomad 200 simulator represents continuous, noisy sensor input and the occasional unreliability of actuators. The environment presents significant perceptual ambiguity. Additionally, sensor readings can be noisy; even if the agent is at the goal or an intersection, it might not "see" it. Note the size of the robot relative to the environment in Figure 2. What makes the task difficult are the several activities that must be executed concurrently. Conceptually, there are two levels to our navigation problem. At the top, most abstract, level is the root task of navigating to the goal. At the lower level is the task of physically traversing the corridors, avoiding obstacles, maintaining alignment with the walls, etc. 4 Implementation of the Learning Agents In our experiments, we compared several learning agents: a basic HAM agent, four agents using HSM (each using a different short-term memory technique), and a "flat" NSM agent. To build a set of behaviors for hallway navigation, we used a three-level hierarchy. The top abstract level is basically a choice state for choosing a hallway navigation direction (see Figure 3a). In each of the four nominal directions (front, back, left, right), the agent can make one of three observations: wall, open, or unknown. The agent must learn to choose among the four abstract machines to reach the next intersection.

Figure 3: Hierarchical structure of behaviors for hallway navigation. Figure (a) shows the most abstract level, responsible for navigating in the environment. Figures (b) and (c) show two implementations of the hall-traversal machines. The machine in Figure (b) is reactive, and Figure (c) is a machine with a choice point.

This top-level machine has control initially, and it regains control at intersections. The second level of the hierarchy contains the machines for traversing the hallway. The traversal behavior is shown in Figure 3b. Each of the four machines at this level executes a reactive strategy for traversing a corridor. Finally, the third level of the hierarchy implements the follow-wall and avoid-obstacle strategies using primitive actions. Both the avoid-obstacle and the follow-wall strategies were themselves trained previously using Q-learning, to exploit the power of reuse in the hierarchical framework. The HAM agent uses a three-level behavior hierarchy as described above. There is a single choice state, at the top level, and the agent learns to coordinate its choices by keeping a table of Q-values. The Q-value table is indexed by the current percepts and the chosen action (one of four abstract machines). The HAM agent uses a discount of 0.9 and a learning rate of 0.1. Exploration is done with a simple epsilon-greedy strategy. The first pair of HSM agents use the same behavior hierarchy as the HAM agent. However, they use short-term memory at the most abstract level to learn a strategy for navigating the corridor. The first of these agents uses NSM at the top level with a history length of 1000, k = 4, a discount of 0.9, and a learning rate of 0.1. The second agent uses USM at the top level with a discount of 0.95. The performance of these top-level memory agents was studied as a control against the more complex multi-level memory agents described next. The next pair of HSM agents use short-term memory both at the abstract navigation level and at the intermediate level.
The behavior decomposition at the abstract navigation level is the same as for the previous agents; however, the traversal behavior is in turn composed of machines that must make a decision based on short-term memory. Each of the machines at the traversal level uses short-term memory to learn to coordinate a strategy of behaviors for traversing a corridor. The memory-based version of the traversal machine is shown in Figure 3c. The first of these agents uses NSM as the short-term memory technique at both levels of the hierarchy. It uses a history length of 1000, k = 4, a discount of 0.9, and a learning rate of 0.1. The second agent uses USM as the short-term memory technique at the top level with a discount of 0.95. At the intermediate level, it uses NSM with the same learning parameters as the preceding agent. Exploration is done with a simple epsilon-greedy strategy in all cases. Finally, we study the behavior of a "flat" NSM agent. The flat agent must keep track of the following perceptual data: first, it needs the same perceptual information as the top-level HAM (so it can identify the goal); second, it needs the additional perceptual data for aligning to walls and for avoiding obstacles: whether it was bumped, and the angle to the wall (binned into 4 groups of 45° each). The flat agent chooses among four primitive actions: go-forward, veer-left, veer-right, and back-up. Not only must it learn to make it to the goal, it must simultaneously learn to align itself to walls and avoid obstacles. The NSM agent uses a history length of 1000, k = 4, a discount of 0.9, and a learning rate of 0.1. Exploration is done with a simple epsilon-greedy strategy.

5 Experimental Results

In Figure 4, we see the learning performance of each agent in the navigation task. The graphs show the performance advantage of both multi-level HSM agents over the other agents. In particular, we find that the flat memory-based agent does considerably worse than the other three, as expected.
The flat agent must carry around the perceptual data to perform both high- and low-level behaviors. From the point of view of navigation, this results in long strings of uninformative corridor states between the more informative intersection states. Since it takes such an agent longer to discover patterns in its experience, it never quite learns to navigate successfully to the goal. Next, both multi-level memory-based hierarchical agents outperform the HAM agent. The HAM agent does better at navigation than the flat agent since it abstracts away the perceptually aliased corridor states. However, it is unable to distinguish between all of the intersections. Without the ability to tell which T-junctions lead to the goal, and which to a dead end, the HAM agent does not perform as well. The multi-level HSM agents also outperform the single-level ones. The multi-level agents can tune their traversing strategy to the characteristics of the cluttered hallway by using short-term memory at the intermediate level. Finally, although it initially does worse, the multi-level HSM agent with USM soon outperforms the multi-level HSM agent with NSM. This is because the USM algorithm forces the agent to learn a state representation that uses only as much incoming history as needed to predict reward. That is, it tries to learn the right history suffix for each situation rather than approximating the suffix by simply matching greedily on incoming history. Learning such a representation takes some time, but, once learned, produces better performance.

6 Conclusions and Future Work

In this paper we described a framework for solving large perceptually aliased tasks called Hierarchical Suffix Memory (HSM). This approach uses a hierarchical behavioral structure to index into past memory at multiple levels of resolution. Organizing past experience hierarchically scales better to problems with long decision sequences.
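The greedy suffix matching at the heart of NSM (and, via the hierarchy, of HSM) can be made concrete. The sketch below is our own minimal illustration; the encoding of history as (observation, action) pairs and the simple averaging vote are simplifying assumptions, not the authors' implementation:

```python
def suffix_match_length(history, i, j):
    """Length of the matching suffix of (observation, action) pairs
    ending at positions i and j of the experience history."""
    n = 0
    while i - n >= 0 and j - n >= 0 and history[i - n] == history[j - n]:
        n += 1
    return n

def nsm_vote(history, q_values, k):
    """Estimate the value of the current step by averaging the stored
    values of the k past steps whose incoming suffix matches best."""
    t = len(history) - 1
    scored = [(suffix_match_length(history, t, j), j) for j in range(t)]
    scored.sort(reverse=True)
    nearest = [j for _, j in scored[:k]]
    return sum(q_values[j] for j in nearest) / k
```

Longer matching suffixes disambiguate aliased observations: two "corridor" percepts with different incoming histories vote from different past experiences.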
We presented an experiment comparing six different learning methods, showing that hierarchical short-term memory produces overall the best performance in a perceptually aliased corridor navigation task.

Figure 4: Learning performance in the navigation task. Each curve is averaged over eight trials for each agent. [Left panel: reward per primitive step versus number of primitive steps for multi-level memory (USM+HAM), multi-level memory (NSM+HAM), no memory (HAM), and flat memory (NSM). Right panel: the multi-level memory agents versus the top-level-only memory agents (USM+HAM and NSM+HAM).]

One key limitation of the current HSM framework is that each abstraction level examines only the history at its own level. Allowing interaction between the memory streams at each level of the hierarchy would be beneficial. Consider a navigation task in which the decision at a given intersection depends on an observation seen while traversing the corridor. In this case, the abstract level should have the ability to "zoom in" to inspect a particular low-level experience in greater detail. We expect that pursuit of general frameworks such as HSM to manage past experience at variable granularity will lead to strategies for control that are able to gracefully scale to large, partially observable problems.

Acknowledgements

This research was carried out while the first author was at the Department of Computer Science and Engineering, Michigan State University. This research is supported in part by a KDI grant from the National Science Foundation, ECS-9873531.
Second order approximations for probability models

Hilbert J. Kappen, Department of Biophysics, Nijmegen University, Nijmegen, the Netherlands, bert@mbfys.kun.nl
Wim Wiegerinck, Department of Biophysics, Nijmegen University, Nijmegen, the Netherlands, wimw@mbfys.kun.nl

Abstract

In this paper, we derive a second order mean field theory for directed graphical probability models. By using an information theoretic argument it is shown how this can be done in the absence of a partition function. This method is a direct generalisation of the well-known TAP approximation for Boltzmann Machines. In a numerical example, it is shown that the method greatly improves the first order mean field approximation. For a restricted class of graphical models, so-called single overlap graphs, the second order method has comparable complexity to the first order method. For sigmoid belief networks, the method is shown to be particularly fast and effective.

1 Introduction

Recently, a number of authors have proposed deterministic methods for approximate inference in large graphical models. The simplest approach gives a lower bound on the probability of a subset of variables using Jensen's inequality (Saul et al., 1996). The method involves the minimization of the KL divergence between the target probability distribution p and some 'simple' variational distribution q. The method can be applied to a large class of probability models, such as sigmoid belief networks, DAGs and Boltzmann Machines (BM). For Boltzmann-Gibbs distributions, it is possible to derive the lower bound as the first term in a Taylor series expansion of the free energy around a factorized model. The free energy is given by -log Z, where Z is the normalization constant of the Boltzmann-Gibbs distribution: p(x) = exp(-E(x))/Z. This Taylor series can be continued and the second order term is known as the TAP correction (Plefka, 1982; Kappen and Rodriguez, 1998).
The second order term significantly improves the quality of the approximation, but is no longer a bound. For probability distributions that are not Boltzmann-Gibbs distributions, it is not obvious how to obtain the second order approximation. However, there is an alternative way to compute the higher order corrections, based on an information theoretic argument. Recently, this argument was applied to stochastic neural networks with asymmetric connectivity (Kappen and Spanjers, 1999). Here, we apply this idea to directed graphical models.

2 The method

Let x = (x₁, ..., x_n) be an n-dimensional vector, with x_i taking on discrete values. Let p(x) be a directed graphical model on x. We will assume that p(x) can be written as a product of potentials in the following way:

p(x) = ∏_{k=1}^n p_k(x_k|π_k) = exp( Σ_{k=1}^n φ_k(x^k) ).   (1)

Here, p_k(x_k|π_k) denotes the conditional probability table of variable x_k given the values of its parents π_k, x^k = (x_k, π_k) denotes the subset of variables that appear in potential k, and φ_k(x^k) = log p_k(x_k|π_k). Potentials can be overlapping, x^k ∩ x^l ≠ ∅, and x = ∪_k x^k. We wish to compute the marginal probability that x_i has some specific value s_i in the presence of some evidence. We therefore denote x = (e, s), where e denotes the subset of variables that constitute the evidence and s denotes the remainder of the variables. The marginal is given as

p(s_i|e) = p(s_i, e) / p(e).   (2)

Both numerator and denominator contain sums over hidden states. These sums scale exponentially with the size of the problem, and therefore the computation of marginals is intractable. We propose to approximate this problem by using a mean field approach. Consider a factorized distribution on the hidden variables h:

q(s) = ∏_{i∈h} q_i(s_i).   (3)

We wish to find the factorized distribution q that best approximates p(s|e). Consider as a distance measure

KL = Σ_s p(s|e) log( p(s|e) / q(s) ).   (4)

It is easy to see that the q that minimizes KL satisfies

q_i(s_i) = p(s_i|e).   (5)

We now think of the manifold of all probability distributions of the form Eq.
1, spanned by the coordinates φ_k(x^k), k = 1, ..., m. For each k, φ_k(x^k) is a table of numbers, indexed by x^k. This manifold contains a submanifold of factorized probability distributions in which the potentials factorize: φ_k(x^k) = Σ_{i∈k} φ_ki(x_i). When in addition Σ_{k, i∈k} φ_ki(x_i) = log q_i(x_i), i ∈ h, p(s|e) reduces to q(s). Assume now that p(s|e) is somehow close to the factorized submanifold. The difference Δp(s_i|e) = p(s_i|e) - q_i(s_i) is then small, and we can expand this small difference in terms of changes in the parameters Δφ_k(x^k) = φ_k(x^k) - log q(x^k), k = 1, ..., m:

Δ log p(s_i|e) = Σ_{k=1}^m Σ_{x̄^k} ( ∂log p(s_i|e) / ∂φ_k(x̄^k) )_q Δφ_k(x̄^k)
 + ½ Σ_{k,l} Σ_{x̄^k, ȳ^l} ( ∂²log p(s_i|e) / ∂φ_k(x̄^k) ∂φ_l(ȳ^l) )_q Δφ_k(x̄^k) Δφ_l(ȳ^l)
 + higher order terms.   (6)

The differentials are evaluated in the factorized distribution q. The left-hand side of Eq. 6 is zero because of Eq. 5, and we solve for q(s_i). This factorized distribution gives the desired marginals up to the order of the expansion of Δ log p(s_i|e). It is straightforward to compute the derivatives:

∂log p(s_i|e) / ∂φ_k(x̄^k) = p(x̄^k|s_i, e) - p(x̄^k|e)
∂²log p(s_i|e) / ∂φ_k(x̄^k) ∂φ_l(ȳ^l) = p(x̄^k, ȳ^l|s_i, e) - p(x̄^k, ȳ^l|e) - p(x̄^k|s_i, e) p(ȳ^l|s_i, e) + p(x̄^k|e) p(ȳ^l|e).   (7)

We introduce the notation ⟨...⟩_{s_i} and ⟨...⟩ as the expectation values with respect to the factorized distributions q(x|s_i, e) and q(x|e), respectively. We define ⟨⟨...⟩⟩_{s_i} ≡ ⟨...⟩_{s_i} - ⟨...⟩. We obtain

Δ log p(s_i|e) = Σ_k ⟨⟨Δφ_k⟩⟩_{s_i}
 + ½ Σ_{k,l} ( ⟨⟨Δφ_k Δφ_l⟩⟩_{s_i} - ⟨Δφ_k⟩_{s_i} ⟨Δφ_l⟩_{s_i} + ⟨Δφ_k⟩ ⟨Δφ_l⟩ )
 + higher order terms.   (8)

To first order, setting Eq. 8 equal to zero we obtain

0 = Σ_k ⟨⟨Δφ_k⟩⟩_{s_i} = ⟨log p(x)⟩_{s_i} - log q(s_i) + const.,   (9)

where we have absorbed all terms independent of i into a constant. Thus, we find the solution

q(s_i) = (1/Z_i) exp( ⟨log p(x)⟩_{s_i} ),   (10)

in which the constants Z_i follow from normalisation. The first order term is equivalent to the standard mean field equations, obtained from Jensen's inequality. The correction with second order terms is obtained in the same way, again dropping terms independent of i:

q(s_i) = (1/Z_i) exp( ⟨log p(x)⟩_{s_i} + ½ Σ_{k,l} ( ⟨Δφ_k Δφ_l⟩_{s_i} - ⟨Δφ_k⟩_{s_i} ⟨Δφ_l⟩_{s_i} ) ),   (11)

where, again, the constants Z_i follow from normalisation. These equations, which form the main result of this paper, are generalizations of the mean field equations with TAP corrections for directed graphical models. Both left- and right-hand sides of Eqs. 10 and 11 depend on the unknown probability distribution q(s) and can be solved by fixed point iteration.

3 Complexity and single-overlap graphs

The complexity of the first order equations Eq. 10 is exponential in the number of variables in the potentials φ_k of p: if the maximal clique size is c, then for each i we need of the order of n_i exp(c) computations, where n_i is the number of cliques that contain node i. The second term scales worse, since one must compute averages over the union of two overlapping cliques and because of the double sum. However, things are not so bad.

Figure 1: An example of a single-overlap graph. Left: the chest clinic model (ASIA) (Lauritzen and Spiegelhalter, 1988). Right: nodes within one potential are grouped together, showing that potentials share at most one node.

First of all, notice that the sum over k and l can be restricted to overlapping cliques (k ∩ l ≠ ∅) and that i must be in either k or l or both (i ∈ k ∪ l). Denote by n_k the number of cliques that have at least one variable in common with clique k, and denote n_overlap = max_k n_k. Then the sum over k and l contains not more than n_i n_overlap terms. Each term is an average over the union of two cliques, which can be worst case of size 2c - 1 (when only one variable is shared). However, since ⟨Δφ_k Δφ_l⟩_{s_i} = ⟨⟨Δφ_k⟩_{k∩l} Δφ_l⟩_{s_i} (where ⟨·⟩_{k∩l} means expectation with respect to q conditioned on the variables in k ∩ l), we can precompute ⟨Δφ_k⟩_{k∩l} for all pairs of overlapping cliques k, l, for all states in k ∩ l. Therefore, the worst case complexity of the second order term is less than n_i n_overlap exp(c).
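As an aside, the single-overlap property illustrated in Figure 1 (every pair of potentials sharing at most one node) is easy to check mechanically. A small sketch with toy clique sets of our own choosing:

```python
from itertools import combinations

def is_single_overlap(cliques):
    """True if every pair of potential domains shares at most one variable."""
    return all(len(a & b) <= 1 for a, b in combinations(cliques, 2))

# Toy potential domains: a chain-like DAG where consecutive potentials
# share exactly one node (single overlap) ...
chain = [{'A', 'T'}, {'T', 'E'}, {'E', 'X'}, {'E', 'D', 'B'}]
# ... versus a pair of potentials sharing two nodes (not single overlap).
dense = [{'A', 'B', 'C'}, {'B', 'C', 'D'}]
```

The variable names only mimic the ASIA node labels; the actual ASIA potential domains are given by the model's parent sets.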
Thus, we see that the second order method has the same exponential complexity as the first order method, but with a different polynomial prefactor. Therefore, the first or second order method can be applied to directed graphical models as long as the number of parents is reasonably small. The fact that the second order term has a worse complexity than the first order term is in contrast to Boltzmann machines, in which the TAP approximation has the same complexity as the standard mean field approximation. This phenomenon also occurs for a special class of DAGs, which we call single-overlap graphs. These are graphs in which the potentials φ_k share at most one node. Figure 1 shows an example of a single-overlap graph. For single-overlap graphs, we can use the first order result Eq. 9 to simplify the second order correction. The derivation is rather tedious and we just present the result:

q(s_i) = (1/Z_i) exp( ⟨log p(x)⟩_{s_i} + ½ Σ_{l, i∈l} ( ⟨(Δφ_l)²⟩_{s_i} - ⟨Δφ_l⟩²_{s_i} ) - ½ Σ_{k≠l} ⟨⟨ ⟨Δφ_k⟩_{s_i} Δφ_l ⟩⟩_{s_i} ),   (12)

which has a complexity that is of order n_i(c - 1) exp(c). For probability distributions with many small potentials that share nodes with many other potentials, Eq. 12 is more efficient than Eq. 11. For instance, for Boltzmann Machines n_i = n_overlap = n - 1 and c = 2. In this case, Eq. 12 is identical to the TAP equations (Thouless et al., 1977).

4 Sigmoid belief networks

In this section, we consider sigmoid belief networks as an interesting class of directed graphical models. The reason is that one can expand in terms of the couplings instead of the potentials, which is more efficient. The sigmoid belief network is defined as

p(x) = ∏_i σ(x_i h_i(x)),   (13)

Figure 2: Interpretation of the different interaction terms appearing in Eq. 16. The open and shaded nodes are hidden and evidence nodes, respectively (except in (a), where k can be any node). Solid arrows indicate the graphical structure in the network. Dashed arrows indicate interaction terms that appear in Eq. 16.
where σ(x) = (1 + exp(-2x))⁻¹, x_i = ±1, and h_i is the local field: h_i(x) = Σ_j w_ij x_j + θ_i. We separate the variables in evidence variables e and hidden variables s: x = (s, e). When couplings from hidden nodes to either hidden or evidence nodes are zero, w_ij = 0 for i ∈ e, s and j ∈ s, the probability distributions p(s|e) and p(e) reduce to

p(s|e) → q(s) = ∏_{i∈s} σ(s_i ḡ_i)   (14)
p(e) → r(e) = ∏_{i∈e} σ(e_i ḡ_i),   (15)

where ḡ_i = Σ_{j∈e} w_ij e_j + θ_i depends on the evidence. We expand to second order around this tractable distribution and obtain

m_i = tanh( Σ_{k∈s,e} w_ik m_k + θ_i + 2 Σ_{k∈e} r(-e_k) e_k w_ki - m_i Σ_{k∈s} (1 - m_k²) w_ik²
 + 4 m_i Σ_{k∈e} r(e_k) r(-e_k) w_ki² - 4 Σ_{k∈e, l∈s} r(e_k) r(-e_k) m_l w_kl w_ki
 + 2 Σ_{k∈s, l∈e} (1 - m_k²) r(-e_l) e_l w_lk w_ki ),   (16)

with m_i = ⟨s_i⟩_q ≈ ⟨s_i⟩_p and r given by Eq. 15. The different terms that appear in this equation can be easily interpreted. The first term describes the lowest order forward influence on node i from its parents. Parents can be either evidence or hidden nodes (fig. 2a). The second term is the bias θ_i. The third term describes to lowest order the effect of Bayes' rule: it affects m_i such that the observed evidence on its children becomes most probable (fig. 2b). Note that this term is absent when the evidence is explained by the evidence nodes themselves: r(e_k) = 1. The fourth and fifth terms are the quadratic contributions to the first and third terms, respectively. The sixth term describes 'explaining away'. It describes the effect of hidden node l on node i, when both have a common observed child k (fig. 2c). The last term describes the effect on node i when its grandchild is observed (fig. 2d). Note that these equations are different from Eq. 10. When one applies Eq. 10 to sigmoid belief networks, one requires additional approximations to compute ⟨log σ(x_i h_i)⟩ (Saul et al., 1996).

Figure 3: Second order approximation for a fully connected sigmoid belief network of n nodes. (a) Nodes 1, ..., n₁ are hidden (white) and nodes n₁ + 1, ..., n are clamped (grey), n₁ = n/2. (b) CPU time for exact inference (dashed) and the second order approximation (solid) versus n₁ (J = 0.5). (c) RMS of the exact hidden-node marginals (solid) and RMS error of the second order approximation (dashed) versus coupling strength J (n₁ = 10).

Since only feed-forward connections are present, one can order the nodes such that w_ij = 0 for i < j. Then the first order mean field equations can be solved in one single sweep starting with node 1. The full second order equations can be solved by iteration, starting with the first order solution.

5 Numerical results

We illustrate the theory with two toy problems. The first one is inference in Lauritzen's chest clinic model (ASIA), defined on 8 binary variables x = {A, T, S, L, B, E, X, D} (see figure 1 and (Lauritzen and Spiegelhalter, 1988) for more details about the model). We computed exact marginals and approximate marginals using the approximating methods up to first (Eq. 10) and second order (Eq. 11), respectively. The approximate marginals are determined by sequential iteration of (10) and (11), starting at q(x_i) = 0.5 for all variables i. The maximal error in the marginals using the first and second order method is 0.213 and 0.061, respectively. We verified that the single-overlap expression Eq. 12 gave similar results. In fig. 3, we assess the accuracy and CPU time of the second order approximation Eq. 16 for sigmoid belief networks. We generate random fully connected sigmoid belief networks with w_ij drawn from a normal distribution with mean zero and variance J²/n, and θ_i = 0. We observe in fig. 3b that the computation time is very fast: for n₁ = 500, we have obtained convergence in 37 seconds on a Pentium 300 MHz processor.
The accuracy of the method depends on the size of the weights and is computed for a network of n₁ = 10 (fig. 3c). In (Kappen and Wiegerinck, 2001), we compare this approach to Saul's variational approach (Saul et al., 1996) and show that our approach is much faster and slightly more accurate.

6 Discussion

In this paper, we computed a second order mean field approximation for directed graphical models. We show that the second order approximation gives a significant improvement over the first order result. The method does not use explicitly that the graph is directed. Therefore, the result is equally valid for Markov graphs. The complexity of the first and second order approximation is O(n_i exp(c)) and O(n_i n_overlap exp(c)), respectively, with c the number of variables in the largest potential. For single-overlap graphs, one can rewrite the second order equation such that the computational complexity reduces to O(n_i(c - 1) exp(c)). Boltzmann machines and the ASIA network are examples of single-overlap graphs. For large c, additional approximations are required, as was proposed by (Saul et al., 1996) for the first order mean field equations. It is evident that such additional approximations are then also required for the second order mean field equations. It has been reported (Barber and Wiegerinck, 1999; Wiegerinck and Kappen, 1999) that similar numerical improvements can be obtained by using a very different approach, which is to use an approximating distribution q that is not factorized, but still tractable. A promising way to proceed is therefore to combine both approaches and to do a second order expansion around a manifold of non-factorized yet tractable distributions. In this approach the sufficient statistics of the tractable structure are expanded, rather than the marginal probabilities.

Acknowledgments

This research was supported in part by the Dutch Technology Foundation (STW).

References

Barber, D. and Wiegerinck, W. (1999).
Tractable variational structures for approximating graphical models. In Kearns, M., Solla, S., and Cohn, D., editors, Advances in Neural Information Processing Systems, volume 11, pages 183-189. MIT Press.
Kappen, H. and Rodriguez, F. (1998). Efficient learning in Boltzmann Machines using linear response theory. Neural Computation, 10:1137-1156.
Kappen, H. and Spanjers, J. (1999). Mean field theory for asymmetric neural networks. Physical Review E, 61:5658-5663.
Kappen, H. and Wiegerinck, W. (2001). Mean field theory for graphical models. In Saad, D. and Opper, M., editors, Advanced Mean Field Theory. MIT Press.
Lauritzen, S. and Spiegelhalter, D. (1988). Local computations with probabilities on graphical structures and their application to expert systems. J. Royal Statistical Society B, 50:154-227.
Plefka, T. (1982). Convergence condition of the TAP equation for the infinite-range Ising spin glass model. Journal of Physics A, 15:1971-1978.
Saul, L., Jaakkola, T., and Jordan, M. (1996). Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4:61-76.
Thouless, D., Anderson, P., and Palmer, R. (1977). Solution of 'Solvable Model of a Spin Glass'. Phil. Mag., 35:593-601.
Wiegerinck, W. and Kappen, H. (1999). Approximations of Bayesian networks through KL minimisation. New Generation Computing, 18:167-175.
Feature Selection for SVMs

J. Weston†, S. Mukherjee††, O. Chapelle*, M. Pontil††, T. Poggio††, V. Vapnik*,†††
† Barnhill BioInformatics.com, Savannah, Georgia, USA. †† CBCL MIT, Cambridge, Massachusetts, USA. * AT&T Research Laboratories, Red Bank, USA. ††† Royal Holloway, University of London, Egham, Surrey, UK.

Abstract

We introduce a method of feature selection for Support Vector Machines. The method is based upon finding those features which minimize bounds on the leave-one-out error. This search can be efficiently performed via gradient descent. The resulting algorithms are shown to be superior to some standard feature selection algorithms on both toy data and real-life problems of face recognition, pedestrian detection and analyzing DNA microarray data.

1 Introduction

In many supervised learning problems feature selection is important for a variety of reasons: generalization performance, running time requirements, and constraints and interpretational issues imposed by the problem itself. In classification problems we are given ℓ data points x_i ∈ ℝⁿ labeled y ∈ ±1 drawn i.i.d. from a probability distribution P(x, y). We would like to select a subset of features while preserving or improving the discriminative ability of a classifier. As a brute force search of all possible features is a combinatorial problem, one needs to take into account both the quality of solution and the computational expense of any given algorithm. Support vector machines (SVMs) have been extensively used as a classification tool with a great deal of success, from object recognition [5, 11] to classification of cancer morphologies [10] and a variety of other areas, see e.g. [13]. In this article we introduce feature selection algorithms for SVMs. The methods are based on minimizing generalization bounds via gradient descent and are feasible to compute.
This allows several new possibilities: one can speed up time critical applications (e.g. object recognition) and one can perform feature discovery (e.g. cancer diagnosis). We also show how SVMs can perform badly in the situation of many irrelevant features, a problem which is remedied by using our feature selection approach. The article is organized as follows. In section 2 we describe the feature selection problem, in section 3 we review SVMs and some of their generalization bounds, and in section 4 we introduce the new SVM feature selection method. Section 5 then describes results on toy and real life data indicating the usefulness of our approach.

2 The Feature Selection problem

The feature selection problem can be addressed in the following two ways: (1) given a fixed m ≪ n, find the m features that give the smallest expected generalization error; or (2) given a maximum allowable generalization error γ, find the smallest m. In both of these problems the expected generalization error is of course unknown, and thus must be estimated. In this article we will consider problem (1). Note that choices of m in problem (1) can usually be reparameterized as choices of γ in problem (2). Problem (1) is formulated as follows. Given a fixed set of functions y = f(x, α) we wish to find a preprocessing of the data x ↦ (x ∗ σ), σ ∈ {0, 1}ⁿ, and the parameters α of the function f that give the minimum value of

τ(σ, α) = ∫ V(y, f((x ∗ σ), α)) dP(x, y)   (1)

subject to ‖σ‖₀ = m, where P(x, y) is unknown, x ∗ σ = (x₁σ₁, ..., x_nσ_n) denotes an elementwise product, V(·, ·) is a loss functional and ‖·‖₀ is the 0-norm. In the literature one distinguishes between two types of method to solve this problem: the so-called filter and wrapper methods [2]. Filter methods are defined as a preprocessing step to induction that can remove irrelevant attributes before induction occurs, and thus wish to be valid for any set of functions f(x, α).
For example, one popular filter method is to use Pearson correlation coefficients. The wrapper method, on the other hand, is defined as a search through the space of feature subsets using the estimated accuracy from an induction algorithm as a measure of goodness of a particular feature subset. Thus, one approximates τ(σ, α) by minimizing

τ_wrap(σ) = min_σ τ_alg(σ)   (2)

subject to σ ∈ {0, 1}ⁿ, where τ_alg is a learning algorithm trained on data preprocessed with fixed σ. Wrapper methods can provide more accurate solutions than filter methods [9], but in general are more computationally expensive since the induction algorithm τ_alg must be evaluated over each feature set (vector σ) considered, typically using performance on a hold-out set as a measure of goodness of fit. In this article we introduce a feature selection algorithm for SVMs that takes advantage of the performance increase of wrapper methods whilst avoiding their computational complexity. Note, some previous work on feature selection for SVMs does exist; however, results have been limited to linear kernels [3, 7] or linear probabilistic models [8]. Our approach can be applied to nonlinear problems. In order to describe this algorithm, we first review the SVM method and some of its properties.

3 Support Vector Learning

Support Vector Machines [13] realize the following idea: they map x ∈ ℝⁿ into a high (possibly infinite) dimensional space and construct an optimal hyperplane in this space. Different mappings x ↦ Φ(x) ∈ H construct different SVMs. The mapping Φ(·) is performed by a kernel function K(·, ·) which defines an inner product in H. The decision function given by an SVM is thus:

f(x) = w · Φ(x) + b = Σ_i α_i⁰ y_i K(x_i, x) + b.   (3)

The optimal hyperplane is the one with the maximal distance (in H space) to the closest image Φ(x_i) from the training data (called the maximal margin).
This reduces to maximizing the following optimization problem:

W²(α) = 2 Σ_{i=1}^ℓ α_i - Σ_{i,j=1}^ℓ α_i α_j y_i y_j K(x_i, x_j)   (4)

under constraints Σ_{i=1}^ℓ α_i y_i = 0 and α_i ≥ 0, i = 1, ..., ℓ. For the non-separable case one can quadratically penalize errors with the modified kernel K ← K + (1/λ)I, where I is the identity matrix and λ a constant penalizing the training errors (see [4] for reasons for this choice). Suppose that the size of the maximal margin is M and the images Φ(x₁), ..., Φ(x_ℓ) of the training vectors are within a sphere of radius R. Then the following holds true [13].

Theorem 1. If images of training data of size ℓ belonging to a sphere of size R are separable with the corresponding margin M, then the expectation of the error probability has the bound

E P_err ≤ (1/ℓ) E{ R²/M² } = (1/ℓ) E{ R² W²(α⁰) },   (5)

where expectation is taken over sets of training data of size ℓ.

This theorem justifies the idea that the performance depends on the ratio E{R²/M²} and not simply on the large margin M, where R is controlled by the mapping function Φ(·). Other bounds also exist; in particular, Vapnik and Chapelle [4] derived an estimate using the concept of the span of support vectors.

Theorem 2. Under the assumption that the set of support vectors does not change when removing the example p,

E p_err^{ℓ-1} ≤ (1/ℓ) E Σ_{p=1}^ℓ Ψ( α_p⁰ / (K_SV⁻¹)_pp - 1 ),   (6)

where Ψ is the step function, K_SV is the matrix of dot products between support vectors, p_err^{ℓ-1} is the probability of test error for the machine trained on a sample of size ℓ - 1, and the expectations are taken over the random choice of the sample.

4 Feature Selection for SVMs

In the problem of feature selection we wish to minimize equation (1) over σ and α. The support vector method attempts to find the function from the set f(x, w, b) = w · Φ(x) + b that minimizes generalization error. We first enlarge the set of functions considered by the algorithm to f(x, w, b, σ) = w · Φ(x ∗ σ) + b.
Note that the mapping Φ_σ(x) = Φ(x ∗ σ) can be represented by choosing the kernel function K_σ in equations (3) and (4):

K_σ(x, y) = K((x ∗ σ), (y ∗ σ)) = (Φ_σ(x) · Φ_σ(y))   (7)

for any K. Thus for these kernels the bounds in Theorems (1) and (2) still hold. Hence, to minimize τ(σ, α) over α and σ we minimize the wrapper functional τ_wrap in equation (2), where τ_alg is given by the equations (5) or (6), choosing a fixed value of σ implemented by the kernel (7). Using equation (5) one minimizes over σ:

R²W²(σ) = R²(σ) W²(α⁰, σ),   (8)

where the radius R for kernel K_σ can be computed by maximizing (see, e.g. [13])

R²(β) = Σ_i β_i K_σ(x_i, x_i) - Σ_{i,j} β_i β_j K_σ(x_i, x_j)   (9)

subject to Σ_i β_i = 1, β_i ≥ 0, i = 1, ..., ℓ, and W²(α⁰, σ) is defined by the maximum of functional (4) using kernel (7). In a similar way, one can minimize the span bound over σ instead of equation (8). Finding the minimum of R²W² over σ requires searching over all possible subsets of n features, which is a combinatorial problem. To avoid this problem, classical methods of search include greedily adding or removing features (forward or backward selection) and hill climbing. All of these methods are expensive to compute if n is large. As an alternative to these approaches we suggest the following method: approximate the binary valued vector σ ∈ {0, 1}ⁿ with a real valued vector σ ∈ ℝⁿ. Then, to find the optimum value of σ one can minimize R²W², or some other differentiable criterion, by gradient descent. As explained in [4] the derivative of our criterion is:

∂R²W²(σ)/∂σ_k = R²(σ) ∂W²(α⁰, σ)/∂σ_k + W²(α⁰, σ) ∂R²(σ)/∂σ_k   (10)
∂R²(σ)/∂σ_k = Σ_i β_i⁰ ∂K_σ(x_i, x_i)/∂σ_k - Σ_{i,j} β_i⁰ β_j⁰ ∂K_σ(x_i, x_j)/∂σ_k   (11)
∂W²(α⁰, σ)/∂σ_k = -Σ_{i,j} α_i⁰ α_j⁰ y_i y_j ∂K_σ(x_i, x_j)/∂σ_k   (12)

We estimate the minimum of τ(σ, α) by minimizing equation (8) in the space σ ∈ ℝⁿ using the gradients (10) with the following extra constraint, which approximates integer programming:

R²W²(σ) + λ Σ_{i=1}^n (σ_i)^p   (13)

subject to Σ_i σ_i = m, σ_i ≥ 0, i = 1, ..., n. For large enough λ, as p → 0 only m elements of σ will be nonzero, approximating optimization problem τ(σ, α).
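The scaled kernel K_σ of Eq. 7 is simple to implement: rescale both inputs elementwise by σ before applying any base kernel. A minimal sketch (the RBF base kernel and the particular σ are our illustrative choices):

```python
import math

def k_sigma(base_kernel, sigma):
    """Feature-scaled kernel of Eq. 7: K_sigma(x, y) = K(x*sigma, y*sigma)."""
    def scaled(x, y):
        xs = [xi * si for xi, si in zip(x, sigma)]
        ys = [yi * si for yi, si in zip(y, sigma)]
        return base_kernel(xs, ys)
    return scaled

def rbf(x, y, gamma=0.5):
    """A standard RBF base kernel (our illustrative choice)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

# Setting the third component of sigma to 0 removes that feature from the
# kernel entirely: points differing only there become indistinguishable.
k = k_sigma(rbf, [1.0, 1.0, 0.0])
```

This is what makes the σ-gradient a usable feature-selection signal: driving a component of σ toward zero smoothly removes the corresponding feature from the decision function.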
One can further simplify computations by considering a stepwise approximation procedure to find m features. To do this one can minimize R^2 W^2(\sigma) with \sigma unconstrained. One then sets the q \ll n smallest values of \sigma to zero, and repeats the minimization until only m nonzero elements of \sigma remain. This can mean repeatedly training an SVM just a few times, which can be fast.

5 Experiments

5.1 Toy data

We compared standard SVMs, our feature selection algorithms, and three classical filter methods to select features followed by SVM training. The three filter methods chose the m largest features according to: Pearson correlation coefficients, the Fisher criterion score(1), and the Kolmogorov-Smirnov test(2). The Pearson coefficients and Fisher criterion cannot model nonlinear dependencies. In the two following artificial datasets our objective was to assess the ability of the algorithm to select a small number of target features in the presence of irrelevant and redundant features.

(1) F(r) = \frac{|\mu_r^+ - \mu_r^-|}{\sigma_r^{+2} + \sigma_r^{-2}}, where \mu_r^\pm is the mean value of the r-th feature in the positive and negative classes and \sigma_r^{\pm} is the corresponding standard deviation.
(2) KS_{tst}(r) = \sqrt{\ell} \sup\left( \hat{P}\{X \le f_r\} - \hat{P}\{X \le f_r, y_r = 1\} \right), where f_r denotes the r-th feature from each training example, and \hat{P} is the corresponding empirical distribution.

Linear problem Six dimensions of 202 were relevant. The probability of y = 1 or -1 was equal. The first three features \{x_1, x_2, x_3\} were drawn as x_i = y N(i, 1) and the second three features \{x_4, x_5, x_6\} were drawn as x_i = N(0, 1) with a probability of 0.7; otherwise the first three were drawn as x_i = N(0, 1) and the second three as x_i = y N(i - 3, 1). The remaining features are noise, x_i = N(0, 20), i = 7, \dots, 202.

Nonlinear problem Two dimensions of 52 were relevant. The probability of y = 1 or -1 was equal.
The data are drawn from the following: if y = -1 then \{x_1, x_2\} are drawn from N(\mu_1, \Sigma) or N(\mu_2, \Sigma) with equal probability, with \mu_1 = \{-3/4, -3\}, \mu_2 = \{3/4, 3\} and \Sigma = I; if y = 1 then \{x_1, x_2\} are drawn again from two normal distributions with equal probability, with \mu_1 = \{3, -3\}, \mu_2 = \{-3, 3\} and the same \Sigma as before. The rest of the features are noise, x_i = N(0, 20), i = 3, \dots, 52.

In the linear problem the first six features have redundancy and the rest of the features are irrelevant. In the nonlinear problem all but the first two features are irrelevant.

We used a linear SVM for the linear problem and a second order polynomial kernel for the nonlinear problem. For the filter methods and the SVM with feature selection we selected the 2 best features. The results are shown in Figure (1) for various training set sizes, taking the average test error on 500 samples over 30 runs of each training set size. The Fisher score (not shown in graphs due to space constraints) performed almost identically to correlation coefficients. In both problems standard SVMs perform poorly: in the linear example using \ell = 500 points one obtains a test error of 13% for SVMs, which should be compared to a test error of 3% with \ell = 50 using our methods. Our SVM feature selection methods also outperformed the filter methods, with forward selection being marginally better than gradient descent. In the nonlinear problem, among the filter methods only the Kolmogorov-Smirnov test improved performance over standard SVMs.
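The Fisher-criterion filter compared above can be sketched in a few lines, following the formula in footnote (1); helper names are ours:

```python
# Fisher-criterion filter: score each feature r by |mu_r(+) - mu_r(-)| / (var_r(+) + var_r(-))
# and keep the m highest-scoring features. A sketch of the baseline, not the paper's code.
def fisher_scores(xs, ys):
    """xs: list of feature vectors; ys: list of labels in {+1, -1}."""
    n = len(xs[0])
    scores = []
    for r in range(n):
        pos = [x[r] for x, y in zip(xs, ys) if y == 1]
        neg = [x[r] for x, y in zip(xs, ys) if y == -1]
        mp, mn = sum(pos) / len(pos), sum(neg) / len(neg)
        vp = sum((v - mp) ** 2 for v in pos) / len(pos)
        vn = sum((v - mn) ** 2 for v in neg) / len(neg)
        scores.append(abs(mp - mn) / (vp + vn + 1e-12))  # small term avoids division by zero
    return scores

def select_m(scores, m):
    """Indices of the m largest scores."""
    return sorted(range(len(scores)), key=lambda r: -scores[r])[:m]

# Feature 0 separates the classes; feature 1 is pure noise.
xs = [[2.0, 0.3], [1.8, -0.1], [-2.1, 0.2], [-1.9, -0.4]]
ys = [1, 1, -1, -1]
print(select_m(fisher_scores(xs, ys), 1))  # -> [0]
```

As the text notes, a purely marginal score of this kind cannot model nonlinear dependencies between features, which is where the wrapper methods gain their advantage.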
Figure 1: A comparison of feature selection methods on (a) a linear problem and (b) a nonlinear problem, both with many irrelevant features. The x-axis is the number of training points, and the y-axis the test error as a fraction of test points. Methods compared: Span-Bound & Forward Selection, R^2W^2-Bound & Gradient, Standard SVMs, Correlation Coefficients, and the Kolmogorov-Smirnov Test.

5.2 Real-life data

For the following problems we compared minimizing R^2 W^2 via gradient descent to the Fisher criterion score.

Face detection The face detection experiments described in this section are for the system introduced in [12, 5]. The training set consisted of 2,429 positive images of frontal faces of size 19x19 and 13,229 negative images not containing faces. The test set consisted of 105 positive images and 2,000,000 negative images. A wavelet representation of these images [5] was used, which resulted in 1,740 coefficients for each image. Performance of the system using all coefficients, 725 coefficients, and 120 coefficients is shown in the ROC curve in Figure (2a). The best results were achieved using all features; however, R^2 W^2 outperformed the Fisher score. In this case feature selection was not useful for eliminating irrelevant features, but one could obtain a solution with comparable performance but reduced complexity, which could be important for time-critical applications.

Pedestrian detection The pedestrian detection experiments described in this section are for the system introduced in [11]. The training set consisted of 924 positive images of people of size 128x64 and 10,044 negative images not containing pedestrians. The test set consisted of 124 positive images and 800,000 negative images. A wavelet representation of these images [5, 11] was used, which resulted in 1,326 coefficients for each image. Performance of the system using all coefficients and 120 coefficients is shown in the ROC curve in Figure (2b). The results showed the same trends that were observed in the face recognition problem.
Figure 2: The solid line is using all features, the solid line with a circle is our feature selection method (minimizing R^2 W^2 by gradient descent), and the dotted line is the Fisher score. (a) The top ROC curves are for 725 features and the bottom ones for 120 features for face detection. (b) ROC curves using all features and 120 features for pedestrian detection. Both panels plot detection rate against false positive rate.

Cancer morphology classification For DNA microarray data analysis one needs to determine the relevant genes in discrimination as well as discriminate accurately. We look at two leukemia discrimination problems [6, 10] and a colon cancer problem [1] (see also [7] for a treatment of both of these problems).

The first problem was classifying myeloid and lymphoblastic leukemias based on the expression of 7,129 genes. The training set consists of 38 examples and the test set of 34 examples. Using all genes a linear SVM makes 1 error on the test set. Using 20 genes, 0 errors are made for R^2 W^2 and 3 errors are made using the Fisher score. Using 5 genes, 1 error is made for R^2 W^2 and 5 errors are made for the Fisher score. The method of [6] performs comparably to the Fisher score. The second problem was discriminating B versus T cells for lymphoblastic cells [6]. Standard linear SVMs make 1 error for this problem. Using 5 genes, 0 errors are made for R^2 W^2 and 3 errors are made using the Fisher score.

In the colon cancer problem [1], 62 tissue samples probed by oligonucleotide arrays contain 22 normal and 40 colon cancer tissues that must be discriminated based upon the expression of 2,000 genes. Splitting the data into a training set of 50 and a test set of 12 in 50 separate trials, we obtained a test error of 13% for standard linear SVMs. Taking 15 genes for each feature selection method, we obtained 12.8% for R^2 W^2, 17.0% for Pearson correlation coefficients, 19.3% for the Fisher score and 19.2% for the Kolmogorov-Smirnov test.
Our method is only worse than the best filter method in 8 of the 50 trials.

6 Conclusion

In this article we have introduced a method to perform feature selection for SVMs. This method is computationally feasible for high dimensional datasets compared to existing wrapper methods, and experiments on a variety of toy and real datasets show superior performance to the filter methods tried. This method, amongst other applications, speeds up SVMs for time-critical applications (e.g. pedestrian detection), and makes feature discovery possible (e.g. gene discovery). Secondly, in simple experiments we showed that SVMs can indeed suffer in high dimensional spaces where many features are irrelevant. Our method provides one way to circumvent this naturally occurring, complex problem.

References

[1] U. Alon, N. Barkai, D. Notterman, K. Gish, S. Ybarra, D. Mack, and A. Levine. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon cancer tissues probed by oligonucleotide arrays. Cell Biology, 96:6745-6750, 1999.
[2] A. Blum and P. Langley. Selection of relevant features and examples in machine learning. Artificial Intelligence, 97:245-271, 1997.
[3] P. S. Bradley and O. L. Mangasarian. Feature selection via concave minimization and support vector machines. In Proc. 13th International Conference on Machine Learning, pages 82-90, San Francisco, CA, 1998.
[4] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing kernel parameters for support vector machines. Machine Learning, 2000.
[5] T. Evgeniou, M. Pontil, C. Papageorgiou, and T. Poggio. Image representations for object detection using kernel classifiers. In Asian Conference on Computer Vision, 2000.
[6] T. Golub, D. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. Mesirov, H. Coller, M. Loh, J. Downing, M. Caligiuri, C. D. Bloomfield, and E. S. Lander. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science, 286:531-537, 1999.
[7] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik. Gene selection for cancer classification using support vector machines. Machine Learning, 2000.
[8] T. Jebara and T. Jaakkola. Feature selection and dualities in maximum entropy discrimination. In Uncertainty in Artificial Intelligence, 2000.
[9] R. Kohavi. Wrappers for feature subset selection. AI issue on relevance, 1995.
[10] S. Mukherjee, P. Tamayo, D. Slonim, A. Verri, T. Golub, J. Mesirov, and T. Poggio. Support vector machine classification of microarray data. AI Memo 1677, Massachusetts Institute of Technology, 1999.
[11] M. Oren, C. Papageorgiou, P. Sinha, E. Osuna, and T. Poggio. Pedestrian detection using wavelet templates. In Proc. Computer Vision and Pattern Recognition, pages 193-199, Puerto Rico, June 16-20, 1997.
[12] C. Papageorgiou, M. Oren, and T. Poggio. A general framework for object detection. In International Conference on Computer Vision, Bombay, India, January 1998.
[13] V. Vapnik. Statistical Learning Theory. John Wiley and Sons, New York, 1998.
2000
Direct Classification with Indirect Data

Timothy X Brown
Interdisciplinary Telecommunications Program
Dept. of Electrical and Computer Engineering
University of Colorado, Boulder, 80309-0530
timxb@colorado.edu

Abstract

We classify an input space according to the outputs of a real-valued function. The function is not given, but rather examples of the function. We contribute a consistent classifier that avoids the unnecessary complexity of estimating the function.

1 Introduction

In this paper, we consider a learning problem that combines elements of regression and classification. Suppose there exists an unknown real-valued property of the feature space, p(\phi), that maps from the feature space, \phi \in \mathbb{R}^n, to \mathbb{R}. The property function and a positive set A \subset \mathbb{R} define the desired classifier as follows:

C^*(\phi) = \begin{cases} +1 & \text{if } p(\phi) \in A \\ -1 & \text{otherwise} \end{cases}   (1)

Though p(\phi) is unknown, measurements, \mu, associated with p(\phi) at different features, \phi, are available in a data set X = \{(\phi_i, \mu_i)\} of size |X| = N. Each sample is i.i.d. with unknown distribution f(\phi, \mu). This data is indirect in that \mu may be an input to a sufficient statistic for estimating p(\phi) but in itself does not directly indicate C^*(\phi) in (1). Figure 1 gives a schematic of the problem.

Let C_X(\phi) be a decision function mapping from \mathbb{R}^n to \{-1, 1\} that is estimated from the data X. The estimator C_X(\phi) is consistent if

\lim_{|X| \to \infty} P\{C_X(\phi) \ne C^*(\phi)\} = 0,   (2)

where the probabilities are taken over the distribution f.

This problem arises in controlling data networks that provide quality of service guarantees such as a maximum packet loss rate [1]-[8]. A data network occasionally drops packets due to congestion. The loss rate depends on the traffic carried by the network (i.e. the network state). The network cannot measure the loss rate directly, but can collect data on the observed number of packets sent and lost at different network states.
Thus, the feature space, \phi, is the network state; the property function, p(\phi), is the underlying loss rate; the measurements, \mu, are the observed packet losses; the positive set, A, is the set of loss rates less than the maximum loss rate; and the distribution, f, follows from the arrival and departure processes of the traffic sources. In words, this application seeks a consistent estimator of when the network can and cannot meet the packet loss rate guarantee based on observations of the network losses. Over time, the network can automatically collect a large set of observations so that consistency guarantees the classifier will be accurate.

Figure 1: The classification problem. The classifier indicates whether an unknown function, p(\phi), is within a set of interest, A. The learner is only given the data "x".

Previous authors have approached this problem. In [6, 7], the authors estimate the property function from X as \hat{p}(\phi) and then classify via

C(\phi) = \begin{cases} +1 & \text{if } \hat{p}(\phi) \in A \\ -1 & \text{otherwise.} \end{cases}   (3)

The approach suffers two related disadvantages. First, an accurate estimate of the property function may require many more parameters than the corresponding classifier, in which only the decision boundary is important. Second, the regression requires many samples over the entire range of \phi to be accurate, while the fewer parameters in the classifier may require fewer samples for the same accuracy.

A second approach, used in [4, 5, 8], makes a single-sample estimate, \hat{p}(\phi_i), from \mu_i and estimates the desired output class as

o_i = \begin{cases} +1 & \text{if } \hat{p}(\phi_i) \in A \\ -1 & \text{otherwise.} \end{cases}   (4)

This forms a training set Y = \{\phi_i, o_i\} for standard classification. This was shown to lead to an inconsistent estimator in the data network application in [1].

This paper builds on earlier results by the author specific to the packet network problem [1, 2, 3] and defines a general framework for mapping the indirect data into a standard supervised learning task.
It defines conditions on the training set, classifier, and learning objective to yield consistency. The paper defines specific methods based on these results and provides examples of their application.

2 Estimator at a Single Feature

In this section, we consider a single feature vector \phi and imagine that we can collect as much monitoring data as we like at \phi. We show that a consistent estimator of the property function, p(\phi), yields a consistent estimator of the optimal classification, C^*(\phi), without directly estimating the property function. These results are a basis for the next section, where we develop a consistent classifier over the entire feature space even if every \phi_i in the data set is distinct.

Given the data set X = \{\phi, \mu_i\}, we hypothesize that there is a mapping from data set to training set Y = \{\phi, w_i, o_i\} such that |X| = |Y| and

C_X(\phi) = \mathrm{sign}\left( \sum_{i=1}^{|X|} w_i o_i \right)   (5)

is consistent in the sense of (2). The w_i and o_i are both functions of \mu_i, but for simplicity we will not explicitly denote this. Do any mappings from X to Y yield consistent estimators of the form (5)?

We consider only thresholds on p(\phi). That is, sets A of the form A = [-\infty, \tau) (or similarly A = (\tau, \infty]) for some threshold \tau. Since most practical sets can be formed from finite unions, intersections, and complements of sets in this form, this is sufficient. Consider an estimator \hat{p}_X that has the form

\hat{p}_X(\phi) = \frac{\sum_{i=1}^{|X|} \beta(\mu_i)}{\sum_{i=1}^{|X|} \alpha(\mu_i)}   (6)

for some functions \alpha > 0 and \beta. Suppose that \hat{p}_X is a consistent estimator of p(\phi), i.e. for every \epsilon > 0:

\lim_{|X| \to \infty} P\{|\hat{p}_X - p(\phi)| > \epsilon\} = 0.   (7)

For threshold sets such as A = [-\infty, \tau), we can use (6) to construct the classifier:

C_X(\phi) = \mathrm{sign}(\tau - \hat{p}_X(\phi)) = \mathrm{sign}\left( \sum_{i=1}^{|X|} (\alpha(\mu_i)\tau - \beta(\mu_i)) \right)   (8)

where

w_i = |\alpha(\mu_i)\tau - \beta(\mu_i)|   (9)
o_i = \mathrm{sign}(\alpha(\mu_i)\tau - \beta(\mu_i))   (10)

so that C_X(\phi) = \mathrm{sign}(\sum_{i=1}^{|X|} w_i o_i). If |\tau - p(\phi)| = \epsilon then the above estimator can be incorrect only if |\hat{p}_X - p(\phi)| > \epsilon. The consistency in (7) guarantees that (8)-(10) is consistent if \epsilon > 0.
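The weighted vote in (8)-(10) can be checked numerically for the simplest case \alpha(\mu_i) = 1, \beta(\mu_i) = \mu_i, where it must agree with comparing the sample mean against \tau (function names are ours):

```python
# Check that sign(sum_i w_i o_i) with w_i = |tau - mu_i|, o_i = sign(tau - mu_i)
# equals sign(tau - mean(mu)), as equations (8)-(10) imply when alpha = 1, beta = mu_i.
def sign(v):
    return 1 if v >= 0 else -1

def weighted_vote(mus, tau):
    return sign(sum(abs(tau - m) * sign(tau - m) for m in mus))

def mean_threshold(mus, tau):
    return sign(tau - sum(mus) / len(mus))

mus = [0.2, 0.9, 0.4, 0.1]
tau = 0.5
print(weighted_vote(mus, tau), mean_threshold(mus, tau))  # both +1: mean 0.4 < tau
```

The agreement holds because |\tau - \mu_i| \cdot \mathrm{sign}(\tau - \mu_i) = \tau - \mu_i, so the vote is just the sum of residuals around \tau.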
The simplest example of (6) is when \mu_i is a noisy unbiased sample of p(\phi_i). The natural estimator is just the average of all the \mu_i, i.e. \alpha(\mu_i) = 1 and \beta(\mu_i) = \mu_i. In this case, w_i = |\tau - \mu_i| and o_i = \mathrm{sign}(\tau - \mu_i). A less trivial example will be given later in the application section of the paper.

We now describe a range of objective functions for evaluating a classifier C(\phi; \theta) parameterized by \theta and show a correspondence between the objective minimum and (5). Consider the class of weighted L-norm objective functions (L > 0):

J(X, \theta) = \left( \sum_{i=1}^{|X|} w_i |C(\phi_i; \theta) - o_i|^L \right)^{1/L}   (11)

Let the \theta that minimizes this be denoted \hat{\theta}(X). Let

C_X(\phi) = C(\phi; \hat{\theta}(X))   (12)

For a single \phi, C(\phi; \theta) is a constant +1 or -1. We can simply try each value and see which is the minimum to find C_X(\phi). This is carried out in [3] where we show:

Theorem 1 When C(\phi; \theta) is a constant over X then the C_X(\phi) defined by (11) and (12) is equal to the C_X(\phi) defined by (5).

The definition in (5) is independent of L. So, we can choose any L-norm as convenient without changing the solution. This follows since (11) is essentially a weighted count of the errors; the L-norm has no significant effect.

This section has shown how regression estimators such as (6) can be mapped via (9) and (10) and the objective (11) to a consistent classifier at a single feature. The next section considers general classifiers.

3 Classification over All Features

This section addresses the question of whether there exists any general approach to supervised learning that leads to a consistent estimator across the feature space. Several considerations are important. First, not all feature vectors, \phi, are relevant. Some \phi may have zero probability associated with them from the distribution f(\phi, \mu). Such \phi we denote as unsupported. The optimal and learned classifier can differ on unsupported feature vectors without affecting consistency. Second, the classifier function C(\phi, \theta) may not be able to represent the consistent estimator.
For instance, a linear classifier may never yield a consistent estimator if the optimal classifier's, C^*(\phi), decision boundary is non-linear. Classifier functions that can represent the optimal classifier for all supported feature vectors we denote as representative. Third, the optimal classifier is discontinuous at the decision boundary. A classifier that considers any small region around a feature on the decision boundary will have both positive and negative samples. In general, the resulting classifier could be +1 or -1 without regard to the underlying optimal classifier at these points, and consistency cannot be guaranteed. These considerations are made more precise in Appendix A. Taking these considerations into account and defining w_i and o_i as in (9) and (10), we get the following theorem:

Theorem 2 If the classifier (5) is a consistent estimator for every supported non-boundary \phi, and C(\phi; \theta) is representative, then the \hat{\theta}(X) that minimizes (11) yields a consistent classifier over all supported \phi not on the decision boundary.

Theorem 2 tells us that we can get consistency across the feature space. This result is proved in Appendix A.

4 Application

This section provides an application of the results to better illustrate the methodology. For brevity, we include only a simple stylized example (see [3] for a more realistic application). We describe first how the data is created, then the form of the consistent estimator, and then the actual application of the learning method.

The feature space is one dimensional with \phi uniformly distributed in (3, 9). The underlying property function is p(\phi) = 10^{-\phi}. The measurement data is generated as follows. For a given \phi_i, s_i is the number of successes in T_i = 10^5 Bernoulli trials with success probability p(\phi_i). The monitoring data is thus \mu_i = (s_i, T_i). The positive set is A = (0, \tau) with \tau = 10^{-6}, and |X| = 1000 samples.
As described in Section 1, this kind of data appears in packet networks where the underlying packet loss rate is unknown and the only monitoring data is the number of packets dropped out of T_i trials. The Bernoulli trial successes correspond to dropped packets. The feature vector represents data collected concurrently that indicates the network state. Thus the classifier can decide when the network will and will not meet a packet loss rate guarantee.

Figure 2: Monitoring data, true property function, and learned classifiers in the loss-rate classification application. The monitoring data is shown as sample loss rate as a function of feature vector. Sample loss rates of zero are arbitrarily set to 10^{-9} for display purposes. The true loss rate is the underlying property function. The consistent and sample-based classifier results are shown as a range of thresholds on the feature. An x and y error range is plotted as a box. The x error range is the 10th and 90th percentile of 1000 experiments. This is mapped via the underlying property function to a y-error range. The consistent classifier finds thresholds around the true value. The sample-based is off by a factor of 7.

Figure 2 shows a sample of data. A consistent estimator in the form of (6) is:

\hat{p}_X = \frac{\sum_i s_i}{\sum_i T_i},   (13)

which corresponds to \alpha(\mu_i) = T_i and \beta(\mu_i) = s_i. Defining w_i and o_i as in (9) and (10), the classifier for our data set is the threshold on the feature space that minimizes (11). This classifier is representative since p(\phi) is monotonic. The results are shown in Figure 2 and labeled "consistent". This paper's methods find a threshold on the feature that closely corresponds to the \tau = 10^{-6} threshold.
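The whole application can be simulated in a few lines. The sketch below follows the setup above (p(\phi) = 10^{-\phi}, T = 10^5, \tau = 10^{-6}), approximating each binomial draw by a Poisson draw since p is small; all function names and step choices are ours, not the paper's:

```python
# Simulation of the loss-rate application: build the weighted training set via (9)-(10)
# with alpha = T_i, beta = s_i from (13), then pick the feature threshold minimizing the
# weighted objective (11). A sketch under stated assumptions, not the paper's code.
import math, random

def poisson(rng, lam):
    # Knuth's method; adequate for lam up to a few hundred
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(1)
T, tau = 10**5, 1e-6
samples = []
for _ in range(1000):
    phi = rng.uniform(3.0, 9.0)
    s = poisson(rng, T * 10**-phi)   # observed number of lost packets
    samples.append((phi, s))

# Weighted training set: o_i = +1 means "loss rate below tau".
data = [(phi, abs(T * tau - s), 1 if T * tau - s > 0 else -1) for phi, s in samples]

def weighted_error(t):
    # classify phi > t as +1, since the loss rate decreases with phi
    return sum(w for phi, w, o in data if (1 if phi > t else -1) != o)

best_t = min((phi for phi, _, _ in data), key=weighted_error)
# best_t should land near phi = 6, where p(phi) crosses tau = 10**-6
```

Note how samples with s_i = 0 get the small weight T\tau = 0.1 while a single observed loss already outvotes several loss-free samples, which is exactly what makes the aggregate consistent where the unweighted single-sample rule is not.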
As a comparison we also include a classifier that uses w_i = 1 for all i and sets o_i via the single-sample estimate \hat{p}(\phi_i) = s_i / T_i, as in (4). The results are labeled "sample-based". This method misses the desired threshold by a factor of 7.

This application shows the features of the paper's methods. The classifier is a simple threshold with one parameter. Estimating p(\phi) to derive a classifier required tens of parameters in [6, 7]. The results are consistent, unlike the approaches in [4, 5, 8].

5 Conclusion

This paper has shown that using indirect data we can define a classifier that directly uses the data without any intermediate estimate of the underlying property function. The classifier is consistent and yields a simpler learning problem. The approach was demonstrated on a problem from telecommunications. Practical details such as choosing the form of the parametric classifier, C(\phi; \theta), or how to find the global minimum of the objective function (11) are outside the scope of this paper.

Figure 3: A classifier C(\phi; \theta) and the optimal classifier C^*(\phi) create four different sets in feature space: where they agree and are positive; where they agree and are negative; where they disagree and C^*(\phi) = +1 (false negatives); and where they disagree and C^*(\phi) = -1 (false positives).

A Appendix: Consistency of Supervised Learning

This appendix proves that certain natural conditions on a supervised learner lead to a consistent classifier (Theorem 2). First we need to formally define several concepts. Since the feature space is real, it is a metric space with measure m. A feature vector \phi is supported by the distribution f if every neighborhood around \phi has positive probability.
A feature vector \phi is on the decision boundary if in every neighborhood around \phi there exist supported \phi', \phi'' such that C^*(\phi') \ne C^*(\phi''). A classifier function C(\phi; \theta) is representative if there exists a \theta^* such that C(\phi; \theta^*) = C^*(\phi) for all supported, non-boundary \phi. Parameters \theta and \theta' are equivalent if for all supported, non-boundary \phi, C(\phi; \theta) = C(\phi; \theta').

Given a \theta, it is either equivalent to \theta^* or there are supported, non-boundary \phi where C(\phi; \theta) is not equal to the optimal classifier, as in Figure 3. We will show that for any \theta not equivalent to \theta^*,

\lim_{|X| \to \infty} P\{J(X, \theta) \le J(X, \theta^*)\} = 0.   (14)

In other words, such a \theta cannot be the minimum of the objective in (11), and so only a \theta equivalent to \theta^* is a possible minimum.

To prove Theorem 2, we need to introduce a further condition. An estimator of the form (5) has uniformly bounded variance if \mathrm{Var}(w_i) < B for some fixed B < \infty for all \phi. Let E[w(\phi)o(\phi)] = e(\phi) be the expected weighted desired output for independent samples at \phi, where the expectation is from f(\mu|\phi). To start, we note that if (5) is consistent, then:

\mathrm{sign}(e(\phi)) = C^*(\phi)   (15)

for all non-boundary states. Looking at Figure 3, let us focus on the false negative set minus the optimal decision boundary; call this \Phi. From (15), e(\phi) is positive for every \phi \in \Phi. Let x be the probability measure of \Phi. Define the set \Phi_\epsilon = \{\phi \mid \phi \in \Phi \text{ and } e(\phi) \ge \epsilon\}. Let x_\epsilon be the probability measure of \Phi_\epsilon. Choose \epsilon > 0 so that x_\epsilon > 0.

The proof is straightforward from here and we omit some details. With \theta, C(\phi; \theta) = -1 for all \phi \in \Phi. With \theta^*, C(\phi; \theta^*) = +1 for all \phi \in \Phi. Since the minimum of a constant objective function satisfies (5), we would incorrectly choose \theta if

\lim_{|X| \to \infty} \sum_{i=1}^{|X|} w_i o_i < 0.

For the false negatives, the expected number of examples in \Phi and \Phi_\epsilon is x|X| and x_\epsilon|X|.
By the definition of \Phi_\epsilon and the bounded variance of the weight, we get that

E\left[\sum_{i=1}^{|X|} w_i o_i\right] \ge \epsilon x_\epsilon |X|   (16)

\mathrm{Var}\left[\sum_{i=1}^{|X|} w_i o_i\right] < B x |X|.   (17)

Since the expected value grows linearly with the sample size and the standard deviation with the square root of the sample size, as |X| \to \infty the weighted sum will with probability one be positive. Thus, as the sample size grows, +1 will minimize the objective function for the set of false negative samples, and the decision boundary from \theta^* will minimize the objective. The same argument applied to the false positives shows that \theta^* will minimize the false positives with probability one. Thus \theta^* will be chosen with probability one and the theorem is shown.

Acknowledgments

This work was supported by NSF CAREER Award NCR-9624791.

References

[1] Brown, T.X (1995) Classifying loss rates with small samples, Proc. Inter. Workshop on Appl. of NN to Telecom (pp. 153-161). Hillsdale, NJ: Erlbaum.
[2] Brown, T.X (1997) Adaptive access control applied to ethernet data, Advances in Neural Information Processing Systems, 9 (pp. 932-938). MIT Press.
[3] Brown, T.X (1999) Classifying loss rates in broadband networks, INFOCOMM '99 (v. 1, pp. 361-370). Piscataway, NJ: IEEE.
[4] Estrella, A.D., et al. (1994). New training pattern selection method for ATM call admission neural control, Elec. Let., v. 30, n. 7, pp. 577-579.
[5] Hiramatsu, A. (1990). ATM communications network control by neural networks, IEEE T. on Neural Networks, v. 1, n. 1, pp. 122-130.
[6] Hiramatsu, A. (1995). Training techniques for neural network applications in ATM, IEEE Comm. Mag., October, pp. 58-67.
[7] Tong, H., Brown, T.X (1998). Estimating loss rates in an integrated services network by neural networks, Proc. of Global Telecommunications Conference (GLOBECOM 98) (v. 1, pp. 19-24). Piscataway, NJ: IEEE.
[8] Tran-Gia, P., Gropp, O. (1992). Performance of a neural net used as admission controller in ATM systems, Proc. GLOBECOM 92 (pp. 1303-1309).
Piscataway, NJ: IEEE.
2000
Constrained Independent Component Analysis

Wei Lu and Jagath C. Rajapakse
School of Computer Engineering
Nanyang Technological University, Singapore 639798
email: asjagath@ntu.edu.sg

Abstract

The paper presents a novel technique of constrained independent component analysis (CICA) to introduce constraints into the classical ICA and solve the constrained optimization problem by using Lagrange multiplier methods. This paper shows that CICA can be used to order the resulting independent components in a specific manner and to normalize the demixing matrix in the signal separation procedure. It can systematically eliminate ICA's indeterminacy on permutation and dilation. The experiments demonstrate the use of CICA in ordering of independent components while providing normalized demixing processes.

Keywords: Independent component analysis, constrained independent component analysis, constrained optimization, Lagrange multiplier methods

1 Introduction

Independent component analysis (ICA) is a technique to transform a multivariate random signal into a signal with components that are mutually independent in the complete statistical sense [1]. There has been a growing interest in research for efficient realization of ICA neural networks (ICNNs). These neural algorithms provide adaptive solutions to satisfy independence conditions after the convergence of learning [2, 3, 4].

However, ICA only defines the directions of independent components. The magnitudes of independent components and the norms of the demixing matrix may still vary. Also, the order of the resulting components is arbitrary. In general, ICA has such an inherent indeterminacy on dilation and permutation. Such indeterminacy cannot be reduced further without additional assumptions and constraints [5].
Therefore, constrained independent component analysis (CICA) is proposed as a way to provide a unique ICA solution with certain characteristics on the output by introducing constraints:

• To avoid the arbitrary ordering on output components: statistical measures give indices to sort them in order, and evenly highlight the salient signals.
• To produce unity transform operators: normalization of the demixing channels reduces the dilation effect on resulting components. It may recover the exact original sources.

With such conditions applied, the ICA problem becomes a constrained optimization problem. In the present paper, Lagrange multiplier methods are adopted to provide an adaptive solution to this problem. It can be well implemented as an iterative updating system of neural networks, referred to as ICNNs.

The next section briefly gives an introduction to the problem, analysis, and solution of Lagrange multiplier methods. Then the basic concept of ICA is stated, and Lagrange multiplier methods are utilized to develop a systematic approach to CICA. Simulations are performed to demonstrate the usefulness of the analytical results and indicate the improvements due to the constraints.

2 Lagrange Multiplier Methods

Lagrange multiplier methods introduce Lagrange multipliers to resolve a constrained optimization iteratively. A penalty parameter is also introduced so that the local convexity assumption holds at the solution. Lagrange multiplier methods can handle problems with both equality and inequality constraints. The constrained nonlinear optimization problems that Lagrange multiplier methods deal with take the following general form:

minimize f(X), subject to g(X) \le 0, h(X) = 0   (1)

where X is a matrix or a vector of the problem arguments, f(X) is an objective function, g(X) = [g_1(X) \cdots g_m(X)]^T defines a set of m inequality constraints, and h(X) = [h_1(X) \cdots h_n(X)]^T defines a set of n equality constraints.
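The general scheme can be illustrated on a one-dimensional instance of (1): minimize f(x) = x^2 subject to h(x) = x - 1 = 0, using gradient steps on an augmented Lagrangian with the multiplier update \lambda \leftarrow \lambda + \gamma h(x). This is a sketch; the step sizes and iteration counts are our choices, not the paper's:

```python
# Toy augmented-Lagrangian iteration: minimize x**2 subject to x - 1 = 0 via
# L(x, lam) = x**2 + lam*(x - 1) + (gamma/2)*(x - 1)**2.
# Inner gradient descent approximates the x-update; the multiplier update follows.
gamma, lam, x = 1.0, 0.0, 0.0
for _ in range(200):
    for _ in range(50):                       # inner descent on x
        grad = 2 * x + lam + gamma * (x - 1)  # dL/dx
        x -= 0.1 * grad
    lam += gamma * (x - 1)                    # multiplier update
# x converges to the constrained optimum x* = 1 (with lam -> -2)
```

The multiplier absorbs the constraint force, so convergence does not require sending the penalty \gamma to infinity, which is the practical advantage of augmented Lagrangians over pure penalty methods.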
Because Lagrangian methods cannot directly handle inequality constraints g_i(X) ≤ 0, the inequality constraints are transformed into equality constraints by introducing a vector of slack variables z = [z_1 ... z_m]^T, giving p_i(X) = g_i(X) + z_i^2 = 0, i = 1, ..., m. Based on this transformation, the corresponding simplified augmented Lagrangian function for problem (1) is defined as:

L(X, μ, λ) = f(X) + (1/2γ) Σ_{i=1}^m { [max{0, ĝ_i(X)}]^2 − μ_i^2 } + λ^T h(X) + (γ/2) ||h(X)||^2    (2)

where μ = [μ_1 ... μ_m]^T and λ = [λ_1 ... λ_n]^T are two sets of Lagrange multipliers, γ is the scalar penalty parameter, ĝ_i(X) equals μ_i + γ g_i(X), ||·|| denotes the Euclidean norm, and (γ/2)||·||^2 is the penalty term ensuring that the local convexity assumption ∇²_XX L > 0 holds at the solution. We use the augmented Lagrangian function in this paper because it gives wider applicability and provides better stability [6]. For discrete problems, the changes in the augmented Lagrangian function can be defined as Δ_X L(X, μ, λ) to achieve the saddle point in the discrete variable space. The iterative equations to solve the problem in eq. (2) are given as follows:

X(k+1) = X(k) − Δ_X L(X(k), μ(k), λ(k))
μ(k+1) = μ(k) + γ p(X(k)) = max{0, ĝ(X(k))}
λ(k+1) = λ(k) + γ h(X(k))    (3)

where k denotes the iteration index and ĝ(X(k)) = μ(k) + γ g(X(k)). 3 Unconstrained ICA Let the time-varying input signal be x = (x_1, x_2, ..., x_N)^T and the signal of interest, consisting of independent components (ICs), be c = (c_1, c_2, ..., c_M)^T, with generally M ≤ N. The signal x is considered a linear mixture of the independent components c: x = Ac, where A is an N × M mixing matrix with full column rank. The goal of general ICA is to obtain a linear M × N demixing matrix W to recover the independent components c with minimal knowledge of A and c; normally M = N. The recovered components u are then given by u = Wx. 
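The iterative scheme of eq. (3) can be illustrated on a small example. The sketch below is our own toy problem (the objective, constraint, step size, and penalty parameter are illustrative choices, not from the paper): it alternates a gradient step on an augmented Lagrangian with a multiplier update.

```python
# A minimal sketch of an augmented-Lagrangian iteration in the spirit of
# eq. (3), on a toy problem of our own choosing (not from the paper):
# minimize f(x) = x^2 subject to h(x) = x - 1 = 0.
def solve(gamma=1.0, step=0.1, iters=2000):
    x, lam = 0.0, 0.0
    for _ in range(iters):
        # gradient of L(x, lam) = x^2 + lam*(x - 1) + (gamma/2)*(x - 1)^2
        grad = 2.0 * x + lam + gamma * (x - 1.0)
        x -= step * grad            # primal descent step, cf. X(k+1)
        lam += gamma * (x - 1.0)    # multiplier update, cf. lambda(k+1)
    return x, lam

x_star, lam_star = solve()
print(round(x_star, 3), round(lam_star, 3))  # approaches x = 1, lam = -2
```

At the constrained optimum the stationarity condition 2x + λ = 0 with x = 1 gives λ = −2, which the iteration recovers.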
In the present paper, the contrast function used is the mutual information M of the output signal, defined in terms of entropies to measure independence:

M(u) = Σ_i H(u_i) − H(u)    (4)

where H(u_i) is the marginal entropy of component u_i and H(u) is the joint entropy of the output. M is non-negative and equals zero when the components are completely independent. Minimizing M yields the learning equation for the demixing matrix W performing ICA [1]:

ΔW ∝ W^{-T} + Φ(u) x^T    (5)

where W^{-T} is the transpose of the inverse matrix W^{-1} and Φ(u) is a nonlinear function depending on the activation functions of the neurons or the p.d.f. of the sources [1]. Under the above assumptions, the exact components c are indeterminate because of possible dilation and permutation: the independent components, the columns of A, and the rows of W can only be estimated up to a multiplicative constant. The definition of normal ICA implies no ordering of the independent components [5]. 4 Constrained ICA In practice, the ordering of independent components is quite important for separating non-stationary signals or signals of interest with significant statistical characteristics. Eliminating the indeterminacy of permutation and dilation is useful to produce a unique ICA solution with systematically ordered signals and a normalized demixing matrix. This section presents an approach to CICA that enhances the classical ICA procedure using Lagrange multiplier methods to obtain unique ICs. 4.1 Ordering of Independent Components The independent components are ordered in a descending manner according to a statistical measure, defined as an index I(u). The constrained optimization problem for CICA is then defined as follows:

minimize Mutual Information M(W), subject to g(W) ≤ 0, g(W) = [g_1(W) ··· g_{M−1}(W)]^T    (6)

where g(W) is a set of (M − 1) inequality constraints, g_i(W) = I(u_{i+1}) − I(u_i) defines the descending order, and I(u_i) is an index of some statistical measure of output component u_i, e.g. variance or normalized kurtosis. 
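As a concrete illustration of such an ordering index, the sketch below (our own toy signals, not the paper's experiment) computes the normalized kurtosis E{u^4}/E{u^2}^2 − 3 for three signals and sorts them in the descending order that the constraints g_i(W) = I(u_{i+1}) − I(u_i) ≤ 0 enforce.

```python
# Sketch of a kurtosis-based ordering index. The signal choices (Laplace,
# Gaussian, uniform) are our own illustrative examples.
import numpy as np

def kurtosis_index(u):
    """Normalized (excess) kurtosis: E{u^4}/E{u^2}^2 - 3."""
    u = u - u.mean()
    return (u ** 4).mean() / (u ** 2).mean() ** 2 - 3.0

rng = np.random.default_rng(0)
laplace = rng.laplace(size=100_000)       # super-Gaussian: index > 0
gauss = rng.normal(size=100_000)          # Gaussian:       index ~ 0
uniform = rng.uniform(-1, 1, 100_000)     # sub-Gaussian:   index < 0

indices = {name: kurtosis_index(s) for name, s in
           [("laplace", laplace), ("gauss", gauss), ("uniform", uniform)]}
# Descending index order: sparse (super-Gaussian) first, dense last.
ordered = sorted(indices, key=indices.get, reverse=True)
print(ordered)
```

The resulting order (Laplace, Gaussian, uniform) matches the super-/sub-Gaussian categorization discussed in the text.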
Using Lagrange multiplier methods, the augmented Lagrangian function is defined based on eq. (2) as:

L(W, μ) = M(W) + (1/2γ) Σ_{i=1}^{M−1} { [max{0, ĝ_i(W)}]^2 − μ_i^2 }    (7)

With a discrete solution applied, the change of an individual element w_ij is formulated by minimizing eq. (7):

Δw_ij ∝ Δ_{w_ij} L(W(k), μ(k)) = Δ_{w_ij} M(W(k)) + [max{0, ĝ_{i−1}(W(k))} − max{0, ĝ_i(W(k))}] I'(u_i(k)) x_j    (8)

where I'(·) is the first derivative of the index measure. The iterative equation for the individual multipliers μ_i is

μ_i(k+1) = max{0, μ_i(k) + γ [I(u_{i+1}(k)) − I(u_i(k))]}    (9)

With the learning equation of the normal ICNN given in (5) and the multipliers' iterative equation (9), the iterative procedure to determine the demixing matrix W is:

ΔW ∝ Δ_W L(W, μ) = W^{-T} + ν(u) x^T    (10)

where
ν_1(u) = φ_1(u_1) − μ_1 I'(u_1)
ν_2(u) = φ_2(u_2) + (μ_1 − μ_2) I'(u_2)
...
ν_{M−1}(u) = φ_{M−1}(u_{M−1}) + (μ_{M−2} − μ_{M−1}) I'(u_{M−1})
ν_M(u) = φ_M(u_M) + μ_{M−1} I'(u_M)

We apply variance and kurtosis as example measures for imposing an ordering on the signals. The index functions I and their corresponding first derivatives I' are then:

variance: I_var(u_i) = E{u_i^2}, I'_var(u_i) = 2 E{u_i}    (11)
kurtosis: I_kur(u_i) = E{u_i^4} / E{u_i^2}^2 − 3, I'_kur(u_i) = 4 u_i^3 / E{u_i^2}^2 − 4 E{u_i^4} u_i / E{u_i^2}^3    (12)

The signal with the largest variance carries the majority of the information contained in the input signals; ordering by variance sorts the components by the amount of information needed to reconstruct the original signals. However, it should be used together with other preprocessing or constraints, such as PCA or normalization, because normal ICA's indeterminacy of the demixing-matrix dilation may amplify or attenuate the variance of the output components. Normalized kurtosis is a fourth-order statistical measure; the kurtosis of a stationary signal to be extracted is constant under the indeterminacy of the signals' amplitudes. Kurtosis captures higher-order statistical structure: any signal can be categorized as super-Gaussian, Gaussian, or sub-Gaussian by its kurtosis. 
The components are thus ordered from sparse (super-Gaussian) to dense (sub-Gaussian) distributions. Kurtosis has been widely used to produce one-unit ICA [7]; in contrast to such sequential extraction, our approach can extract and order the components in parallel. 4.2 Normalization of the Demixing Matrix The definition of ICA implies an indeterminacy in the norms of the mixing and demixing matrices, in contrast to, e.g., PCA. Rather than estimating the unknown mixing matrix A, the rows of the demixing matrix W can be normalized by adding a constraint term to the ICA energy function, establishing a normalized demixing channel. The constrained ICA problem is then defined as follows:

minimize Mutual Information M(W), subject to h(W) = [h_1(W) ··· h_M(W)]^T = 0    (13)

where h(W) defines a set of M equality constraints h_i(w_i) = w_i^T w_i − 1 (i = 1, ..., M), which constrain the row norms of the demixing matrix W to equal 1. Using Lagrange multiplier methods, the augmented Lagrangian function is defined based on eq. (2) as:

L(W, λ) = M(W) + λ^T diag[W W^T − I] + (γ/2) || diag[W W^T − I] ||^2    (14)

where diag[·] denotes the operation selecting the diagonal elements of a square matrix as a vector. Applying the discrete Lagrange multiplier method, the iterative equation minimizing the augmented function for an individual multiplier λ_i is

λ_i(k+1) = λ_i(k) + γ (w_i^T w_i − 1)    (15)

and the iterative equation of the demixing matrix W is given as follows:

ΔW ∝ Δ_W L(W, λ) = W^{-T} + Φ(u) x^T + Ω(W), where Ω_i(w_i) = 2 λ_i w_i    (16)

Assume c is a normalized source with unit variance such that E{c c^T} = I, and the input signal x is processed by a prewhitening matrix P such that the whitened signal x̃ = Px obeys E{x̃ x̃^T} = I. Then, with the normalized demixing matrix W, the network output u contains the exact independent components with unit magnitude, i.e., each u_i equals ±c_j for some non-duplicative assignment j → i. 5 Experiments and Results The CICA algorithms were simulated in MATLAB version 5. 
The learning procedure ran for 500 iterations with a fixed learning rate. All signals were preprocessed by whitening to have zero mean and uniform variance. The accuracy of the recovered components relative to the source components was measured by the signal-to-noise ratio (SNR) in dB, where the signal power was measured by the variance of the source component and the noise was the mean squared error between the source and the recovered component. The performance of the network in separating the signals into ICs was measured by an individual performance index (IPI) of the permutation error E_i for the ith output:

E_i = Σ_j |p_ij| / max_j |p_ij| − 1    (17)

where p_ij are the elements of the permutation matrix P = WA. The IPI is close to zero when the corresponding output is nearly independent of the other components. 5.1 Ordering ICs in Signal Separation Three independent random signals with Gaussian, sub-Gaussian, and super-Gaussian distributions were simulated. Their statistical configurations were similar to those used in [1]. These source signals c were mixed with a random matrix to derive the inputs to the network. The networks were trained to obtain the 3 × 3 demixing matrix using the kurtosis-constrained CICA algorithm of eqs. (10) and (12) to separate the three independent components in the complete ICA manner. The source components, mixed input signals, and resulting output waveforms are shown in figure 1 (a), (b) and (c), respectively. Figure 1: Result of extraction of one super-Gaussian, one Gaussian and one sub-Gaussian signal in kurtosis-descending order. Normalized kurtosis measurements are κ4(y1) = 32.82, κ4(y2) = −0.02 and κ4(y3) = −1.27. (a) Source components, (b) input mixtures and (c) resulting components. The network separated and sorted the output components in decreasing order of kurtosis: component y1 had kurtosis 32.82 (> 0, super-Gaussian), y2 had −0.02 (≈ 0, Gaussian) and y3 had −1.27 (< 0, sub-Gaussian). The final performance index value of 0.28 and the output components' average SNR of 15 dB show that all three independent components were also well separated. 5.2 Demixing Matrix Normalization Three deterministic signals and one Gaussian noise signal were simulated in this experiment. All signals were independently generated with unit variance and mixed with a random mixing matrix. All input mixtures were preprocessed by whitening to have zero mean and unit variance. The signals were separated using both unconstrained ICA and constrained ICA, as given by eqs. (5) and (16) respectively. Table 1 compares the resulting demixing matrices, row norms, variances of the separated components, and SNR values.

Table 1: Comparison of the demixing matrix elements, row norms, output variances and resulting components' SNR values for ICA and for CICA with normalization.

             y     Demixing Matrix W             Norms  Variance   SNR
  uncons.    y1    0.90  0.08  -0.12  -0.82      1.23     1.50     4.55
  ICA        y2   -0.06  1.11  -0.07   0.07      1.11     1.24    10.88
             y3    0.07  0.07   1.47  -0.09      1.47     2.17    21.58
             y4    1.04  0.08   0.04   1.16      1.56     2.43    16.60
  cons.      y1    0.65  0.43  -0.02  -0.61      0.99     0.98     4.95
  ICA        y2   -0.37  0.91   0.05   0.20      1.01     1.02    13.94
             y3    0.01 -0.04   1.00  -0.04      1.00     1.00    25.04
             y4    0.65  0.07   0.02   0.76      1.00     1.00    22.56

The dilation effect can be seen in the differences among the components' variances caused by the non-normalized demixing matrix in unconstrained ICA. The CICA algorithm with the normalization constraint normalized the rows of the demixing matrix and separated the components with variances remaining at unity; the source signals are therefore recovered exactly, without any dilation. The increase in the separated components' SNR values using CICA can also be seen in the table. 
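The evaluation measures used in these comparisons can be sketched as follows. This is our own reconstruction of the SNR and permutation-index definitions given above; the test signals are illustrative.

```python
# Sketch of the evaluation measures from Section 5 (our own reconstruction):
# SNR in dB with signal power = variance of the source and noise = MSE, and
# a per-output permutation index from a row of P = WA that is 0 for a clean
# separation (exactly one nonzero entry in the row).
import numpy as np

def snr_db(source, recovered):
    noise = np.mean((source - recovered) ** 2)
    return 10.0 * np.log10(np.var(source) / noise)

def permutation_index(p_row):
    a = np.abs(p_row)
    return a.sum() / a.max() - 1.0

src = np.sin(np.linspace(0.0, 20.0, 1000))
print(round(permutation_index(np.array([0.0, 2.0, 0.0])), 3))  # clean row -> 0.0
print(snr_db(src, src + 0.01) > 30.0)                          # small error -> high SNR
```

A row of P with several comparable entries would give an index well above zero, signalling residual mixing.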
The source components, input mixtures, and components separated using normalization are shown in figure 2, which shows that the signals resulting from CICA exactly match the source signals in both waveform and amplitude. Figure 2: (a) Four source deterministic components with unit variances, (b) mixture inputs and (c) resulting components through the normalized demixing channel W. 6 Conclusion We presented an approach to constrained ICA using Lagrange multiplier methods to eliminate the indeterminacy of permutation and dilation present in classical ICA. Our results provide a technique for systematically enhancing ICA's usability and performance using constraints not restricted to the conditions treated in this paper. More useful constraints can be considered in similar manners to further improve the outputs of ICA in other practical applications. Simulation results demonstrate the accuracy and usefulness of the proposed algorithms. References [1] Jagath C. Rajapakse and Wei Lu. Unified approach to independent component networks. In Second International ICSC Symposium on Neural Computation (NC'2000), 2000. [2] A. Bell and T. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7:1129-1159, 1995. [3] S. Amari, A. Cichocki, and H. Yang. A new learning algorithm for blind signal separation. In Advances in Neural Information Processing Systems 8, 1996. [4] T.-W. Lee, M. Girolami, and T. Sejnowski. Independent component analysis using an extended infomax algorithm for mixed sub-Gaussian and super-Gaussian sources. Neural Computation, 11(2):409-433, 1999. [5] P. Comon. Independent component analysis: A new concept? Signal Processing, 36:287-314, 1994. [6] Dimitri P. Bertsekas. Constrained Optimization and Lagrange Multiplier Methods. New York: Academic Press, 1982. [7] A. Hyvärinen and Erkki Oja. Simple neuron models for independent component analysis. 
International Journal of Neural Systems, 7(6):671-687, December 1996.
2000
78
1,881
A Gradient-Based Boosting Algorithm for Regression Problems Richard S. Zemel Toniann Pitassi Department of Computer Science University of Toronto Abstract In adaptive boosting, several weak learners trained sequentially are combined to boost the overall algorithm performance. Recently, adaptive boosting methods for classification problems have been derived as gradient descent algorithms. This formulation justifies key elements and parameters in the methods, all chosen to optimize a single common objective function. We propose an analogous formulation for adaptive boosting of regression problems, utilizing a novel objective function that leads to a simple boosting algorithm. We prove that this method reduces training error, and compare its performance to other regression methods. The aim of boosting algorithms is to "boost" the small advantage that a hypothesis produced by a weak learner can achieve over random guessing, by using the weak learning procedure several times on a sequence of carefully constructed distributions. Boosting methods, notably AdaBoost (Freund & Schapire, 1997), are simple yet powerful algorithms that are easy to implement and yield excellent results in practice. Two crucial elements of boosting algorithms are the way in which a new distribution is constructed for the learning procedure to produce the next hypothesis in the sequence, and the way in which hypotheses are combined to produce a highly accurate output. Both of these involve a set of parameters whose values appeared to be determined in an ad hoc manner. Recently, boosting algorithms have been derived as gradient descent algorithms (Breiman, 1997; Schapire & Singer, 1998; Friedman et al., 1999; Mason et al., 1999). These formulations justify the parameter values as all serving to optimize a single common objective function. These optimization formulations of boosting, originally developed for classification problems, have recently been applied to regression problems. 
However, key properties of these regression boosting methods deviate significantly from the classification boosting approach. We propose a new boosting algorithm for regression problems, also derived from a central objective function, which retains these properties. In this paper, we describe the original boosting algorithm and summarize boosting methods for regression. We present our method and provide a simple proof that elucidates conditions under which convergence on training error can be guaranteed. We propose a probabilistic framework that clarifies the relationship between various optimization-based boosting methods. Finally, we summarize empirical comparisons between our method and others on some standard problems. 1 A Brief Summary of Boosting Methods Adaptive boosting methods are simple modular algorithms that operate as follows. Let g : X → Y be the function to be learned, where the label set Y is finite, typically binary-valued. The algorithm uses a learning procedure, which has access to n training examples, {(x_1, y_1), ..., (x_n, y_n)}, drawn randomly from X × Y according to distribution D; it outputs a hypothesis f : X → Y, whose error is the expected value of a loss function on f(x), g(x), where x is chosen according to D. Given ε, δ > 0 and access to random examples, a strong learning procedure outputs, with probability 1 − δ, a hypothesis with error at most ε, in running time polynomial in 1/ε, 1/δ and the number of examples. A weak learning procedure satisfies the same conditions, but its error need only be slightly better than random guessing. Schapire (1990) showed that any weak learning procedure, denoted WeakLearn, can be efficiently transformed ("boosted") into a strong learning procedure. The AdaBoost algorithm achieves this by calling WeakLearn multiple times, in a sequence of T stages, each time presenting it with a different distribution over a fixed training set, and finally combining all of the hypotheses. 
The algorithm maintains a weight w_t^i for each training example i at stage t, and a distribution D_t is computed by normalizing these weights. The algorithm loops through these steps: 1. At stage t, the distribution D_t is given to WeakLearn, which generates a hypothesis f_t. The error rate ε_t of f_t w.r.t. D_t is: ε_t = Σ_{i: f_t(x^i) ≠ y^i} w_t^i / Σ_{i=1}^n w_t^i. 2. The new training distribution is obtained from the new weights: w_{t+1}^i = w_t^i · (ε_t / (1 − ε_t))^{1 − |f_t(x^i) − y^i|}. After T stages, a test example x is classified by a combined weighted-majority hypothesis: ŷ = sgn(Σ_{t=1}^T c_t f_t(x)). Each combination coefficient c_t = log((1 − ε_t)/ε_t) takes into account the accuracy of hypothesis f_t with respect to its distribution. The optimization approach derives these equations as all minimizing a common objective function J, the expected error of the combined hypotheses, estimated from the training set. The new hypothesis is the step in function space in the direction of steepest descent of this objective. For example, if J = Σ_{i=1}^n exp(−Σ_t y^i c_t f_t(x^i)), then the cost after T rounds is the cost after T − 1 rounds times the cost of hypothesis f_T:

J(T) = Σ_{i=1}^n exp(−Σ_{t=1}^{T−1} y^i c_t f_t(x^i)) exp(−y^i c_T f_T(x^i))

so training f_T to minimize J(T) amounts to minimizing the cost on a weighted training distribution. Similarly, the training distribution is formed by normalizing the updated weights: w_{t+1}^i = w_t^i · exp(−y^i c_t f_t(x^i)) = w_t^i · exp(s_t^i c_t), where s_t^i = 1 if f_t(x^i) ≠ y^i, else s_t^i = −1. Note that because the objective function J is multiplicative in the costs of the hypotheses, a key property follows: the objective for each hypothesis is formed simply by re-weighting the training distribution. This boosting algorithm applies to binary classification problems, but it does not readily generalize to regression problems. Intuitively, regression problems present special difficulties because hypotheses may not just be right or wrong, but can be a little wrong or very wrong. 
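A single round of these updates can be checked by hand. The sketch below uses our own toy data and a fixed weak hypothesis (not the paper's experiments) to apply the error-rate, coefficient, and weight-update formulas above; with ±1 labels we use a correctness indicator in place of |f(x^i) − y^i|.

```python
# One round of the AdaBoost updates above, on a tiny hand-checkable example.
import numpy as np

y    = np.array([-1, -1,  1,  1,  1, -1])   # targets
pred = np.array([-1, -1,  1,  1,  1,  1])   # weak hypothesis f_1: one mistake
w    = np.ones(6) / 6                       # initial uniform weights

eps = w[pred != y].sum()                    # eps_1 = 1/6
c   = np.log((1 - eps) / eps)               # c_1 = log 5
w   = w * np.where(pred == y, eps / (1 - eps), 1.0)  # down-weight correct cases
w   = w / w.sum()                           # normalize to get D_2

print(round(eps, 4), round(w[-1], 2))       # 0.1667, misclassified point -> 0.5
```

After one round the single misclassified example carries half the total weight, so the next weak learner is forced to attend to it.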
Recently a spate of clever optimization-based boosting methods have been proposed for regression (Duffy & Helmbold, 2000; Friedman, 1999; Karakoulas & Shawe-Taylor, 1999; Rätsch et al., 2000). While these methods involve diverse objectives and optimization approaches, they are alike in that new hypotheses are formed not by simply changing the example weights, but instead by modifying the target values. As such they can be viewed as forms of forward stage-wise additive models (Hastie & Tibshirani, 1990), which produce hypotheses sequentially to reduce the residual error. We study a simple example of this approach, in which hypothesis T is trained not to produce the target output y^i on a given case i, but instead to fit the current residual r_T^i = y^i − Σ_{t=1}^{T−1} c_t f_t(x^i). Note that this approach develops a series of hypotheses all based on optimizing a common objective, but it deviates from standard boosting in that the distribution of examples is not used to control the generation of hypotheses, and each hypothesis is not trained to learn the same function. 2 An Objective Function for Boosting Regression Problems We derive a boosting algorithm for regression from a different objective function. This algorithm is similar to the original classification boosting method in that the objective is multiplicative in the hypotheses' costs, which means that the target outputs are not altered after each stage; rather, the objective for each hypothesis is formed simply by re-weighting the training distribution. The objective function is:

J_T = (1/n) Σ_{i=1}^n (Π_{t=1}^T c_t^{-1/2}) exp[Σ_{t=1}^T c_t (f_t(x^i) − y^i)^2]    (1)

Here, training hypothesis T to minimize J_T, the cost after T stages, amounts to minimizing the exponentiated squared error of a weighted training distribution:

Σ_{i=1}^n w_T^i (c_T^{-1/2} exp[c_T (f_T(x^i) − y^i)^2])

We update each weight by multiplying it by its respective error, and form the training distribution for the next hypothesis by normalizing these updated weights. 
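The residual-fitting scheme just described can be sketched as follows. This is our own illustration, with regression stumps as weak learners and c_t = 1; neither choice is from the paper.

```python
# Sketch of residual fitting: each new hypothesis is fit to the current
# residual r = y - sum_t c_t f_t(x), with c_t = 1 for simplicity.
import numpy as np

def fit_stump(X, r):
    """Best two-piece constant regressor for residual r (least squares)."""
    best = None
    for t in X:
        left, right = r[X <= t], r[X > t]
        rm = right.mean() if len(right) else 0.0
        pred = np.where(X <= t, left.mean(), rm)
        sse = ((r - pred) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), rm)
    _, t, a, b = best
    return lambda x: np.where(x <= t, a, b)

X = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * X)
F = np.zeros_like(y)
errors = []
for _ in range(10):
    h = fit_stump(X, y - F)   # fit the current residual
    F = F + h(X)              # add the new hypothesis with c_t = 1
    errors.append(((y - F) ** 2).mean())
print(errors[0] > errors[-1])  # training error decreases over the stages
```

Because the stump class contains the constant predictor, each stage's squared error is guaranteed not to increase, mirroring the stage-wise error reduction of forward additive models.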
In the standard AdaBoost algorithm, the combination coefficient c_t can be determined analytically by solving ∂J_t/∂c_t = 0 for c_t. Unfortunately, one cannot analytically determine the combination coefficient c_t in our algorithm, but a simple line search can be used to find the value of c_t that minimizes the cost J_t. We limit c_t to be between 0 and 1. Finally, optimizing J with respect to y produces a simple linear combination rule for the estimate: ŷ = Σ_t c_t f_t(x) / Σ_t c_t. We introduce a constant τ as a threshold used to demarcate correct from incorrect responses. This threshold is the single parameter of the algorithm that must be chosen in a problem-dependent manner. It is used to judge when the performance of a new hypothesis warrants its inclusion: ε_t = Σ_i p_t^i exp[(f_t(x^i) − y^i)^2 − τ] < 1. The algorithm can be summarized as follows:

New Boosting Algorithm
1. Input: training set examples (x^1, y^1), ..., (x^n, y^n) with y ∈ ℝ; WeakLearn, a learning procedure that produces a hypothesis f_t(x) whose accuracy on the training set is judged according to J
2. Choose the initial distribution p_1^i = w_1^i = 1/n
3. Iterate:
• Call WeakLearn — minimize J_t with distribution p_t
• Accept iff ε_t = Σ_i p_t^i exp[(f_t(x^i) − y^i)^2 − τ] < 1
• Set 0 ≤ c_t ≤ 1 to minimize J_t (using line search)
• Update the training distribution: w_{t+1}^i = w_t^i c_t^{-1/2} exp[c_t (f_t(x^i) − y^i)^2], p_{t+1}^i = w_{t+1}^i / Σ_{j=1}^n w_{t+1}^j
4. Estimate the output y on input x: ŷ = Σ_t c_t f_t(x) / Σ_t c_t

3 Proof of Convergence Theorem: Assume that for all t ≤ T, hypothesis t makes error ε_t on its distribution. If the combined output ŷ is considered to be in error iff (ŷ − y)^2 > τ, then the output of the boosting algorithm (after T stages) will have error at most E, where

E = P[(ŷ^i − y^i)^2 > τ] ≤ Π_{t=1}^T ε_t exp[τ(T − Σ_{t=1}^T c_t)]

Proof: We follow the approach used in the AdaBoost proof (Freund & Schapire, 1997). 
We show that the sum of the weights at stage T is bounded above by a constant times the product of the ε_t's, while at the same time, for each input i that is incorrect, its corresponding weight at stage T is significant. First,

Σ_i w_{T+1}^i ≤ Π_{t=1}^T c_t^{-1/2} exp(τ) ε_t

The inequality holds because 0 ≤ c_t ≤ 1. We now compute the new weights:

Σ_t c_t (f_t(x^i) − y^i)^2 = [Σ_t c_t][Var(f^i) + (ŷ^i − y^i)^2] ≥ [Σ_t c_t][(ŷ^i − y^i)^2]

where ŷ^i = Σ_t c_t f_t(x^i) / Σ_t c_t and Var(f^i) = Σ_t c_t (f_t(x^i) − ŷ^i)^2 / Σ_t c_t. Thus,

w_{T+1}^i = (Π_{t=1}^T c_t^{-1/2}) exp(Σ_{t=1}^T c_t (f_t(x^i) − y^i)^2) ≥ (Π_{t=1}^T c_t^{-1/2}) exp([Σ_{t=1}^T c_t][(ŷ^i − y^i)^2])

Now consider an example input k for which the final answer is an error. Then, by definition, (ŷ^k − y^k)^2 > τ, so w_{T+1}^k ≥ (Π_t c_t^{-1/2}) exp(τ Σ_t c_t). If E is the total error rate of the combined output, then:

Σ_i w_{T+1}^i ≥ Σ_{k: k error} w_{T+1}^k ≥ E (Π_t c_t^{-1/2}) exp(τ Σ_t c_t)

E ≤ (Σ_i w_{T+1}^i)(Π_{t=1}^T c_t^{1/2}) exp[−τ Σ_t c_t] ≤ Π_{t=1}^T ε_t exp[τ(T − Σ_t c_t)]

Note that, as in the binary AdaBoost theorem, no assumptions are made here about ε_t, the error rate of the individual hypotheses. If all ε_t = Δ < 1, then E ≤ Δ^T exp[τ(T − Σ_t c_t)], which is exponentially decreasing as long as c_t → 1. 4 Comparing the Objectives We can compare the objectives by adopting a probabilistic framework. We associate a probability distribution with the output of each hypothesis on input x, and combine them to form a consensus model M by multiplying the distributions: g(y|x, M) ≡ Π_t p_t(y|x, θ_t), where θ_t are parameters specific to hypothesis t. If each hypothesis t produces a single output f_t(x) and has confidence c_t assigned to it, then p_t(y|x, θ_t) can be considered a Gaussian with mean f_t(x) and variance 1/c_t:

g(y|x, M) = k [Π_t c_t^{1/2}] exp[−Σ_t c_t (y − f_t(x))^2]

Model parameters can be tuned to maximize g(y*|x, M), where y* is the target for x; our objective (Eq. 1) is the expected value of the reciprocal of g(y*|x, M). 
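The product-of-Gaussians combination above can be verified numerically. In this sketch (toy numbers of our own choosing, using the standard N(f_t, 1/c_t) exponent −c_t(y − f_t)²/2), multiplying the expert densities yields a Gaussian whose mean is the c-weighted average of the f_t and whose inverse variance is Σ_t c_t:

```python
# Closed-form combination of Gaussian experts vs. a numerical check.
import numpy as np

f = np.array([1.0, 3.0])          # expert outputs f_t(x) (toy values)
c = np.array([1.0, 3.0])          # confidences c_t, i.e. inverse variances

mean = (c * f).sum() / c.sum()    # combined mean: sum(c_t f_t) / sum(c_t)
inv_var = c.sum()                 # combined inverse variance: sum(c_t)

# Numerical check: normalize the product density on a fine grid.
y = np.linspace(-10.0, 10.0, 20001)
dy = y[1] - y[0]
prod = np.exp(-0.5 * (((y[:, None] - f) ** 2) * c).sum(axis=1))
prod /= prod.sum() * dy           # Riemann-sum normalization to a density
num_mean = (y * prod).sum() * dy
num_var = ((y - num_mean) ** 2 * prod).sum() * dy

print(mean, inv_var)              # 2.5 4.0
print(round(num_mean, 3), round(1.0 / num_var, 3))
```

The grid-based estimate agrees with the closed form, confirming that the consensus model concentrates around the confidence-weighted mean.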
An alternative objective can be derived by first normalizing g(y|x, M):

p(y|x, M) = g(y|x, M) / ∫_{y'} g(y'|x, M) dy' = Π_t p_t(y|x, θ_t) / ∫_{y'} Π_t p_t(y'|x, θ_t) dy'

This probability model underlies the product-of-experts model (Hinton, 2000) and the logarithmic opinion pool (Bordley, 1982). If we again assume p_t(y|x, θ_t) ~ N(f_t(x), c_t^{-1}), then p(y|x, M) is a Gaussian with mean f̄(x) = Σ_t (c_t/c̄) f_t(x) and inverse variance c̄ = Σ_t c_t. The objective for this model is:

J_R = −log p(y*|x, M) = c̄ [y* − f̄(x)]^2 − ½ log c̄    (2)

This objective corresponds to a type of residual-fitting algorithm: if r(x) = y* − f̄(x), and {c_t} for t < T are assumed frozen, then training f_T to minimize J_R is achieved by using r(x) as a target. These objectives can be further compared w.r.t. a bias-variance decomposition (Geman et al., 1992; Heskes, 1998). The main term in our objective can be re-expressed:

Σ_t c_t [y* − f_t(x)]^2 = Σ_t c_t [y* − f̄(x)]^2 + Σ_t c_t [f_t(x) − f̄(x)]^2 = bias + variance

Meanwhile, the main term of J_R corresponds to the bias term. Hence a new hypothesis can minimize J_R by having low error (f_T(x) = y*) or by giving a deviant (ambiguous) response (f_T(x) ≠ f̄(x)) (Krogh & Vedelsby, 1995). Thus our objective attempts to minimize the average error of the models, while the residual-fitting objective minimizes the error of the average model. Figure 1: Generalization results for our gradient-based boosting algorithm, compared with the residual-fitting and mixture-of-experts algorithms. Left: test problem F1; right: Boston housing data. Normalized mean-squared error is plotted against the number of stages of boosting (or the number of experts for the mixture-of-experts). 5 Empirical Tests of the Algorithm We report results comparing the performance of our new algorithm with two other algorithms. The first is a residual-fitting algorithm based on the J_R objective (Eq. 2), but with unnormalized coefficients. 
The second algorithm is a version of the mixture-of-experts algorithm (Jacobs et al., 1991). Here the hypotheses (or experts) are trained simultaneously. In the standard mixture-of-experts the combination coefficients depend on the input; to make this model comparable to the others, we allowed each expert one input-independent, adaptable coefficient. This algorithm provides a good alternative to the greedy stage-wise methods, in that the experts are trained simultaneously to collectively fit the data. We evaluate these algorithms on two problems. The first is the nonlinear prediction problem F1 (Friedman, 1991), which has 10 independent input variables uniform in [0, 1]:

y = 10 sin(π x_1 x_2) + 20 (x_3 − 0.5)^2 + 10 x_4 + 5 x_5 + n

where n is a random variable drawn from a mean-zero, unit-variance normal distribution. In this problem, only five input variables (x_1 to x_5) have predictive value. We rescaled the target values y to be in [0, 3]. We used 400 training examples, and 100 validation and test examples. The second test problem is the standard Boston Housing problem. Here there are 506 examples and twelve continuous input variables. We scaled the input variables to be in [0, 1], and the outputs to be in [0, 5]. We used 400 of the examples for training, 50 for validation, and the remainder to test generalization. We used neural networks as the hypotheses and back-propagation as the learning procedure to train them. Each network had a layer of tanh units between the input units and a single linear output unit. For each algorithm, we used early stopping with a validation set in order to reduce over-fitting in the hypotheses. One finding was that the other algorithms out-performed ours when the hypotheses were simple: when the weak learners had only one or two hidden nodes, the residual-fitting algorithm had lower test error. With more hidden nodes the relative performance of our algorithm improved. 
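The F1 benchmark described above can be generated as follows. This is a sketch: the rescaling of y to [0, 3] and the train/validation/test split are done separately, and the seed is our own choice.

```python
# Friedman's F1 problem: ten uniform inputs, of which only x1..x5 are
# predictive, plus mean-zero unit-variance Gaussian noise.
import numpy as np

def friedman1(n, rng):
    X = rng.uniform(0.0, 1.0, size=(n, 10))
    y = (10 * np.sin(np.pi * X[:, 0] * X[:, 1])
         + 20 * (X[:, 2] - 0.5) ** 2
         + 10 * X[:, 3] + 5 * X[:, 4]
         + rng.normal(size=n))
    return X, y

rng = np.random.default_rng(0)
X, y = friedman1(600, rng)   # e.g. 400 train / 100 validation / 100 test
print(X.shape)
```

Columns 5 through 9 are pure noise inputs, so a regression method's treatment of irrelevant variables is exercised alongside the nonlinear terms.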
Figure 1 shows average results for three-hidden-unit networks over 20 runs of each algorithm on the two problems, with examples randomly assigned to the three sets on each run. The results were consistent for different values of τ in our algorithm; here τ = 0.1. Overall, the residual-fitting algorithm exhibited more over-fitting than our method. Over-fitting in these approaches may be tempered: a regularization technique known as shrinkage, which scales the combination coefficients by a fractional parameter, has been found to improve generalization in gradient boosting applications to classification (Friedman, 1999). Finally, the mixture-of-experts algorithm generally out-performed the sequential training algorithm. A drawback of this method is the need to specify the number of hypotheses in advance; however, given that number, simultaneous training is likely less prone to local minima than the sequential approaches. 6 Conclusion We have proposed a new boosting algorithm for regression problems. Like several recent boosting methods for regression, the parameters and updates can be derived from a single common objective. Unlike these methods, our algorithm forms new hypotheses by simply modifying the distribution over training examples. Preliminary empirical comparisons have suggested that our method will not perform as well as a residual-fitting approach for simple hypotheses, but it works well for more complex ones, and it seems less prone to over-fitting. The lack of over-fitting in our method can be traced to the inherent bias-variance tradeoff, as new hypotheses are forced to resemble existing ones if they cannot improve the combined estimate. We are exploring an extension that brings our method closer to the full mixture-of-experts. The combination coefficients can be input-dependent: a learner returns not only f_t(x^i) but also k_t(x^i) ∈ [0, 1], a measure of confidence in its prediction. 
This elaboration makes the weak learning task harder, but may extend the applicability of the algorithm: letting each learner focus on a subset of its weighted training distribution permits a divide-and-conquer approach to function approximation. References [1] Bordley, R. (1982). A multiplicative formula for aggregating probability assessments. Management Science, 28, 1137-1148. [2] Breiman, L. (1997). Prediction games and arcing classifiers. TR 504, Statistics Dept., UC Berkeley. [3] Duffy, N. & Helmbold, D. (2000). Leveraging for regression. In Proceedings of COLT, 13. [4] Freund, Y. & Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55, 119-139. [5] Friedman, J. H. (1999). Greedy function approximation: A gradient boosting machine. TR, Dept. of Statistics, Stanford University. [6] Friedman, J. H., Hastie, T., & Tibshirani, R. (1999). Additive logistic regression: A statistical view of boosting. Annals of Statistics, to appear. [7] Geman, S., Bienenstock, E., & Doursat, R. (1992). Neural networks and the bias/variance dilemma. Neural Computation, 4, 1-58. [8] Hastie, T. & Tibshirani, R. (1990). Generalized Additive Models. Chapman and Hall. [9] Heskes, T. (1998). Bias-variance decompositions for likelihood-based estimators. Neural Computation, 10, 1425-1433. [10] Hinton, G. E. (2000). Training products of experts by minimizing contrastive divergence. GCNU TR 2000-004, Gatsby Computational Neuroscience Unit, University College London. [11] Jacobs, R. A., Jordan, M. I., Nowlan, S. J., & Hinton, G. E. (1991). Adaptive mixtures of local experts. Neural Computation, 3, 79-87. [12] Karakoulas, G., & Shawe-Taylor, J. (1999). Towards a strategy for boosting regressors. In Advances in Large Margin Classifiers, Smola, Bartlett, Schölkopf & Schuurmans (Eds.). [13] Krogh, A. & Vedelsby, J. (1995). Neural network ensembles, cross-validation, and active learning. In NIPS 7. 
[14] Mason, L., Baxter, J., Bartlett, P., & Frean, M. (1999). Boosting algorithms as gradient descent in function space. In NIPS 11. [15] Rätsch, G., Mika, S., Onoda, T., Lemm, S. & Müller, K.-R. (2000). Barrier boosting. In Proceedings of COLT, 13. [16] Schapire, R. E. (1990). The strength of weak learnability. Machine Learning, 5, 197-227. [17] Schapire, R. E. & Singer, Y. (1998). Improved boosting algorithms using confidence-rated predictions. In Proceedings of COLT, 11.
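As an aside on the shrinkage remedy mentioned in the results discussion: in gradient-boosting-style residual fitting (Friedman, 1999), each new hypothesis fits the current residuals, and its contribution is damped by a fractional parameter ν before joining the ensemble. The sketch below is a hypothetical illustration of that idea with a 1-D threshold stump as the weak learner; it is not the sequential algorithm evaluated in this paper, and all names are invented.

```python
# Hypothetical sketch of shrinkage in residual-fitting boosting for
# regression. Each round fits a stump to the residuals of the current
# ensemble, then adds it with weight nu (the shrinkage parameter).

def fit_stump(xs, ys):
    """Fit a 1-D threshold stump minimizing squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # degenerate split
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - (lm if x <= t else rm)) ** 2 for x, y in zip(xs, ys))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def boost(xs, ys, rounds=50, nu=0.1):
    """Return an ensemble predictor built by shrunken residual fitting."""
    ensemble = []
    def predict(x):
        return sum(nu * h(x) for h in ensemble)
    for _ in range(rounds):
        residuals = [y - predict(x) for x, y in zip(xs, ys)]
        ensemble.append(fit_stump(xs, residuals))
    return predict
```

With ν = 0.1 the ensemble approaches the training targets only geometrically; ν = 1 recovers plain residual fitting, which over-fits more readily.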
2000
Active Learning for Parameter Estimation in Bayesian Networks Simon Tong Computer Science Department Stanford University simon.tong@cs.stanford.edu Daphne Koller Computer Science Department Stanford University koller@cs.stanford.edu Abstract Bayesian networks are graphical representations of probability distributions. In virtually all of the work on learning these networks, the assumption is that we are presented with a data set consisting of randomly generated instances from the underlying distribution. In many situations, however, we also have the option of active learning, where we have the possibility of guiding the sampling process by querying for certain types of samples. This paper addresses the problem of estimating the parameters of Bayesian networks in an active learning setting. We provide a theoretical framework for this problem, and an algorithm that chooses which active learning queries to generate based on the model learned so far. We present experimental results showing that our active learning algorithm can significantly reduce the need for training data in many situations. 1 Introduction In many machine learning applications, the most time-consuming and costly task is the collection of a sufficiently large data set. Thus, it is important to find ways to minimize the number of instances required. One possible method for reducing the number of instances is to choose better instances from which to learn. Almost universally, the machine learning literature assumes that we are given a set of instances chosen randomly from the underlying distribution. In this paper, we assume that the learner has the ability to guide the instances it gets, selecting instances that are more likely to lead to more accurate models. This approach is called active learning. The possibility of active learning can arise naturally in a variety of domains, in several variants.
In selective active learning, we have the ability to explicitly ask for an example of a certain "type"; i.e., we can ask for a full instance where some of the attributes take on requested values. For example, if our domain involves webpages, the learner might be able to ask a human teacher for examples of homepages of graduate students in a Computer Science department. A variant of selective active learning is pool-based active learning, where the learner has access to a large pool of instances, about which it knows only the value of certain attributes. It can then ask for instances in this pool for which these known attributes take on certain values. For example, one could redesign the U.S. census to have everyone fill out only the short form; the active learner could then select among the respondents for those that should fill out the more detailed long form. Another example is a cancer study in which we have a list of people's ages and whether they smoke, and we can ask a subset of these people to undergo a thorough examination. In such active learning settings, we need a mechanism that tells us which instances to select. This problem has been explored in the context of supervised learning [1, 2, 7, 9]. In this paper, we consider its application in the unsupervised learning task of density estimation. We present a formal framework for active learning in Bayesian networks (BNs). We assume that the graphical structure of the BN is fixed, and focus on the task of parameter estimation. We define a notion of model accuracy, and provide an algorithm that selects queries in a greedy way, designed to improve model accuracy as much as possible. At first sight, the applicability of active learning to density estimation is unclear. Given that we are not simply sampling, it is initially not clear that an active learning algorithm even learns the correct density.
In fact, we can show that our algorithm is consistent, i.e., it converges to the right density in the limit. Furthermore, it is not clear that active learning is necessarily beneficial in this setting. After all, if we are trying to estimate a distribution, then random samples from that distribution would seem the best source. Surprisingly, we provide empirical evidence showing that, in a range of interesting circumstances, our approach learns from significantly fewer instances than random sampling. 2 Learning Bayesian Networks Let X = {X_1, ..., X_n} be a set of random variables, with each variable X_i taking values in some finite domain Dom[X_i]. A Bayesian network over X is a pair (G, θ) that represents a distribution over the joint space of X. G is a directed acyclic graph, whose nodes correspond to the random variables in X and whose structure encodes conditional independence properties about the joint distribution. We use U_i to denote the set of parents of X_i. θ is a set of parameters which quantify the network by specifying the conditional probability distributions (CPDs) P(X_i | U_i). We assume that the CPD of each node consists of a separate multinomial distribution over Dom[X_i] for each instantiation u of the parents U_i. Hence, we have a parameter θ_{x_ij|u} for each x_ij ∈ Dom[X_i]; we use θ_{X_i|u} to represent the vector of parameters associated with the multinomial P(X_i | u). Our focus is on the parameter estimation task: we are given the network structure G, and our goal is to use data to estimate the network parameters θ. We will use Bayesian parameter estimation, keeping a density over possible parameter values. As usual [5], we make the assumption of parameter independence, which allows us to represent the joint distribution p(θ) as a set of independent distributions, one for each multinomial θ_{X_i|u}. For multinomials, the conjugate prior is a Dirichlet distribution [4], which is parameterized by hyperparameters α_j ∈ ℝ₊, with α_* = Σ_j α_j.
Intuitively, α_j represents the number of "imaginary samples" observed prior to observing any data. In particular, if X is distributed multinomial with parameters θ = (θ_1, ..., θ_r), and p(θ) is Dirichlet, then the probability that our next observation is x_j is α_j/α_*. If we obtain a new instance X = x_j sampled from this distribution, then our posterior distribution p(θ) is also Dirichlet, with hyperparameters (α_1, ..., α_j + 1, ..., α_r). In a BN with the parameter independence assumption, we have a Dirichlet distribution for every multinomial distribution θ_{X_i|u}. Given a distribution p(θ), we use α_{x_ij|u} to denote the hyperparameter corresponding to the parameter θ_{x_ij|u}. 3 Active Learning Assume we start out with a network structure G and a prior distribution p(θ) over the parameters of G. In a standard machine learning framework, data instances are independently, randomly sampled from some underlying distribution. In an active learning setting, we have the ability to request certain types of instances. We formalize this idea by assuming that some subset C of the variables are controllable. The learner can select a subset of variables Q ⊆ C and a particular instantiation q to Q. The request Q = q is called a query. The result of such a query is a randomly sampled instance x conditioned on Q = q. A (myopic) active learner is a querying function that takes G and p(θ), and selects a query Q = q. It takes the resulting instance x, and uses it to update its distribution p(θ) to obtain a posterior p'(θ). It then repeats the process, using p' for p. We note that p(θ) summarizes all the relevant aspects of the data seen so far, so that we do not need to maintain the history of previous instances. To fully specify the algorithm, we need to address two issues: we need to describe how our parameter distribution is updated given that x is not a random sample, and we need to construct a mechanism for selecting the next query based on p.
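The Dirichlet bookkeeping described above is mechanical: the predictive probability of outcome j is α_j/α_*, and observing outcome j increments α_j by one. A minimal sketch (hypothetical helper names, not from the paper):

```python
# Dirichlet-multinomial conjugacy sketch. `alphas` is the list of
# hyperparameters (alpha_1, ..., alpha_r) for one multinomial CPD entry.

def predictive(alphas):
    """P(next observation = x_j) = alpha_j / alpha_* for each j."""
    total = sum(alphas)
    return [a / total for a in alphas]

def update(alphas, j):
    """Posterior hyperparameters after observing outcome j once."""
    return [a + 1 if i == j else a for i, a in enumerate(alphas)]
```

In a BN under parameter independence, one such vector is kept per node and per parent instantiation, and each fully-observed instance increments exactly one count in each family.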
To answer the first issue, assume for simplicity that our query is Q = q for a single node Q. First, it is clear that we cannot use the resulting instance x to update the parameters of the node Q itself. However, we also have a more subtle problem. Consider a parent U of Q. Although x does give us information about the distribution of U, it is not information that we can conveniently use. Intuitively, P(U | Q = q) is sampled from a distribution specified by a complex formula involving multiple parameters. We avoid this problem simply by ignoring the information provided by x on nodes that are "upstream" of Q. More generally, we define a variable Y to be updateable in the context of a selective query Q if it is not in Q or an ancestor of a node in Q. Our update rule is now very simple. Given a prior distribution p(θ) and an instance x from a query Q = q, we do standard Bayesian updating, as in the case of randomly sampled instances, but we update only the Dirichlet distributions of updateable nodes. We use p(θ ← Q = q, x) to denote the distribution p'(θ) obtained from this algorithm; this can be read as "the density of θ after asking query q and obtaining the response x". Our second task is to construct an algorithm for deciding on our next query given our current distribution p. The key step in our approach is the definition of a measure for the quality of our learned model. This allows us to evaluate the extent to which various instances would improve the quality of our model, thereby providing us with an approach for selecting the next query to perform. Our formulation is based on the framework of Bayesian point estimation. In the Bayesian learning framework, we maintain a distribution p(θ) over all of the model parameters. However, when we are asked to reason using the model, we typically "collapse" this distribution over parameters, generate a single representative model θ̃, and answer questions relative to that.
If we choose to use θ̃, whereas the "true" model is θ*, we incur some loss Loss(θ̃ ‖ θ*). Our goal is to minimize this loss. Of course, we do not have access to θ*. However, our posterior distribution p(θ) represents our "optimal" beliefs about the different possible values of θ*, given our prior knowledge and the evidence. Therefore, we can define the risk of a particular θ̃ with respect to p as: E_{θ∼p(θ)}[Loss(θ ‖ θ̃)] = ∫_Θ Loss(θ ‖ θ̃) p(θ) dθ. (1) We then define the Bayesian point estimate to be the value of θ̃ that minimizes the risk. We shall only be considering the Bayesian point estimate, thus we define the risk of a density p, Risk(p(θ)), to be the risk of the optimal θ̃ with respect to p. The risk of our density p(θ) is our measure for the quality of our current state of knowledge, as represented by p(θ). In a greedy scheme, our goal is to obtain an instance x such that the risk of the p' obtained by updating p with x is lowest. Of course, we do not know exactly which x we are going to get. We know only that it will be sampled from a distribution induced by our query. Our expected posterior risk is therefore: ExPRisk(p(θ) | Q = q) = E_{θ∼p(θ)} E_{x∼P_θ(X|Q=q)} [Risk(p(θ ← Q = q, x))]. (2) This definition leads immediately to the following simple algorithm: For each candidate query Q = q, we evaluate the expected posterior risk, and then select the query for which it is lowest. 4 Active Learning Algorithm To obtain a concrete algorithm from the active learning framework shown in the previous section, we must pick a loss function. There are many possible choices, but perhaps the best justified is the relative entropy or Kullback-Leibler divergence (KL-divergence) [3]: KL(θ ‖ θ̃) = Σ_x P_θ(x) ln (P_θ(x)/P_θ̃(x)). The KL-divergence has several independent justifications, and a variety of properties that make it particularly suitable as a measure of distance between distributions. We therefore proceed in this paper using KL-divergence as our loss function.
(An analogous analysis can be carried through for another very natural loss function, the negative log-likelihood of future data; in the case of multinomial CPDs with Dirichlet densities over the parameters this results in an identical final algorithm.) We now want to find an efficient approach to computing the risk. Two properties of KL-divergence turn out to be crucial. The first is that the value θ̃ that minimizes the risk relative to p is the mean value of the parameters, E_{θ∼p(θ)}[θ]. For a Bayesian network with independent Dirichlet distributions over the parameters, this expression reduces to θ̃_{x_ij|u} = α_{x_ij|u}/α_{x_i·|u}, the standard (Bayesian) approach used for collapsing a distribution over BN models into a single model. The second observation is that, for BNs, KL-divergence decomposes with the graphical structure of the network: KL(θ ‖ θ') = Σ_i KL(P_θ(X_i | U_i) ‖ P_θ'(X_i | U_i)), (3) where KL(P(X_i | U_i) ‖ P'(X_i | U_i)) is the conditional KL-divergence, given by Σ_u P(u) KL(P(X_i | u) ‖ P'(X_i | u)). With these two facts, we can prove the following: Theorem 4.1 Let Γ(α) be the Gamma function, Ψ(α) be the digamma function Γ'(α)/Γ(α), and H be the entropy function. Define δ(α_1, ..., α_r) = Σ_{j=1}^r (α_j/α_*)(Ψ(α_j + 1) − Ψ(α_* + 1)) + H(α_1/α_*, ..., α_r/α_*). Then the risk decomposes as: Risk(p(θ)) = Σ_i Σ_{u∈Dom[U_i]} P̂(u) δ(α_{x_i1|u}, ..., α_{x_ir_i|u}). (4) Eq. (4) gives us a concrete expression for evaluating the risk of p(θ). However, to evaluate a potential query, we also need its expected posterior risk. Recall that this is the expectation, over all possible answers to the query, of the risk of the posterior distribution p'. In other words, it is an average over an exponentially large set of possibilities. To understand how we can evaluate this expression efficiently, we first consider a much simpler case. Consider a BN where we have only one child node X and its parents U, i.e., the only edges are from the nodes U to X.
We also restrict attention to queries where we control all and only the parents U. In this case, a query q is an instantiation to U, and the possible outcomes of the query are the possible values of the variable X. The expected posterior risk contains a term for each variable X_i and each instantiation of its parents. In particular, it contains a term for each of the parent variables U. However, as these variables are not updateable, their hyperparameters remain the same following any query q. Hence, their contribution to the risk is the same in every p(θ ← U = q, x) and in our prior p(θ). Thus, we can ignore the terms corresponding to the parents, and focus on the terms associated with the conditional distribution P(X | U). Hence, we have: Risk_X(p(θ)) = Σ_u P̂(u) δ(α_{x_1|u}, ..., α_{x_r|u}) (5) ExPRisk_X(p(θ) | U = q) = Σ_j P̂(x_j | q) δ(α'_{x_1|q}, ..., α'_{x_r|q}), (6) where α'_{x_j|q} is the hyperparameter in p(θ ← U = q, x_j). Rather than evaluating the expected posterior risk directly, we will evaluate the reduction in risk obtained by asking a query U = q: Δ(X | q) = Risk(p(θ)) − ExPRisk(p(θ) | q) = Risk_X(p(θ)) − ExPRisk_X(p(θ) | q). Our first key observation relies on the fact that the variables U are not updateable for this query, so that their hyperparameters do not change. Hence, P̂(u) and P̂'(u) are the same. The second observation is that the hyperparameters corresponding to an instantiation u are the same in p and p' except for u = q. Hence, terms cancel and the expression simplifies to: P̂(q) (δ(α_{x_1|q}, ..., α_{x_r|q}) − Σ_j P̂(x_j | q) δ(α'_{x_1|q}, ..., α'_{x_r|q})). By taking advantage of certain functional properties of Ψ, we finally obtain: Δ(X | q) = P̂(q) (H(α_{x_1|q}/α_{x_·|q}, ..., α_{x_r|q}/α_{x_·|q}) − Σ_j P̂(x_j | q) H(α'_{x_1|q}/α'_{x_·|q}, ..., α'_{x_r|q}/α'_{x_·|q})). (7) If we now select our query q so as to maximize the difference between our current risk and the expected posterior risk, we get a very natural behavior: we will select the query q that leads to the greatest reduction in the entropy of X given its parents.
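The final form of Δ(X | q) in Eq. (7) needs nothing beyond the entropy function, since the normalized hyperparameters are exactly the mean parameters. A minimal sketch (hypothetical helper names; `alphas` holds the Dirichlet counts α_{x_1|q}, ..., α_{x_r|q} for parent instantiation q, and `p_q` is the weight P̂(q)):

```python
import math

# Risk reduction for one family per Eq. (7): compare the entropy of the
# current mean parameters with the expected entropy after one more
# observation, where outcome j occurs with predictive probability
# alpha_j / alpha_* and increments alpha_j.

def entropy(ps):
    return -sum(p * math.log(p) for p in ps if p > 0)

def delta(alphas, p_q):
    """Expected entropy reduction at parent instantiation q, weighted by p_q."""
    total = sum(alphas)
    mean = [a / total for a in alphas]
    expected_post = 0.0
    for j in range(len(alphas)):
        post = [(a + (1 if i == j else 0)) / (total + 1)
                for i, a in enumerate(alphas)]
        expected_post += mean[j] * entropy(post)
    return p_q * (entropy(mean) - expected_post)
```

Note that `delta([1, 1], 1.0)` exceeds `delta([100, 100], 1.0)`: sharply peaked counts leave little entropy to remove, which is the rare-case effect discussed next.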
It is also here that we can gain an insight as to where active learning has an edge over random sampling. Consider two queries q_1 and q_2, where q_1 is 100 times less likely than q_2; q_1 will lead us to update a parameter whose current density is Dirichlet(1, 1), whereas q_2 will lead us to update a parameter whose current density is Dirichlet(100, 100). However, according to Δ, updating the former is worth more than the latter. In other words, if we are confident about commonly occurring situations, it is worth more to ask about the rare cases. We now generalize this derivation to the case of an arbitrary BN and an arbitrary query. Here, our average over possible query answers encompasses exponentially many terms. Fortunately, we can utilize the structure of the BN to avoid an exhaustive enumeration. Theorem 4.2 For an arbitrary BN and an arbitrary query Q = q, the expected KL posterior risk decomposes as: ExPRisk(p(θ) | Q = q) = Σ_i Σ_{u∈Dom[U_i]} P̂(u | Q = q) ExPRisk_{X_i}(p(θ) | U_i = u). In other words, the expected posterior risk is a weighted sum of expected posterior risks for conditional distributions of individual nodes X_i, where for each node we consider "queries" that are complete instantiations to the parents U_i of X_i. We now have similar decompositions for the risk and the expected posterior risk. The obvious next step is to consider the difference between them, and then simplify it as we did for the case of a single variable. Unfortunately, in the case of general BNs, we can no longer exploit one of our main simplifying assumptions. Recall that, in the expression for the risk (Eq. (5)), the term involving X_i and u is weighted by P̂(u). In the expected posterior risk, the weight is P̂'(u). In the case of a single node and a full parent query, the hyperparameters of the parents could not change, so these two weights were necessarily the same.
In the more general setting, an instantiation x can change hyperparameters all through the network, leading to different weights. However, we believe that a single data instance will not usually lead to a dramatic change in the distributions. Hence, these weights are likely to be quite close. To simplify the formula (and the associated computation), we therefore choose to approximate the posterior probability P̂'(u) using the prior probability P̂(u). Under this assumption, we can use the same simplification as we did in the single-node case. Assuming that this approximation is a good one, we have that: Δ(X | q) = Risk(p(θ)) − ExPRisk(p(θ) | q) ≈ Σ_i Σ_{u∈Dom[U_i]} P̂(u | q) Δ(X_i | u), (8) where Δ(X_i | u) is as defined in Eq. (7). Notice that we actually only need to sum over the updateable X_i's, since Δ(X_i | u) will be zero for all non-updateable X_i's. The above analysis provides us with an efficient implementation of our general active learning scheme. We simply choose a set of variables in the Bayesian network that we wish to control, and for each instantiation of the controllable variables we compute the expected change in risk given by Eq. (8). We then ask the query with the greatest expected change and update the parameters of the updateable nodes. We now consider the computational complexity of the algorithm. It turns out that, for each potential query, all of the desired quantities can be obtained via two inference passes using a standard join tree algorithm [6]. Thus, the run time complexity of the algorithm is O(|Q| · cost of BN join tree inference), where Q is the set of candidate queries. Our algorithm (approximately) finds the query that reduces the expected risk the most. We can show that our specific querying scheme (including the approximation) is consistent. As we mentioned before, this statement is non-trivial and depends heavily on the specific querying algorithm.

[Figure 1: (a) Alarm network with three controllable nodes. (b) Asia network with two controllable nodes. (c) Cancer network with one controllable node. The x-axes show the number of queries; the axes are zoomed for resolution.]

Theorem 4.3 Let U be the set of nodes which are updateable for at least one candidate query at each querying step. Assuming that the underlying true distribution is not deterministic, our querying algorithm produces consistent estimates for the CPD parameters of every member of U. 5 Experimental Results We performed experiments on three commonly used networks: Alarm, Asia and Cancer. Alarm has 37 nodes and 518 independent parameters, Asia has eight nodes and 18 independent parameters, and Cancer has five nodes and 11 independent parameters. We first needed to set the priors for each network. We use the standard approach [5] of eliciting a network and an equivalent sample size. In our experiments, we assumed that we had fairly good background knowledge of the domain. To simulate this, we obtained our prior by sampling a few hundred instances from the true network and used the counts (together with smoothing from a uniform prior) as our prior. This is akin to asking for a prior network from a domain expert, or using an existing set of complete data to find initial settings of the parameters. We then compared refining the parameters either by using active learning or by random sampling. We permitted the active learner to abstain from choosing a value for a controlled node if it did not wish to; that node is then sampled as usual. Figure 1 presents the results for the three networks. The graphs compare the KL-divergence between the learned networks and the true network that is generating the data. We see that active learning provides a substantial improvement in all three networks. The improvement in the Alarm network is particularly striking given that we had control of just three of the 37 nodes.
The extent of the improvement depends on the extent to which queries allow us to reach rare events. For example, Smoking is one of the controllable variables in the Asia network. In the original network, P(Smoking) = 0.5. Although there was a significant gain by using active learning in this network, we found that there was a greater increase in performance if we altered the generating network to have P(Smoking) = 0.9; this is the graph that is shown. We also experimented with specifying uniform priors with a small equivalent sample size. Here, we obtained significant benefit in the Asia network, and some marginal improvement in the other two. One possible reason is that the improvement is "washed out" by randomness, as the active learner and standard learner are learning from different instances. Another explanation is that the approximation in Eq. (8) may not hold as well when the prior p(θ) is uninformed and thereby easily perturbed even by a single instance. This indicates that our algorithm may perform best when refining an existing domain model. Overall, we found that in almost all situations active learning performed as well as or better than random sampling. The situations where active learning produced most benefit were, unsurprisingly, those in which the prior was confident and correct about the commonly occurring cases and uncertain and incorrect about the rare ones. Clearly, this is precisely the scenario we are most likely to encounter in practice when the prior is elicited from an expert. By experimenting with forcing different priors we found that active learning was worse in one type of situation: where the prior was confident yet incorrect about the commonly occurring cases and uncertain but actually correct about the rare ones. This type of scenario is unlikely to occur in practice. Another factor affecting the performance was the degree to which the controllable nodes could influence the updateable nodes.
6 Discussion and Conclusions We have presented a formal framework and resulting querying algorithm for parameter estimation in Bayesian networks. To our knowledge, this is one of the first applications of active learning in an unsupervised context. Our algorithm uses parameter distributions to guide the learner to ask queries that will improve the quality of its estimate the most. BN active learning can also be performed in a causal setting. A query now acts as an experiment: it intervenes in a model and forces variables to take particular values. Using Pearl's intervention theory [8], we can easily extend our analysis to deal with this case. The only difference is that the notion of an updateable node is even simpler: any node that is not part of a query is updateable. Regrettably, space prohibits a more complete exposition. We have demonstrated that active learning can have significant advantages for the task of parameter estimation in BNs, particularly in the case where our parameter prior is of the type that a human expert is likely to provide. Intuitively, the benefit comes from estimating the parameters associated with rare events. Although it is less important to estimate the probabilities of rare events accurately, random sampling from the distribution still does not yield enough instances of them. We note that this advantage arises even when we have used a loss function that considers only the accuracy of the distribution. In many practical settings such as medical or fault diagnosis, the rare cases are even more important, as they are often the ones that it is critical for the system to deal with correctly. A further direction that we are pursuing is active learning for the causal structure of a domain. In other words, we are presented with a domain whose causal structure we wish to understand, and we want to know the best sequence of experiments to perform.
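The updateable-node test used throughout (for a selective query Q, a variable is updateable iff it is neither in Q nor an ancestor of a node in Q) reduces to a small graph computation. A sketch over a hypothetical child-to-parents map of the DAG; in the interventional variant just mentioned, the `ancestors` exclusion would simply be dropped:

```python
# Determine which variables' Dirichlet posteriors may be updated after
# observing the answer to a selective query. `parents` maps each node to
# its parent list; nodes absent from the map have no parents.

def ancestors(parents, node):
    """All ancestors of `node` in the DAG."""
    seen = set()
    stack = list(parents.get(node, ()))
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(parents.get(p, ()))
    return seen

def updateable(parents, variables, query):
    """Variables not in `query` and not ancestors of any query node."""
    blocked = set(query)
    for q in query:
        blocked |= ancestors(parents, q)
    return {v for v in variables if v not in blocked}
```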
Acknowledgements The experiments were performed using the PHROG system, developed primarily by Lise Getoor, Uri Lerner, and Ben Taskar. Thanks to Carlos Guestrin and Andrew Ng for helpful discussions. The work was supported by DARPA's Information Assurance program under subcontract to SRI International, and by ARO grant DAAH04-96-1-0341 under the MURI program "Integrated Approach to Intelligent Systems". References [1] A.C. Atkinson and A.N. Donev. Optimal Experimental Designs. Oxford University Press, 1992. [2] D. Cohn, Z. Ghahramani, and M. Jordan. Active learning with statistical models. Journal of Artificial Intelligence Research, 4, 1996. [3] T.M. Cover and J.A. Thomas. Elements of Information Theory. Wiley, 1991. [4] M. H. DeGroot. Optimal Statistical Decisions. McGraw-Hill, New York, 1970. [5] D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20:197-243, 1995. [6] S. L. Lauritzen and D. J. Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. J. Royal Statistical Society, B 50(2), 1988. [7] D. MacKay. Information-based objective functions for active data selection. Neural Computation, 4:590-604, 1992. [8] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000. [9] H.S. Seung, M. Opper, and H. Sompolinsky. Query by committee. In Proc. COLT, pages 287-294, 1992.
2000
The Early Word Catches the Weights Mark A. Smith Garrison W. Cottrell Karen L. Anderson Department of Computer Science University of California at San Diego La Jolla, CA 92093 {masmith,gary,kanders}@cs.ucsd.edu Abstract The strong correlation between the frequency of words and their naming latency has been well documented. However, as early as 1973, the Age of Acquisition (AoA) of a word was alleged to be the actual variable of interest, but these studies seem to have been ignored in most of the literature. Recently, there has been a resurgence of interest in AoA. While some studies have shown that frequency has no effect when AoA is controlled for, more recent studies have found independent contributions of frequency and AoA. Connectionist models have repeatedly shown strong effects of frequency, but little attention has been paid to whether they can also show AoA effects. Indeed, several researchers have explicitly claimed that they cannot show AoA effects. In this work, we explore these claims using a simple feed-forward neural network. We find a significant contribution of AoA to naming latency, as well as conditions under which frequency provides an independent contribution. 1 Background Naming latency is the time between the presentation of a picture or written word and the beginning of the correct utterance of that word. It is undisputed that there are significant differences in the naming latency of many words, even when controlling for word length, syllabic complexity, and other structural variants. The cause of differences in naming latency has been the subject of numerous studies. Earlier studies found that the frequency with which a word appears in spoken English is the best determinant of its naming latency (Oldfield & Wingfield, 1965). More recent psychological studies, however, show that the age at which a word is learned, or its Age of Acquisition (AoA), may be a better predictor of naming latency.
Further, in many multiple regression analyses, frequency is not found to be significant when AoA is controlled for (Brown & Watson, 1987; Carroll & White, 1973; Morrison et al., 1992; Morrison & Ellis, 1995). These studies show that frequency and AoA are highly correlated (typically r = -.6), explaining the confound of older studies on frequency. However, still more recent studies question this finding and find that both AoA and frequency are significant and contribute independently to naming latency (Ellis & Morrison, 1998; Gerhand & Barry, 1998, 1999). Much like their psychological counterparts, connectionist networks also show very strong frequency effects. However, the ability of a connectionist network to show AoA effects has been doubted (Gerhand & Barry, 1998; Morrison & Ellis, 1995). Most of these claims are based on the well known fact that connectionist networks exhibit "destructive interference," in which later presented stimuli, in order to be learned, force early learned inputs to become less well represented, effectively increasing their associated errors. However, these effects only occur when training ceases on the early patterns. Continued training on all the patterns mitigates the effects of interference from later patterns. Recently, Ellis & Lambon-Ralph (in press) have shown that when pattern presentation is staged, with one set of patterns initially trained, and a second set added into the training set later, strong AoA effects are found. They show that this result is due to a loss of plasticity in the network units, which tend to get out of the linear range with more training. While this result is not surprising, it is a good model of the fact that some words may not come into existence until late in life, such as "email" for baby boomers.
However, they explicitly claim that it is important to stage the learning in this way, and offer no explanation of what happens during early word acquisition, when the surrounding vocabulary is relatively constant, or why and when frequency and AoA show independent effects. In this paper, we present an abstract feed-forward computational model of word acquisition that does not stage inputs. We use this model to examine the effects of frequency and AoA on sum squared error, the usual variable used to model reaction time. We find a consistent contribution of AoA to naming latency, as well as the conditions under which there is an independent contribution from frequency in some tasks. 2 Experiment 1: Do networks show AoA effects? Our first goal was to show that AoA effects could be observed in a connectionist network using the simplest possible model. First, we need to define AoA in a network. We did this in such a way that staging the inputs was not necessary: we defined a threshold for the error, after which we would say a pattern has been "acquired." The AoA is defined to be the epoch during which this threshold is crossed. Since error for a particular pattern may occasionally go up again during online learning, we also measured the last epoch that the pattern went below the threshold for the final time. We analyzed our networks using both definitions of acquisition (which we call first acquisition and final acquisition), and have found that the results vary little between these definitions. In what follows, we use first acquisition for simplicity. 2.1 The Model The simplest possible model is an autoencoder network. Using a network architecture of 20-15-20, we trained the network to autoencode 200 patterns of random bits (each bit had a 50% probability of being on or off). We initialized weights randomly with a flat distribution of values between -0.1 and 0.1, used a learning rate of 0.001 and momentum of 0.9.
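The two acquisition definitions above can be tracked from each pattern's per-epoch error trace. A minimal sketch (hypothetical helper, independent of the network itself): first acquisition is the first epoch the error drops below the threshold, and final acquisition is the last epoch at which it crosses below.

```python
# Track first and final acquisition for one pattern. `errors` is that
# pattern's error at each epoch; `threshold` is the acquisition cutoff.
# Returns (None, None) if the pattern is never acquired.

def acquisition_epochs(errors, threshold):
    first = final = None
    below = False
    for epoch, err in enumerate(errors):
        if err < threshold and not below:
            if first is None:
                first = epoch   # first downward crossing
            final = epoch       # most recent downward crossing
        below = err < threshold
    return first, final
```

For a monotonically decreasing error trace the two definitions coincide, which is consistent with the paper's observation that results vary little between them.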
For this experiment, we chose the AoA threshold to be 2, indicating an average squared error of .1 per input bit, yielding outputs much closer to the correct output than to any other. We calculated Euclidean distances between all outputs and patterns to verify that each input was mapped most closely to its correct output. Training on the entire corpus continued until 98% of all patterns fell below this threshold. 2.2 Results After the network had learned the input corpus, we investigated the relationship between the epoch at which an input vector had been learned and the final sum squared error (equivalent, for us, to "adult" naming latency) for that input vector. These results are presented in Figure 1.

Figure 1: Exp. 1. Final SSE vs. AoA.
Figure 2: Exp. 1. SSE over epochs by AoA percentile.
Figure 3: Exp. 2. Frequency distribution.
Figure 4: Exp. 2. Final SSE vs. AoA.

The relationship between the age of acquisition of the input vector and its final sum squared error is clear: the earlier an input is learned, the lower its final error will be. A more formal analysis of this relationship yields a significant (p << .005) correlation coefficient of r = 0.749 averaged over 10 runs of the network. In order to understand this relationship better, we divided the learned words into five percentile groups depending upon AoA. 
Figure 2 shows the average SSE for each group plotted over epoch number. The line with the least average SSE corresponds to the earliest acquired quintile, while the line with the highest average SSE corresponds to the last acquired quintile. From this graph we can see that the average SSE for earlier learned patterns stays below the errors for later learned patterns. This is true from the outset of learning, as well as when the error starts to decrease less rapidly as it asymptotically approaches some lowest error limit. We sloganize this result as "the patterns that get to the weights first, win." 3 Experiment 2: Do AoA effects survive a frequency manipulation? Having shown that AoA effects are present in connectionist networks, we wanted to investigate their interaction with frequency. We model the frequency distribution of inputs after the known English spoken word frequency distribution, in which very few words appear very often while a very large portion of words appear very seldom (Zipf's law). The frequency distribution we used (presentation probability = 0.05 + 0.95 * ((1 - (1.0/num_inputs) * input_number) + 0.05)^10) is presented in Figure 3 (a true version of Zipf's law still shows the result). Otherwise, all parameters are the same as in Exp. 1. 3.1 Results Results are plotted in Figure 4. Here we again find a very strong and significant (p << 0.005) correlation between the age at which an input is learned and its naming latency. The correlation coefficient averaged over 10 runs is 0.668. This fits very well with known data.

Figure 5: Exp. 2. Frequency vs. SSE.
Figure 6: Exp. 2. AoA vs. Frequency.

Figure 5 shows how the frequency of presentation of a given stimulus correlates with naming latency. 
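The presentation-probability formula above can be sketched as follows. Clipping values above 1 to 1 (so the most frequent patterns are simply shown every epoch) is our assumption; the text does not say how probabilities exceeding 1 were handled.

```python
import numpy as np

def presentation_prob(i, n):
    """Zipf-like presentation probability for pattern i of n, following the
    formula quoted in the text; values above 1 are clipped to 1 (our
    assumption)."""
    p = 0.05 + 0.95 * ((1.0 - i / n) + 0.05) ** 10
    return min(p, 1.0)

n = 200
probs = np.array([presentation_prob(i, n) for i in range(n)])
# The first few patterns are presented every epoch; the probability then
# falls off steeply toward the 0.05 floor, as in Figure 3.
```

On each training epoch, each pattern would then be included with its own probability, giving a few very frequent and many rare patterns.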
We find that the best fitting correlation is an exponential one, in which naming latency correlates most strongly with the log of the frequency. The correlation coefficient averaged over 10 runs is significant (p << 0.005) at -0.730. This is a slightly stronger correlation than is found in the literature. Finally, Figure 6 shows how frequency and AoA are related. Again, we find a significant (p < 0.005) correlation coefficient of -0.283 averaged over 10 runs. However, this is a much weaker correlation than is found in the literature. Performing a multiple regression with SSE as the dependent variable and AoA and log frequency as the explaining variables, we find that both AoA and log frequency contribute significantly (p << 0.005 for both variables) to the regression equation. Whereas AoA correlates with SSE at 0.668 and log frequency correlates with SSE at -0.730, the multiple correlation coefficient averaged over 10 runs is 0.794. AoA and log frequency each make independent contributions to naming latency. We were encouraged that we found effects of both frequency and AoA on SSE in our model, but were surprised by the small size of the correlation between the two. The naming literature shows a strong correlation between AoA and frequency. However, pilot work with a smaller network showed no frequency effect, which was due to the autoencoding task in a network where the patterns filled 20% of the input space (200 random patterns in a 10-8-10 network, with 1024 patterns possible). This suggests that autoencoding is not an appropriate task to model naming, and would give rise to the low correlation between AoA and frequency. Indeed, English spellings and their corresponding sounds are certainly correlated, but not completely consistent, with many exceptional mappings. Spelling-sound consistency has been shown to have a significant effect on naming latency (Jared, McRae, & Seidenberg, 1990). 
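The multiple regression described above, with SSE as the dependent variable and AoA and log frequency as predictors, can be sketched with ordinary least squares on synthetic data. The coefficients and noise level below are invented for illustration, not taken from the simulations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
aoa = rng.uniform(0, 1, n)
log_freq = rng.uniform(0, 1, n)
# Synthetic SSE with independent contributions from both predictors
# (positive for AoA, negative for log frequency, as in the paper).
sse = 0.5 * aoa - 0.4 * log_freq + 0.05 * rng.standard_normal(n)

# Design matrix with an intercept column; solve ordinary least squares.
X = np.column_stack([np.ones(n), aoa, log_freq])
coef, *_ = np.linalg.lstsq(X, sse, rcond=None)
# coef[1] recovers the AoA slope, coef[2] the log-frequency slope.
```

When both predictors truly contribute, both fitted slopes come out near their generating values, which is the signature of independent contributions that the paper reports.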
Object naming, another task in which AoA effects are found, is a completely arbitrary mapping. Our third experiment looks at the effect that the consistency of our mapping task has on AoA and frequency effects. 4 Experiment 3: Consistency effects Our model in this experiment is identical to the previous model except for two changes. First, to encode mappings with varying degrees of consistency, we needed to increase the number of hidden units to 50, resulting in a 20-50-20 architecture. Second, we found that some patterns would end up with one bit off, leading to a bimodal distribution of SSEs. We thus used cross-entropy error to ensure that all bits would be learned. Eleven levels of consistency were defined, from 100% consistent (autoencoding) to 0% consistent (a mapping from one random 20 bit vector to another random 20 bit vector). Note that in a 0% consistent mapping, since each bit has a 50% chance of being on, about 50% of the bits will be the same by chance. Thus an intermediate level of 50% consistency will have on average 75% of the corresponding bits equal.

Figure 7: Exp. 3. R-values vs. Consistency.
Figure 8: Exp. 3. P-values vs. Consistency.

4.1 Results Using this scheme, ten runs at each consistency level were performed. Correlation coefficients between AoA and naming latency (RMSE), log(frequency) and naming latency, and AoA and log(frequency) were examined. These results can be found in Figure 7. It is clear that AoA exhibits a strong effect on RMSE at all levels of consistency, peaking at a fully consistent mapping. We believe that this may be due to the weaker effect of frequency when all patterns are consistent, and each pattern is supporting the same mapping. 
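One way to generate mappings at a given consistency level, matching the property that 50% consistency leaves about 75% of corresponding bits equal, is to copy each input bit with probability equal to the consistency and randomize it otherwise. This construction is our reading of the text, not necessarily the authors' exact procedure.

```python
import numpy as np

def make_mapping(inputs, consistency, rng):
    """Build targets at a given consistency level: each target bit copies
    its input bit with probability `consistency` and is a fresh random bit
    otherwise.  Expected fraction of equal bits = c + (1 - c) * 0.5, so
    c = 0.5 gives ~75% equal bits, as stated in the text."""
    random_bits = rng.integers(0, 2, size=inputs.shape)
    keep = rng.random(inputs.shape) < consistency
    return np.where(keep, inputs, random_bits)

rng = np.random.default_rng(0)
patterns = rng.integers(0, 2, size=(200, 20))
targets = make_mapping(patterns, 0.5, rng)
frac_equal = (patterns == targets).mean()   # close to 0.75
```

At consistency 1.0 this degenerates to autoencoding, and at 0.0 to an arbitrary random mapping, spanning the eleven levels used in Experiment 3.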
Frequency also shows a strong effect on RMSE at all levels of consistency, with its influence being lowest in the autoencoding task, as expected. Most interesting is the correlation strength between AoA and frequency across consistency levels. While we do not yet have a good explanation for the dip in correlation at the 80-90% level of consistency, it provides a possible explanation of the multiple regression data we describe next. Multiple regressions with error as the dependent variable and log(frequency) and AoA as the explaining variables were performed. In Figure 8, we plot the negative log of the p-value of AoA and log(frequency) in the regression equation over consistency levels. Most notable is the finding that AoA is significant at extreme levels at all levels of consistency. A value of 30 on this plot corresponds to a p-value of 10^-30. The significance of log frequency has a more complex interaction with consistency. Log frequency does not achieve significance in determining SSE until the patterns are almost 40% consistent. For more consistent mappings, however, significance increases dramatically to a p-value of less than 10^-10 and then declines toward autoencoding. The data which may help us to explain what we see in Figure 8 actually lies in Figure 7. There is a relationship between log frequency significance and the correlation strength between AoA and log frequency. As AoA and frequency become less correlated, the significance of frequency increases, and vice versa. Therefore, as frequency and AoA become less correlated, frequency is able to begin making an independent contribution to the SSE of the network. Such interactions may explain the sometimes inconsistent findings in the literature; depending upon the task and the individual items in the stimuli, different levels of consistency of mapping can affect the results. However, each of these points represents an average over a set of networks with one average consistency value. 
It is doubtful that any natural mapping, such as spelling to sound, has such a uniform distribution. We rectify this in the next experiment. 5 Experiment 4: Modelling spelling-sound correspondences Our final experiment is an abstract simulation of learning to read, both in terms of word frequency and spelling-sound consistency. Most English words are considered consistent in their spelling-sound relationship. This depends on whether words in their spelling "neighborhood" agree with them in pronunciation, e.g., "cave," "rave," and "pave." However, a small but important portion of our vocabulary consists of inconsistent words, e.g., "have."

Figure 9: Exp. 4. Consistency vs. frequency.
Figure 10: Exp. 4. Consistency vs. AoA.

The reason that "have" continues to be pronounced inconsistently is that it is a very frequent word. Inconsistent words have the property that they are on average much more frequent than consistent words, although there are far more consistent words by number. To model this we created an input corpus of 170 consistent words and 30 inconsistent words. Inconsistent words were arbitrarily defined as 50% consistent, or an average of 5 bit flips in a 20 bit pattern; consistent words were modeled as 80% consistent, or an average of 2 bit flips per pattern. The 30 inconsistent words were presented with high frequencies corresponding to the odd numbered patterns (1..59) in Figure 3. The even numbered patterns from 2 to 60 were the consistent words. The remaining patterns were also consistent. This allowed us to compare consistent and inconsistent words in the same frequency range, controlling for frequency in a much cleaner manner than is possible in human subject experiments. The network is identical to the one in Experiment 3. 
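The Experiment 4 corpus can be sketched as follows. We use exact rather than average bit-flip counts, and a particular 0-based indexing of the odd-numbered high-frequency patterns; both are simplifying assumptions on our part.

```python
import numpy as np

rng = np.random.default_rng(0)

def flip_bits(p, n_flips, rng):
    """Return a copy of pattern p with `n_flips` distinct bits flipped."""
    idx = rng.choice(p.size, size=n_flips, replace=False)
    q = p.copy()
    q[idx] ^= 1
    return q

# 200 input patterns of 20 random bits each.
inputs = rng.integers(0, 2, size=(200, 20))

# Inconsistent words: ~50% consistent, i.e. 5 of 20 bits flipped;
# consistent words: ~80% consistent, i.e. 2 of 20 bits flipped.
# With 0-based indices, the 30 inconsistent words occupy the slots of the
# odd-numbered (1..59) high-frequency patterns (our indexing assumption).
targets = np.array([
    flip_bits(p, 5 if (i % 2 == 0 and i < 60) else 2, rng)
    for i, p in enumerate(inputs)
])
```

This yields 30 inconsistent and 170 consistent words, with inconsistent and consistent words interleaved over the same high-frequency range, as the text describes.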
5.1 Results We first analyzed the data for the standard consistency by frequency interaction. We labeled the 15 highest frequency consistent and inconsistent patterns as "high frequency" and the next 15 of each as the "low frequency" patterns, in order to get the same number of patterns in each cell, by design. The results are shown in Figure 9, and show the standard interaction. More interestingly, we did a post-hoc median split of the patterns based on their AoA, defining them as "early" or "late" in this way, and then divided them based on consistency. This is shown in Figure 10. An ANOVA using unequal cell size corrections shows a significant (p < .001) interaction between AoA and consistency. 6 Discussion Although the possibility of Age of Acquisition effects in connectionist networks has been doubted, we found a very strong, significant, and reproducible effect of AoA on SSE, the variable most often used to model reaction time, in our networks. Patterns which are learned in earlier epochs consistently show lower final error values than their late acquired counterparts. In this study, we have shown that this effect is present across various learning tasks, network topologies, and frequencies. Informally, we have found AoA effects across more network variants than reported here, including different learning rates, momentum, stopping criteria, and frequency distributions. In fact, across all runs we conducted for this study, we found strong AoA effects, provided the network was able to learn its task. We believe that this is because AoA is an intrinsic property of connectionist networks. We have performed some preliminary analyses concerning which patterns are acquired early. Using the setup of Experiment 1, that is, autoencoded 20-bit patterns, we have found that the patterns that are most correlated with the other patterns in the training set tend to be the earliest acquired, with r^2 = 0.298. 
(We should note that inter-pattern correlations are very small, but positive, because no bits are negative.) Thus patterns that are most consistent with the training set are learned earliest. We have yet to investigate how this generalizes to arbitrary mappings, but, given the results of Experiment 4, it makes sense to predict that the most frequent, most consistently mapped patterns (e.g., in the largest spelling-sound neighborhood) would be the earliest acquired, in the absence of other factors. 7 Future Work This study used a very general network and learning task to demonstrate AoA effects in connectionist networks. There is therefore no reason to suspect that this effect is limited to words, and indeed, AoA effects have been found in face recognition. Meanwhile, we have not investigated the interaction of our simple model of AoA effects with staged presentation. Presumably words acquired late are fewer in number, and Ellis & Lambon-Ralph (in press) have shown that they must be extremely frequent to overcome their lateness. Our results suggest that patterns that are most consistent with earlier acquired mappings would also overcome their lateness. We are particularly interested in applying these ideas to a realistic model of English reading acquisition, where actual consistency effects can be measured in the context of friend/enemy ratios in a neighborhood. Finally, we would like to explore whether the AoA effect is universal in connectionist networks, or if under some circumstances AoA effects are not observed. Acknowledgements We would like to thank Elizabeth Bates for alerting us to the work of Dr. Andrew Ellis, and the latter for providing us with a copy of Ellis & Lambon-Ralph (in press). References [1] Brown, G.D.A. & Watson, F.L. (1987). First in, first out: Word learning age and spoken word frequency as predictors of word familiarity and naming latency. Memory & Cognition, 15, pp. 208-216. [2] Carroll, J.B. & White, M.N. (1973). 
Word frequency and age of acquisition as determiners of picture-naming latency. Quarterly Journal of Experimental Psychology, 25, pp. 85-95. [3] Ellis, A.W. & Morrison, C.M. (1998). Real age of acquisition effects in lexical retrieval. Journal of Experimental Psychology: Learning, Memory, & Cognition, 24, pp. 515-523. [4] Ellis, A.W. & Lambon Ralph, M.A. (in press). Age of acquisition effects in adult lexical processing reflect loss of plasticity in maturing systems: Insights from connectionist networks. JEP: LMC. [5] Gerhand, S. & Barry, C. (1998). Word frequency effects in oral reading are not merely age-of-acquisition effects in disguise. JEP: LMC, 24, pp. 267-283. [6] Gerhand, S. & Barry, C. (1999). Age of acquisition and frequency effects in speeded word naming. Cognition, 73, pp. B27-B36. [7] Jared, D., McRae, K., & Seidenberg, M.S. (1990). The basis of consistency effects in word naming. JML, 29, pp. 687-715. [8] Morrison, C.M., Ellis, A.W. & Quinlan, P.T. (1992). Age of acquisition, not word frequency, affects object naming, not object recognition. Memory & Cognition, 20, pp. 705-714. [9] Morrison, C.M. & Ellis, A.W. (1995). Roles of word frequency and age of acquisition in word naming and lexical decision. JEP: LMC, 21, pp. 116-133. [10] Oldfield, R.C. & Wingfield, A. (1965). Response latencies in naming objects. Quarterly Journal of Experimental Psychology, 17, pp. 273-281.
2000
The Manhattan World Assumption: Regularities in scene statistics which enable Bayesian inference James M. Coughlan Smith-Kettlewell Eye Research Inst. 2318 Fillmore St. San Francisco, CA 94115 coughlan@ski.org A.L. Yuille Smith-Kettlewell Eye Research Inst. 2318 Fillmore St. San Francisco, CA 94115 yuille@ski.org Abstract Preliminary work by the authors made use of the so-called "Manhattan world" assumption about the scene statistics of city and indoor scenes. This assumption stated that such scenes were built on a cartesian grid which led to regularities in the image edge gradient statistics. In this paper we explore the general applicability of this assumption and show that, surprisingly, it holds in a large variety of less structured environments including rural scenes. This enables us, from a single image, to determine the orientation of the viewer relative to the scene structure and also to detect target objects which are not aligned with the grid. These inferences are performed using a Bayesian model with probability distributions (e.g. on the image gradient statistics) learnt from real data. 1 Introduction In recent years, there has been growing interest in the statistics of natural images (see Huang and Mumford [4] for a recent review). Our focus, however, is on the discovery of scene statistics which are useful for solving visual inference problems. For example, in related work [5] we have analyzed the statistics of filter responses on and off edges and hence derived effective edge detectors. In this paper we present results on statistical regularities of the image gradient responses as a function of the global scene structure. This builds on preliminary work [2] on city and indoor scenes. This work observed that such scenes are based on a cartesian coordinate system which puts (probabilistic) constraints on the image gradient statistics. 
Our current work shows that this so-called "Manhattan world" assumption about the scene statistics applies far more generally than to urban scenes. Many rural scenes contain sufficient structure in the distribution of edges to provide a natural cartesian reference frame for the viewer. The viewer's orientation relative to this frame can be determined by Bayesian inference. In addition, certain structures in the scene stand out by being unaligned to this natural reference frame. In our theory such structures appear as "outlier" edges, which makes it easier to detect them. Informal evidence that human observers use a form of the Manhattan world assumption is provided by the Ames room illusion, see figure (6), where observers appear to erroneously make this assumption, thereby grotesquely distorting the sizes of objects in the room. 2 Previous Work and Three-Dimensional Geometry Our preliminary work on city scenes was presented in [2]. There is related work in computer vision on the detection of vanishing points in 3-d scenes [1], [6] (which proceeds through the stages of edge detection, grouping by Hough transforms, and finally the estimation of the geometry). We refer the reader to [3] for details on the geometry of the Manhattan world and report only the main results here. Briefly, we calculate expressions for the orientations of x, y, z lines imaged under perspective projection in terms of the orientation of the camera relative to the x, y, z axes. The camera orientation relative to the xyz axis system may be specified by three Euler angles: the azimuth (or compass angle) α, corresponding to rotation about the z axis; the elevation β above the xy plane; and the twist γ about the camera's line of sight. We use Ψ = (α, β, γ) to denote all three Euler angles of the camera orientation. Our previous work [2] assumed that the elevation and twist were both zero, which turned out to be invalid for many of the images presented in this paper. 
We can then compute the normal orientation of lines parallel to the x, y, z axes, measured in the image plane, as a function of film coordinates (u, v) and the camera orientation Ψ. We express the results in terms of orthogonal unit camera axes â, b̂ and ĉ, which are aligned to the body of the camera and are determined by Ψ. For x lines (see Figure 1, left panel) we have tan θ_x = -(u c_x + f a_x)/(v c_x + f b_x), where θ_x is the normal orientation of the x line at film coordinates (u, v) and f is the focal length of the camera. Similarly, tan θ_y = -(u c_y + f a_y)/(v c_y + f b_y) for y lines and tan θ_z = -(u c_z + f a_z)/(v c_z + f b_z) for z lines. In the next section we will see how to relate the normal orientation of an object boundary (such as x, y, z lines) at a point (u, v) to the magnitude and direction of the image gradient at that location.

Figure 1: (Left) Geometry of an x line projected onto the (u, v) image plane. θ is the normal orientation of the line in the image. (Right) Histogram of edge orientation error (displayed modulo 180°). Observe the strong peak at 0°, indicating that the image gradient direction at an edge is usually very close to the true normal orientation of the edge.

3 P_on and P_off: Characterizing Edges Statistically Since we do not know where the x, y, z lines are in the image, we have to infer their locations and orientations from image gradient information. This inference is done using a purely local statistical model of edges. A key element of our approach is that it allows the model to infer camera orientation without having to group pixels into x, y, z lines. Most grouping procedures rely on the use of binary edge maps, which often make premature decisions based on too little information. The poor quality of some of the images - underexposed and overexposed - makes edge detection particularly difficult, as does the fact that some of the images lack x, y, z lines that are long enough to group reliably. 
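The normal-orientation formula can be written down directly once the components of a world axis along the camera's unit axes are known. The function below illustrates the formula only; it does not reproduce the paper's Euler-angle machinery for computing those components from Ψ.

```python
import numpy as np

def normal_orientation(u, v, f, a, b, c):
    """Predicted normal orientation (radians) at film coordinates (u, v)
    of a line whose world-axis direction has components (a, b, c) along
    the camera's unit axes a-hat, b-hat, c-hat (focal length f):
        tan(theta) = -(u*c + f*a) / (v*c + f*b)
    """
    return np.arctan2(-(u * c + f * a), v * c + f * b)

# Example: a line directed along the camera's line of sight (a = b = 0,
# c = 1) has normal orientation atan2(-u, v), so its predicted image
# orientation rotates around the principal point, its vanishing point.
theta = normal_orientation(1.0, 1.0, f=100.0, a=0.0, b=0.0, c=1.0)
```

Evaluating this at every pixel for each of the three world axes gives the orientation predictions θ(Ψ, m, u) that the Bayesian model compares against measured gradient directions.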
Following work by Konishi et al [5], we determine probabilities P_on(E_u) and P_off(E_u) for the image gradient magnitude E_u at position u in the image, conditioned on whether we are on or off an edge. These distributions quantify the tendency for the image gradient to be high on object boundaries and low off them; see Figure 2. They were learned by Konishi et al from the Sowerby image database, which contains one hundred presegmented images.

Figure 2: P_off(y) (left) and P_on(y) (right), the empirical histograms of edge responses off and on edges, respectively. Here the response y = |∇I| is quantized to take 20 values and is shown on the horizontal axis. Note that the peak of P_off(y) occurs at a lower edge response than the peak of P_on(y).

We extend the work of Konishi et al by putting probability distributions on how accurately the image gradient direction estimates the true normal direction of the edge. These were learned for this dataset by measuring the true orientations of the edges and comparing them to those estimated from the image gradients. This gives us distributions on the magnitude and direction of the intensity gradient, P_on(E_u, φ_u | θ) and P_off(E_u, φ_u), where θ is the true normal orientation of the edge and φ_u is the gradient direction measured at point u = (u, v). We make a factorization assumption that P_on(E_u, φ_u | θ) = P_on(E_u) P_ang(φ_u - θ) and P_off(E_u, φ_u) = P_off(E_u) U(φ_u). P_ang(·) (with argument evaluated modulo 2π and normalized to 1 over the range 0 to 2π) is based on experimental data, see Figure 1 (right), and is peaked about 0 and π. In practice, we use a simple box-shaped function to model the distribution: P_ang(δθ) = (1 - ε)/(4τ) if δθ is within angle τ of 0 or π, and ε/(2π - 4τ) otherwise (i.e. the chance of an angular error greater than ±τ is ε). In our experiments ε = 0.1, and τ = 4° for indoors and 6° outdoors. By contrast, U(·) = 1/(2π) is the uniform distribution. 
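The box-shaped angular distribution P_ang can be sketched as follows. The defaults use the indoor values from the text (ε = 0.1, τ = 4°); folding 0 and π together reflects the 180° ambiguity of edge orientation.

```python
import numpy as np

def p_ang(dtheta, eps=0.1, tau=np.deg2rad(4)):
    """Box-shaped angular likelihood: probability mass (1 - eps) spread
    uniformly within +/- tau of 0 or pi, and mass eps spread uniformly
    over the remainder of [0, 2*pi)."""
    d = np.mod(dtheta, np.pi)                  # fold 0 and pi together
    near = np.minimum(d, np.pi - d) <= tau     # within tau of 0 or pi
    return np.where(near, (1 - eps) / (4 * tau), eps / (2 * np.pi - 4 * tau))

# Sharply peaked at aligned orientations, flat over the clutter region.
peak, tail = p_ang(0.0), p_ang(np.pi / 2)
```

The near region has total measure 4τ (τ on either side of both 0 and π), so the two plateau heights integrate to (1 - ε) and ε respectively, making the density properly normalized over [0, 2π).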
4 Bayesian Model We devised a Bayesian model which combines knowledge of the three-dimensional geometry of the Manhattan world with statistical knowledge of edges in images. The model assumes that, while the majority of pixels in the image convey no information about camera orientation, most of the pixels with high edge responses arise from the presence of x, y, z lines in the three-dimensional scene. An important feature of the Bayesian model is that it does not force us to decide prematurely which pixels are on and off an object boundary (or whether an on pixel is due to x, y, or z), but allows us to sum over all possible interpretations of each pixel. The image data (E_u, φ_u) at a single pixel u is explained by one of five models m_u: m_u = 1, 2, 3 mean the data is generated by an edge due to an x, y, z line, respectively, in the scene; m_u = 4 means the data is generated by an outlier edge (not due to an x, y, z line); and m_u = 5 means the pixel is off-edge. The prior probability P(m_u) of each of the edge models was estimated empirically to be 0.02, 0.02, 0.02, 0.04, 0.9 for m_u = 1, 2, ..., 5. Using the factorization assumption mentioned before, we assume the probability of the image data has two factors, one for the magnitude of the edge strength and another for the edge direction:

P(E_u, φ_u | m_u, Ψ, u) = P(E_u | m_u) P(φ_u | m_u, Ψ, u)    (1)

where P(E_u | m_u) equals P_off(E_u) if m_u = 5 or P_on(E_u) if m_u ≠ 5. Also, P(φ_u | m_u, Ψ, u) equals P_ang(φ_u - θ(Ψ, m_u, u)) if m_u = 1, 2, 3, or U(φ_u) if m_u = 4, 5. Here θ(Ψ, m_u, u) is the predicted normal orientation of lines, determined by the equations tan θ_x = -(u c_x + f a_x)/(v c_x + f b_x) for x lines, tan θ_y = -(u c_y + f a_y)/(v c_y + f b_y) for y lines, and tan θ_z = -(u c_z + f a_z)/(v c_z + f b_z) for z lines. In summary, the edge strength probability is modeled by P_on for models 1 through 4 and by P_off for model 5. 
For models 1, 2 and 3 the edge orientation is modeled by a distribution which is peaked about the appropriate orientation of an x, y, z line predicted by the camera orientation at pixel location u; for models 4 and 5 the edge orientation is assumed to be uniformly distributed from 0 through 2π. Rather than decide on a particular model at each pixel, we marginalize over all five possible models (i.e. creating a mixture model):

P(E_u, φ_u | Ψ, u) = Σ_{m_u=1}^{5} P(E_u, φ_u | m_u, Ψ, u) P(m_u)    (2)

Now, to combine evidence over all pixels in the image, denoted by {E_u, φ_u}, we assume that the image data is conditionally independent across all pixels, given the camera orientation Ψ:

P({E_u, φ_u} | Ψ) = Π_u P(E_u, φ_u | Ψ, u)    (3)

(Although the conditional independence assumption neglects the coupling of gradients at neighboring pixels, it is a useful approximation that makes the model computationally tractable.) Thus the posterior distribution on the camera orientation is given by Π_u P(E_u, φ_u | Ψ, u) P(Ψ)/Z, where Z is a normalization factor and P(Ψ) is a uniform prior on the camera orientation. To find the MAP (maximum a posteriori) estimate, our algorithm maximizes the log posterior term

log[P({E_u, φ_u} | Ψ) P(Ψ)] = log P(Ψ) + Σ_u log[Σ_{m_u} P(E_u, φ_u | m_u, Ψ, u) P(m_u)]

numerically by searching over a quantized set of compass directions Ψ in a certain range. For details on this procedure, as well as coarse-to-fine techniques for speeding up the search, see [3]. 5 Experimental Results This section presents results on the domains for which the viewer orientation relative to the scene can be detected using the Manhattan world assumption. In particular, we demonstrate results for: (I) indoor and outdoor scenes (as reported in [2]), (II) rural English road scenes, (III) rural English fields, (IV) a painting of the French countryside, (V) a field of broccoli in the American mid-west, (VI) the Ames room, and (VII) ruins of the Parthenon (in Athens). 
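The per-pixel marginalization over models and the grid search over quantized compass directions can be sketched in a stripped-down, orientation-only form. The two-direction grid, the three-model mixture, and all numbers below are our simplifications for illustration, not the paper's five-model likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

TAU, EPS = np.deg2rad(6), 0.1
PRIORS = np.array([0.45, 0.45, 0.10])   # grid dir 1, grid dir 2, outlier

def p_ang(d):
    """Box-shaped angular likelihood, peaked within TAU of 0 or pi."""
    d = np.mod(d, np.pi)
    near = np.minimum(d, np.pi - d) <= TAU
    return np.where(near, (1 - EPS) / (4 * TAU), EPS / (2 * np.pi - 4 * TAU))

def log_posterior(psi, phis):
    """Mixture over models at each pixel, then sum of logs over pixels."""
    like = (PRIORS[0] * p_ang(phis - psi)
            + PRIORS[1] * p_ang(phis - psi - np.pi / 2)
            + PRIORS[2] / (2 * np.pi))      # outliers: uniform direction
    return np.log(like).sum()

# Synthetic edge directions: 90% aligned with a grid at 25 degrees
# (plus a little noise), 10% uniform clutter.
true_psi, n = np.deg2rad(25), 500
aligned = rng.random(n) < 0.9
phis = np.where(
    aligned,
    true_psi + rng.integers(0, 2, n) * np.pi / 2 + rng.normal(0, TAU / 3, n),
    rng.uniform(0, 2 * np.pi, n))

# Grid search over quantized compass directions, as in the paper.
grid = np.deg2rad(np.arange(0.0, 90.0, 1.0))
psi_hat = grid[int(np.argmax([log_posterior(p, phis) for p in grid]))]
```

Because no pixel is ever hard-assigned to a model, a minority of clutter edges only slightly flattens the posterior instead of corrupting the estimate.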
The results show strong success for inference using the Manhattan world assumption, even for domains in which it might seem unlikely to apply. (Some examples of failure are given in [3]; for example, a helicopter in a hilly scene where the algorithm mistakenly interprets the hill silhouettes as horizontal lines.) The first set of images were of city and indoor scenes in San Francisco, with images taken by the second author [2]. We include four typical results, see Figure 3, for comparison with the results on other domains.

Figure 3: Estimates of the camera orientation obtained by our algorithm for two indoor scenes (left) and two outdoor scenes (right). The estimated orientations of the x, y lines, derived for the estimated camera orientation Ψ, are indicated by the black line segments drawn on the input image. (The z line orientations have been omitted for clarity.) At each point on a subgrid two such segments are drawn, one for x and one for y. In the image on the far left, observe how the x directions align with the wall on the right hand side and with features parallel to this wall. The y lines align with the wall on the left (and objects parallel to it).

We now extend this work to less structured scenes in the English countryside. Figure 4 shows two images of roads in rural scenes and two fields. These images come from the Sowerby database. The next three images were either downloaded from the web or digitized (the painting). These are the mid-west broccoli field, the Parthenon ruins, and the painting of the French countryside. 6 Detecting Objects in Manhattan world We now consider applying the Manhattan assumption to the alternative problem of detecting target objects in background clutter. To perform such a task effectively requires modelling the properties of the background clutter in addition to those of the target object. 
It has recently been appreciated that good statistical modelling of the image background can improve the performance of target recognition [7]. The Manhattan world assumption gives an alternative way of probabilistically modelling background clutter. The background clutter will correspond to the regular structure of buildings and roads, and its edges will be aligned to the Manhattan grid. The target object, however, is assumed to be unaligned (at least in part) to this grid. Therefore many of the edges of the target object will be assigned to model 4 by the algorithm. (Note the algorithm first finds the MAP estimate Ψ* of the compass orientation, see section 4, and then estimates the model at each pixel u by maximizing P(m_u | E_u, φ_u, Ψ*, u) over m_u.) This enables us to significantly simplify the detection task by removing all edges in the images except those assigned to model 4.

Figure 4: Results on rural images in England without strong Manhattan structure. Same conventions as before. Two images of roads in the countryside (left panels) and two images of fields (right panels).

Figure 5: Results on an American mid-west broccoli field, the ruins of the Parthenon, and a digitized painting of the French countryside.

The Ames room, see figure (6), is a geometrically distorted room which is constructed so as to give the false impression that it is built on a cartesian coordinate frame when viewed from a special viewpoint. Human observers assume that the room is indeed cartesian despite all other visual cues to the contrary. This distorts the apparent size of objects so that, for example, humans in different parts of the room appear to have very different sizes. In fact, a human walking across the room will appear to change size dramatically. Our algorithm, like human observers, interprets the room as being cartesian and helps identify the humans in the room as outlier edges which are unaligned to the cartesian reference system. 
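The per-pixel MAP model assignment used to flag outlier edges can be sketched in the same stripped-down form (three models instead of the paper's five; the priors and parameters below are illustrative, not the paper's values).

```python
import numpy as np

TAU, EPS = np.deg2rad(6), 0.1
PRIORS = np.array([0.45, 0.45, 0.10])   # grid dir 1, grid dir 2, outlier

def p_ang(d):
    """Box-shaped angular likelihood, peaked within TAU of 0 or pi."""
    d = np.mod(d, np.pi)
    near = np.minimum(d, np.pi - d) <= TAU
    return np.where(near, (1 - EPS) / (4 * TAU), EPS / (2 * np.pi - 4 * TAU))

def map_model(phi, psi_star):
    """Posterior-maximizing model index for one edge pixel, given the
    previously estimated compass orientation psi_star:
    0/1 = aligned with the two grid directions, 2 = outlier (unaligned)."""
    like = np.array([p_ang(phi - psi_star),
                     p_ang(phi - psi_star - np.pi / 2),
                     1.0 / (2 * np.pi)])
    return int(np.argmax(PRIORS * like))

# An edge at 45 degrees to a grid at psi* = 0 is flagged as an outlier:
m = map_model(np.deg2rad(45), 0.0)   # → 2
```

Keeping only the pixels assigned to the outlier model is exactly the "remove all edges except model 4" step used to make unaligned targets, such as the people in the Ames room, stand out.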
7 Summary and Conclusions We have demonstrated that the Manhattan world assumption applies to a range of images, rural and otherwise, in addition to urban scenes. We demonstrated a Bayesian model which used this assumption to infer the orientation of the viewer relative to this reference frame and which could also detect outlier edges which are unaligned to the reference frame. A key element of this approach is the use of image gradient statistics, learned from image datasets, which quantify the distribution of the image gradient magnitude and direction on and off object boundaries. We expect that there are many further image regularities of this type which can be used for building effective artificial vision systems and which are possibly made use of by biological vision systems. Figure 6: Detecting people in Manhattan world. The left images (top and bottom) show the estimated scene structure. The right images show that people stand out as residual edges which are unaligned to the Manhattan grid. The Ames room (top panel) violates the Manhattan assumption but human observers, and our algorithm, interpret it as if it satisfied the assumptions. In fact, despite appearances, the two people in the Ames room are really the same size. Acknowledgments We want to acknowledge funding from NSF with award number IRI-9700446, support from the Smith-Kettlewell core grant, and from the Center for Imaging Sciences with Army grant ARO DAAH049510494. This work was also supported by the National Institute of Health (NEI) with grant number R01-EY 12691-01. It is a pleasure to acknowledge email conversations with Song Chun Zhu about scene clutter. We gratefully acknowledge the use of the Sowerby image dataset from Sowerby Research Centre, British Aerospace. References [1] B. Brillault-O'Mahony. "New Method for Vanishing Point Detection". Computer Vision, Graphics, and Image Processing. 54(2). pp 289-300. 1991. [2] J. Coughlan and A.L. Yuille. 
"Manhattan World: Compass Direction from a Single Image by Bayesian Inference". Proceedings International Conference on Computer Vision ICCV'99. Corfu, Greece. 1999. [3] J. Coughlan and A.L. Yuille. "Manhattan World: Orientation and Outlier Detection by Bayesian Inference." Submitted to International Journal of Computer Vision. 2000. [4] J. Huang and D. Mumford. "Statistics of Natural Images and Models". In Proceedings Computer Vision and Pattern Recognition CVPR'99. Fort Collins, Colorado. 1999. [5] S. Konishi, A. L. Yuille, J. M. Coughlan, and S. C. Zhu. "Fundamental Bounds on Edge Detection: An Information Theoretic Evaluation of Different Edge Cues." Proc. Int'l Conf. on Computer Vision and Pattern Recognition, 1999. [6] E. Lutton, H. Maitre, and J. Lopez-Krahe. "Contribution to the determination of vanishing points using Hough transform". IEEE Trans. on Pattern Analysis and Machine Intelligence. 16(4). pp 430-438. 1994. [7] S. C. Zhu, A. Lanterman, and M. I. Miller. "Clutter Modeling and Performance Analysis in Automatic Target Recognition". In Proceedings Workshop on Detection and Classification of Difficult Targets. Redstone Arsenal, Alabama. 1998.
2000
Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics Thomas Natschläger & Wolfgang Maass Institute for Theoretical Computer Science Technische Universität Graz, Austria {tnatschl,maass}@igi.tu-graz.ac.at Eduardo D. Sontag Dept. of Mathematics Rutgers University New Brunswick, NJ 08903, USA sontag@hilbert.rutgers.edu Anthony Zador Cold Spring Harbor Laboratory 1 Bungtown Rd Cold Spring Harbor, NY 11724 zador@cshl.org Abstract Experimental data show that biological synapses behave quite differently from the symbolic synapses in common artificial neural network models. Biological synapses are dynamic, i.e., their "weight" changes on a short time scale by several hundred percent, depending on the past input to the synapse. In this article we explore the consequences that these synaptic dynamics entail for the computational power of feedforward neural networks. We show that gradient descent suffices to approximate a given (quadratic) filter by a rather small neural system with dynamic synapses. We also compare our network model to artificial neural networks designed for time series processing. Our numerical results are complemented by theoretical analysis which shows that even with just a single hidden layer such networks can approximate a surprisingly large class of nonlinear filters: all filters that can be characterized by Volterra series. This result is robust with regard to various changes in the model for synaptic dynamics. 1 Introduction More than two decades of research on artificial neural networks has emphasized the central role of synapses in neural computation. In a conventional artificial neural network, all units ("neurons") are assumed to be identical, so that the computation is completely specified by the synaptic "weights," i.e., by the strengths of the connections between the units. 
Synapses in common artificial neural network models are static: the value W_ij of a synaptic weight is assumed to change only during "learning". In contrast to that, the "weight" w_ij(t) of a biological synapse at time t is known to be strongly dependent on the inputs x_j(t - τ) that this synapse has received from the presynaptic neuron j at previous time steps t - τ, see e.g. [1]. We will focus in this article on mean-field models for populations of neurons connected by dynamic synapses. Figure 1: A dynamic synapse can produce quite different outputs for the same input. The response of a single synapse to a step increase in input activity applied at time step 0 is compared for three different parameter settings (pure depression, pure facilitation, and combined facilitation and depression). Several models for single synapses have been proposed for the dynamic changes in synaptic efficacy. In [2] the model of [3] is extended to populations of neurons, where the current synaptic efficacy w_ij(t) between a population j and a population i at time t is modeled as a product of a facilitation term f_ij(t) and a depression term d_ij(t), scaled by the factor W_ij. We consider a time-discrete version of this model defined as follows:

w_ij(t) = W_ij · f̂_ij(t) · d_ij(t)   (1)
f_ij(t + 1) = f_ij(t) - f_ij(t)/F_ij + U_ij · (1 - f_ij(t)) · x_j(t)   (2)
d_ij(t + 1) = d_ij(t) + (1 - d_ij(t))/D_ij - f̂_ij(t) · d_ij(t) · x_j(t)   (3)
f̂_ij(t) = f_ij(t) · (1 - U_ij) + U_ij   (4)

with d_ij(0) = 1 and f_ij(0) = 0. Equation (2) models facilitation (with time constant F_ij), whereas equation (3) models the combined effects of synaptic depression (with time constant D_ij) and facilitation. Depending on the values of the characteristic parameters U_ij, D_ij, F_ij, a synaptic connection (ij) maps an input function x_j(t) into the corresponding time-varying synaptic output w_ij(t) · x_j(t). The same input x_j(t) can yield markedly different outputs w_ij(t) · 
x_j(t) for different values of the characteristic parameters U_ij, D_ij, F_ij. Fig. 1 compares the output for three different sets of values for the parameters U_ij, D_ij, F_ij. These examples illustrate just three of the range of input-output behaviors that a single synapse can achieve. In this article we will consider feedforward networks coupled by dynamic synapses. One should think of the computational units in such a network as populations of spiking neurons. We refer to such networks as "dynamic networks", see Fig. 2 for details. Figure 2: The dynamic network model. The output x_i(t) of the i-th unit is given by x_i(t) = σ(Σ_j w_ij(t) · x_j(t)), where σ is either the sigmoid function σ(u) = 1/(1 + exp(-u)) (in the hidden layers) or just the identity function σ(u) = u (in the output layer), and w_ij(t) is modeled according to Eqs. (1) to (4). In Sections 2 and 3 we demonstrate (by employing gradient descent to find appropriate values for the parameters U_ij, D_ij, F_ij and W_ij) that even small dynamic networks can compute complex quadratic filters. In Section 4 we address the question which synaptic parameters are important for a dynamic network to learn a given filter. In Section 5 we give a precise mathematical characterization of the computational power of such dynamic networks. 2 Learning Arbitrary Quadratic Filters by Dynamic Networks In order to analyze which filters can be approximated by small dynamic networks we investigate the task of learning a quadratic filter Q randomly chosen from a class Q_m. The class Q_m consists of all quadratic filters Q whose output (Qx)(t) in response to the input time series x(t) is defined by some symmetric m × m matrix H_Q = [h_kl] of filter coefficients h_kl ∈ ℝ, k = 1, ..., m, l = 1, ..., m, through the equation (Qx)(t) = Σ_{k=1}^{m} Σ_{l=1}^{m} h_kl · x(t - k) · x(t - l). An example of the input and output for one choice of quadratic parameters (m = 10) is shown in Figs. 3B and 3C, respectively. 
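The synapse model of equations (1)-(4) above is easy to simulate directly. The sketch below does so for a step input as in Fig. 1; the particular parameter values are illustrative assumptions (the paper does not publish code), chosen only to reproduce the qualitative depressing and facilitating behaviors.

```python
def dynamic_synapse(x, W, U, D, F):
    """Simulate one dynamic synapse, eqs. (1)-(4).

    x: input sequence x_j(t); W: static scale factor W_ij;
    U, D, F: characteristic parameters U_ij, D_ij, F_ij.
    Returns the sequence of efficacies w_ij(t).
    """
    f, d = 0.0, 1.0                                   # f_ij(0) = 0, d_ij(0) = 1
    w = []
    for x_t in x:
        f_hat = f * (1.0 - U) + U                     # eq. (4)
        w.append(W * f_hat * d)                       # eq. (1)
        f, d = (f - f / F + U * (1.0 - f) * x_t,      # eq. (2)
                d + (1.0 - d) / D - f_hat * d * x_t)  # eq. (3)
    return w

# Step input as in Fig. 1 (parameter settings are hypothetical examples):
step = [1.0] * 100
depressing = dynamic_synapse(step, W=1.0, U=0.9, D=10.0, F=2.0)
facilitating = dynamic_synapse(step, W=1.0, U=0.05, D=1e6, F=50.0)
```

With the depressing parameters the efficacy decays under the step input, while with the facilitating parameters it initially grows, mirroring the contrast shown in Fig. 1.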
We view such a filter Q as an example of the kinds of complex transformations that are important to an organism's survival, such as those required for motor control and the processing of time-varying sensory inputs. For example, the spectrotemporal receptive field of a neuron in the auditory cortex [4] reflects some complex transformation of sound pressure to neuronal activity. The real transformations actually required may be very complex, but the simple filter Q provides a useful starting point for assessing the capacity of this architecture to transform one time-varying signal into another. Can a network of units coupled by dynamic synapses implement the filter Q? We tested the approximation capabilities of a rather small dynamic network with just 10 hidden units (5 excitatory and 5 inhibitory ones), and one output (Fig. 3A). The dynamics of inhibitory synapses is described by the same model as that for excitatory synapses. For any particular temporal pattern applied at the input and any particular choice of the synaptic parameters, this network generates a temporal pattern as output. This output can be thought of, for example, as the activity of a particular population of neurons in the cortex, and the target function as the time series generated for the same input by some unknown quadratic filter Q. The synaptic parameters W_ij, D_ij, F_ij and U_ij are chosen so that, for each input in the training set, the network minimizes the mean-square error E[z, z_Q] = (1/s) · Σ_{t=0}^{s-1} (z(t) - z_Q(t))² between its output z(t) and the desired output z_Q(t) specified by the filter Q. To achieve this minimization, we used a conjugate gradient algorithm.¹ The training inputs were random signals, an example of which is shown in Fig. 3B. The test inputs were drawn from the same random distribution as the training inputs, but were not actually used during training. 
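The target filters Q ∈ Q_m can be evaluated directly from the defining equation above. A minimal sketch (using zero-based Python indexing, so (Qx)(t) is defined only for t ≥ m):

```python
def quadratic_filter(H):
    """Return a function computing (Qx)(t) = sum_{k,l=1}^m h_kl x(t-k) x(t-l)
    for a symmetric m x m coefficient matrix H (a list of lists)."""
    m = len(H)

    def Q(x, t):
        assert t >= m, "need m past samples"
        return sum(H[k][l] * x[t - (k + 1)] * x[t - (l + 1)]
                   for k in range(m) for l in range(m))

    return Q
```

For example, with H the 2 x 2 identity the filter output at time t is simply x(t-1)² + x(t-2)².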
This test of generalization ensured that the observed performance represented more than simple "memorization" of the training set. Fig. 3C compares the network performance before and after training. Prior to training, the output is nearly flat, while after training the network output tracks the filter output closely (E[z, z_Q] = 0.0032). Fig. 3D shows the performance after training for different randomly chosen quadratic filters Q ∈ Q_m for m = 4, ..., 16. Even for larger values of m the relatively small network with 10 hidden units performs rather well. Note that a quadratic filter of dimension m has m(m + 1)/2 free parameters, whereas the dynamic network has a constant number of 80 adjustable parameters. This shows clearly that dynamic synapses enable a small network to mimic a wide range of possible quadratic target filters. ¹ In order to apply such a conjugate gradient algorithm one has to calculate the partial derivatives ∂E[z, z_Q]/∂U_ij, ∂E[z, z_Q]/∂D_ij, ∂E[z, z_Q]/∂F_ij and ∂E[z, z_Q]/∂W_ij for all synapses (ij) in the network. For more details about conjugate gradient algorithms see e.g. [5]. Figure 3: A network with units coupled by dynamic synapses can approximate randomly drawn quadratic filters. A Network architecture. The network had one input unit, 10 hidden units (5 excitatory, 5 inhibitory), and one output unit, see Fig. 2 for details. B One of the input patterns used in the training ensemble. For clarity, only a portion of the actual input is shown. C Output of the network prior to training, with random initialization of the parameters, and the output of the dynamic network after learning. The target was the output of a quadratic filter Q ∈ Q_10. 
The filter coefficients h_kl (1 ≤ k, l ≤ 10) were generated randomly by subtracting μ/2 from a random number generated from an exponential distribution with mean μ = 3. D Performance after network training. For different sizes of H_Q (H_Q is a symmetric m × m matrix) we plotted the average performance (mse measured on a test set) over 20 different filters Q, i.e. 20 randomly generated matrices H_Q. 3 Comparison with the model of Back and Tsoi Our dynamic network model is not the first to incorporate temporal dynamics via dynamic synapses. Perhaps the earliest suggestion for a role for synaptic dynamics in network computation was by [7]. More recently, a number of networks have been proposed in which synapses implemented linear filters; in particular [6]. To assess the performance of our network model in relation to the model proposed in [6] we have analyzed the performance of our dynamic network model for the same system identification task that was employed as benchmark task in [6]. The goal of this task is to learn a filter F with (Fx)(t) = sin(u(t)), where u(t) is the output of a linear filter applied to the input time series x(t).² The result is summarized in Fig. 4. It can clearly be seen that our network model (see Fig. 3A for the network architecture) is able to learn this particular filter. The mean square error (mse) on the test data is 0.0010, which is slightly smaller than the mse of 0.0013 reported in [6]. Note that the network Back and Tsoi used to learn the task had 130 adjustable parameters (13 parameters per IIR synapse, 10 hidden units) whereas our network model had only 80 adjustable parameters (all parameters U_ij, F_ij, D_ij and W_ij were adjusted during learning). ² u(t) is the solution to the difference equation u(t) - 1.99·u(t-1) + 1.572·u(t-2) - 0.4583·u(t-3) = 0.0154·x(t) + 0.0462·x(t-1) + 0.0462·x(t-2) + 0.0154·x(t-3). Hence, u(t) is the output of a linear filter applied to the input x(t). 
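The benchmark target of footnote 2 can be generated directly from the difference equation. The sketch below assumes zero initial conditions for u and x, which the paper does not specify; it is a reference implementation of the target signal, not of the learning networks.

```python
import math

def back_tsoi_target(x):
    """Generate (Fx)(t) = sin(u(t)), with u(t) given by the difference
    equation of footnote 2 (zero initial conditions are an assumption)."""
    u = [0.0, 0.0, 0.0]              # u(-3), u(-2), u(-1) assumed zero
    xs = [0.0, 0.0, 0.0] + list(x)   # x(-3), x(-2), x(-1) assumed zero
    out = []
    for t in range(len(x)):
        u_t = (1.99 * u[-1] - 1.572 * u[-2] + 0.4583 * u[-3]
               + 0.0154 * xs[t + 3] + 0.0462 * xs[t + 2]
               + 0.0462 * xs[t + 1] + 0.0154 * xs[t])
        u.append(u_t)
        out.append(math.sin(u_t))
    return out
```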
Figure 4: Performance of our model on the system identification task used in [6]. The network architecture is the same as in Fig. 3. A One of the input patterns used in the training ensemble. B Output of the network after learning and the target. C Comparison of the mean square error (in units of 10⁻³) achieved on test data by the model of Back and Tsoi (BT) and by the dynamic network (DN). D Comparison of the number of adjustable parameters. The network model of Back and Tsoi (BT) utilizes slightly more adjustable parameters than the dynamic network (DN). Figure 5: Impact of different synaptic parameters on the learning capabilities of a dynamic network. The size of a square (the "impact") is proportional to the inverse of the mean squared error averaged over N trials. A In each trial (N = 100) a different quadratic filter matrix H_Q (m = 6) was randomly generated as described in Fig. 3. Along the diagonal one can see the impact of a single parameter (W, U, D, F), whereas the off-diagonal elements (which are symmetric) represent the impact of changing pairs of parameters. B The impact of subsets of size three is shown (w/o F, w/o D, w/o U, w/o W), where the labels indicate which parameter is not included. C Same interpretation as for panel A but the results shown (N = 20) are for the filter used in [6]. D Same interpretation as for panel B but the results shown (N = 20) are for the same filter as in panel C. 
This shows that a very simple feedforward network with biologically realistic synaptic dynamics yields performance comparable to that of artificial networks that were previously designed to yield good performance in the time series domain without any claims of biological realism. 4 Which Parameters Matter? It remains an open experimental question which synaptic parameters are subject to use-dependent plasticity, and under what conditions. For example, long term potentiation appears to change synaptic dynamics between pairs of layer 5 cortical neurons [8] but not in the hippocampus [9]. We therefore wondered whether plasticity in the synaptic dynamics is essential for a dynamic network to be able to learn a particular target filter. To address this question, we compared network performance when different parameter subsets were optimized using the conjugate gradient algorithm, while the other parameters were held fixed. In all experiments, the fixed parameters were chosen to ensure heterogeneity in presynaptic dynamics. Fig. 5 shows that changing only the postsynaptic parameter W has comparable impact to changing only the presynaptic parameters U or D, whereas changing only F has little impact on the dynamics of these networks (see diagonal of Fig. 5A and Fig. 5C). However, to achieve good performance one has to change at least two different types of parameters such as {W, U} or {W, D} (all other pairs yield worse performance). Hence, neither plasticity in the presynaptic dynamics (U, D, F) alone nor plasticity of the postsynaptic efficacy (W) alone was sufficient to achieve good performance in this model. 5 A Universal Approximation Theorem for Dynamic Networks In the preceding sections we have presented empirical evidence for the approximation capabilities of our dynamic network model for computations in the time series domain. This raises the question of what the theoretical limits of their approximation capabilities are. 
The rigorous theoretical result presented in this section shows that basically there are no significant a priori limits. Furthermore, in spite of the rather complicated system of equations that defines dynamic networks, one can give a precise mathematical characterization of the class of filters that can be approximated by them. This characterization involves the following basic concepts. An arbitrary filter F is called time invariant if a shift of the input functions by a constant t_0 just causes a shift of the output function by the same constant t_0. Another essential property of filters is fading memory. A filter F has fading memory if and only if the value of (Fx)(0) can be approximated arbitrarily closely by the value of (Fx̃)(0) for functions x̃ that approximate the functions x for sufficiently long bounded intervals [-T, 0]. Interesting examples of linear and nonlinear time invariant filters with fading memory can be generated with the help of representations of the form (Fx)(t) = ∫₀^∞ ... ∫₀^∞ x(t - τ₁) · ... · x(t - τ_k) · h(τ₁, ..., τ_k) dτ₁ ... dτ_k for measurable and essentially bounded functions x : ℝ → ℝ (with h ∈ L¹). One refers to such an integral as a Volterra term of order k. Note that for k = 1 it yields the usual representation for a linear time invariant filter. The class of filters that can be represented by Volterra series, i.e., by finite or infinite sums of Volterra terms of arbitrary order, has been investigated for quite some time in neurobiology and engineering. Theorem 1 Assume that X is the class of functions from ℝ into [B₀, B₁] which satisfy |x(t) - x(s)| ≤ B₂ · |t - s| for all t, s ∈ ℝ, where B₀, B₁, B₂ are arbitrary real-valued constants with 0 < B₀ < B₁ and 0 < B₂. Let F be an arbitrary filter that maps vectors of functions x = (x₁, ..., x_n) ∈ Xⁿ into functions from ℝ into ℝ. Then the following are equivalent: (a) F can be approximated by dynamic networks N defined in Fig. 
2 (i.e., for any ε > 0 there exists such a network N such that |(Fx)(t) - (Nx)(t)| < ε for all x ∈ Xⁿ and all t ∈ ℝ) (b) F can be approximated by dynamic networks (see Fig. 2) with just a single layer of sigmoidal neurons (c) F is time invariant and has fading memory (d) F can be approximated by a sequence of (finite or infinite) Volterra series. The proof of Theorem 1 relies on the Stone-Weierstrass Theorem, and is contained as the proof of Theorem 3.4 in [10]. The universal approximation result contained in Theorem 1 turns out to be rather robust with regard to changes in the definition of a dynamic network. Dynamic networks with just one layer of dynamic synapses and one subsequent layer of sigmoidal gates can approximate the same class of filters as dynamic networks with an arbitrary number of layers of dynamic synapses and sigmoidal neurons. It can also be shown that Theorem 1 remains valid if one considers networks which have depressing synapses only or if one uses the model for synaptic dynamics proposed in [1]. 6 Discussion Our central hypothesis is that rapid changes in synaptic strength, mediated by mechanisms such as facilitation and depression, are an integral part of neural processing. We have analyzed the computational power of such dynamic networks, which represent a new paradigm for neural computation on time series that is based on biologically realistic models for synaptic dynamics [11]. Our analytical results show that the class of nonlinear filters that can be approximated by dynamic networks, even with just a single hidden layer of sigmoidal neurons, is remarkably rich. It contains every time invariant filter with fading memory, hence arguably every filter that is potentially useful for a biological organism. 
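The two properties in condition (c) can be illustrated numerically on a toy filter. The exponential moving average below is time invariant and has fading memory (it is in fact a first-order Volterra term with kernel h(τ) proportional to λ^τ); this is only an illustration of the two properties, not of the dynamic networks themselves, and the decay λ is an arbitrary choice.

```python
def ema(x, lam=0.5):
    """Exponential moving average y(t) = (1-lam) * sum_{T>=0} lam**T * x(t-T),
    computed recursively with zero initial state."""
    y, s = [], 0.0
    for x_t in x:
        s = lam * s + (1.0 - lam) * x_t
        y.append(s)
    return y

# Time invariance: delaying the input delays the output by the same amount.
# Fading memory: two inputs that agree on a long recent window produce
# nearly identical outputs, regardless of their distant past.
```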
The computer simulations we performed show that rather small dynamic networks are not only able to perform interesting computations on time series, but their performance is comparable to that of previously considered artificial neural networks that were designed for the purpose of yielding efficient processing of temporal signals. We have tested dynamic networks on tasks such as the learning of a randomly chosen quadratic filter, as well as on the learning task used in [6], to illustrate the potential of this architecture. References [1] J. A. Varela, K. Sen, J. Gibson, J. Fost, L. F. Abbott, and S. B. Nelson. A quantitative description of short-term plasticity at excitatory synapses in layer 2/3 of rat primary visual cortex. J. Neurosci., 17:220-4, 1997. [2] M. Y. Tsodyks, K. Pawelzik, and H. Markram. Neural networks with dynamic synapses. Neural Computation, 10:821-835, 1998. [3] H. Markram, Y. Wang, and M. Tsodyks. Differential signaling via the same axon of neocortical pyramidal neurons. PNAS, 95:5323-5328, 1998. [4] R. C. deCharms and M. M. Merzenich. Optimizing sound features for cortical neurons. Science, 280:1439-43, 1998. [5] John Hertz, Anders Krogh, and Richard Palmer. Introduction to the Theory of Neural Computation. Addison-Wesley, 1991. [6] A. D. Back and A. C. Tsoi. A simplified gradient algorithm for IIR synapse multilayer perceptrons. Neural Computation, 5:456-462, 1993. [7] W. A. Little and G. L. Shaw. A statistical theory of short and long term memory. Behavioural Biology, 14:115-33, 1975. [8] H. Markram and M. Tsodyks. Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature, 382:807-10, 1996. [9] D. K. Selig, R. A. Nicoll, and R. C. Malenka. Hippocampal long-term potentiation preserves the fidelity of postsynaptic responses to presynaptic bursts. J. Neurosci., 19:1236-46, 1999. [10] W. Maass and E. D. Sontag. Neural systems as nonlinear filters. Neural Computation, 12(8):1743-1772, 2000. [11] A. M. Zador. 
The basic unit of computation. Nature Neuroscience, 3(Supp):1167, 2000.
2000
Machine Learning for Video-Based Rendering Arno Schödl arno@schoedl.org Irfan Essa irfan@cc.gatech.edu Georgia Institute of Technology GVU Center / College of Computing Atlanta, GA 30332-0280, USA. Abstract We present techniques for rendering and animation of realistic scenes by analyzing and training on short video sequences. This work extends the new paradigm for computer animation, video textures, which uses recorded video to generate novel animations by replaying the video samples in a new order. Here we concentrate on video sprites, which are a special type of video texture. In video sprites, instead of storing whole images, the object of interest is separated from the background and the video samples are stored as a sequence of alpha-matted sprites with associated velocity information. They can be rendered anywhere on the screen to create a novel animation of the object. We present methods to create such animations by finding a sequence of sprite samples that is both visually smooth and follows a desired path. To estimate visual smoothness, we train a linear classifier to estimate visual similarity between video samples. If the motion path is known in advance, we use beam search to find a good sample sequence. We can specify the motion interactively by precomputing the sequence cost function using Q-learning. 1 Introduction Computer animation of realistic characters requires an explicitly defined model with control parameters. The animator defines keyframes for these parameters, which are interpolated to generate the animation. Both the model generation and the motion parameter adjustment are often manual, costly tasks. Recently, researchers in computer graphics and computer vision have proposed efficient methods to generate novel views by analyzing captured images. These techniques, called image-based rendering, require minimal user interaction and allow photorealistic synthesis of still scenes [3]. 
In [7] we introduced a new paradigm for image synthesis, which we call video textures. In that paper, we extended the paradigm of image-based rendering into video-based rendering, generating novel animations from video. Figure 1: An animation is created from reordered video sprite samples. Transitions between samples that are played out of the original order must be visually smooth. A video texture turns a finite duration video into a continuous, infinitely varying stream of images. We treat the video sequence as a collection of image samples, from which we automatically select suitable sequences to form the new animation. Instead of using the image as a whole, we can also record an object against a bluescreen and separate it from the background using background subtraction. We store the created opacity image (alpha channel) and the motion of the object for every sample. We can then render the object at arbitrary image locations to generate animations, as shown in Figure 1. We call this special type of video texture a video sprite. A complete description of the video textures paradigm and techniques to generate video textures is presented in [7]. In this paper, we address the controlled animation of video sprites. To generate video textures or video sprites, we have to optimize the sequence of samples so that the resulting animation looks continuous and smooth, even if the samples are not played in their original order. This optimization requires a visual similarity metric between sprite images, which has to be as close as possible to the human perception of similarity. The simple L2 image distance used in [7] gives poor results for our example video sprite, a fish swimming in a tank. In Section 2 we describe how to improve the similarity metric by training a classifier on manually labeled data [1]. Video sprites usually require some form of motion control. 
We present two techniques to control the sprite motion while preserving the visual smoothness of the sequence. In Section 3 we compute a good sequence of samples for a motion path scripted in advance. Since the number of possible sequences is too large to explore exhaustively, we use beam search to make the optimization manageable. For applications like computer games, we would like to control the motion of the sprite interactively. We achieve this goal using a technique similar to Q-learning, as described in Section 4. 1.1 Previous work Before the advent of 3D graphics, the idea of creating animations by sequencing 2D sprites showing different poses and actions was widely used in computer games. Almost all characters in fighting and jump-and-run games are animated in this fashion. Game artists had to generate all these animations manually. Figure 2: Relationship between image similarities and transitions. There is very little earlier work in research on automatically sequencing 2D views for animation. Video Rewrite [2] is the work most closely related to video textures. It creates lip motion for a new audio track from a training video of the subject speaking, by replaying short subsequences of the training video that best fit the sequence of phonemes. To our knowledge, nobody has automatically generated an object animation from video thus far. Of course, we are not the first applying learning techniques to animation. The NeuroAnimator [4], for example, uses a neural network to simulate a physics-based model. Neural networks have also been used to improve visual similarity classification [6]. 2 Training the similarity metric Video textures reorder the original video samples into a new sequence. If the sequence of samples is not the original order, we have to ensure that transitions between samples that are out of order are visually smooth. 
More precisely, in a transition from sample i to j, we substitute the successor of sample i by sample j and the predecessor of sample j by sample i. So sample i should be similar to sample j - 1, and sample i + 1 should be similar to sample j (Figure 2). The distance function D_ij between two samples i and j should be small if we can substitute one image for the other without a noticeable discontinuity or "jump". The simple L2 image distance used in [7] gives poor results for the fish sprite, because it fails to capture important information like the orientation of the fish. Instead of trying to code this information into our system, we train a linear classifier from manually labeled training data. The classifier is based on six features extracted from a sprite image pair: • difference in velocity magnitude, • difference in velocity direction, measured in angle, • sum of color L2 differences, weighted by the minimum of the two pixel alpha values, • sum of absolute differences in the alpha channel, • difference in average color, • difference in blob area, computed as the sum of all alpha values. The manual labels for a sprite pair are binary: visually acceptable or unacceptable. To create the labels, we guess a rough estimator and then manually correct the classification of this estimator. Since it is more important to avoid visual glitches than to exploit every possible transition, we penalize false-positives 10 times higher than false-negatives in our training. Figure 3: The components of the path cost function. All sprite pairs that the classifier rejected are no longer considered for transitions. If the pair of samples i and j is kept, we use the value of the linear classifying function as a measure for visual difference D_ij. The pairs i, j with i = j are treated just as any other pair, but of course they have minimal visual difference. The cost for a transition T_ij from sample i to sample j is then T_ij = (1/2)·D_{i,j-1} + (1/2)·D_{i+1,j}. 
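Given a pairwise visual distance matrix D (for example, the linear classifier's output), the transition costs follow directly from the formula above. A small sketch, using zero-based indices so that T_ij is defined for i < n-1 and j > 0:

```python
def transition_costs(D):
    """T_ij = 0.5*D[i][j-1] + 0.5*D[i+1][j]: the cost of cutting from
    sample i to sample j, given the pairwise visual distance matrix D."""
    n = len(D)
    return {(i, j): 0.5 * D[i][j - 1] + 0.5 * D[i + 1][j]
            for i in range(n - 1) for j in range(1, n)}
```

Note that continuing in the original order (j = i + 1) costs (1/2)·D_ii + (1/2)·D_{i+1,i+1}, which is zero whenever each sample has zero distance to itself, so playing the video unchanged is always free.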
3 Motion path scripting A common approach in animation is to specify all constraints before rendering the animation [8]. In this section we describe how to generate a good sequence of sprites from a specified motion path, given as a series of line segments. We specify a cost function for a given path, and starting at the beginning of the first segment, we explore the tree of possible transitions and find the path of least cost. 3.1 Sequence cost function The total cost function is a sum of per-frame costs. For every new sequence frame, in addition to the transition cost, as discussed in the previous section, we penalize any deviation from the defined path and movement direction. We only constrain the motion path, not the velocity magnitude or the motion timing, because the fewer constraints we impose, the better the chance of finding a smooth sequence using the limited number of available video samples. The path is composed of line segments and we keep track of the line segment that the sprite is currently expected to follow. We compute the error function only with respect to this line segment. As soon as the orthogonal projection of the sprite position onto the segment passes the end of the current segment, we switch to the next segment. This avoids the ambiguity of which line segment to follow when paths are self-intersecting. We define an animation sequence (i_1, p_1, l_1), (i_2, p_2, l_2), ..., (i_N, p_N, l_N), where i_k, 1 ≤ k ≤ N, is the sample shown in frame k, p_k is the position at which it is shown, and l_k is the line segment that it has to follow. Let d(p_k, l_k) be the distance from point p_k to line l_k, v(i_k) the estimated velocity of the sprite at sample i_k, and ∠(v(i_k), l_k) the angle between the velocity vector and the line segment. The cost function C for frame k of this sequence is then C(k) = T_{i_{k-1},i_k} + w_1 · ∠(v(i_k), l_k) + w_2 · d(p_k, l_k) (1), where w_1 and w_2 are user-defined weights that trade off visual smoothness against the motion constraints. 
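The geometric terms of the per-frame cost can be sketched as follows. The function names and weights are illustrative; clamping the projection to the segment is an assumption consistent with computing the error only with respect to the current segment.

```python
import math

def point_segment_distance(p, a, b):
    """d(p_k, l_k): distance from point p to the line segment from a to b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))                 # clamp to the segment
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def direction_angle(v, a, b):
    """Angle between the sprite velocity v and the segment direction."""
    seg = (b[0] - a[0], b[1] - a[1])
    dot = v[0] * seg[0] + v[1] * seg[1]
    norm = math.hypot(*v) * math.hypot(*seg)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def frame_cost(t_prev, p, v, a, b, w1=1.0, w2=1.0):
    """Per-frame cost in the spirit of equation (1): transition cost plus
    weighted direction and path penalties (weights are illustrative)."""
    return (t_prev + w1 * direction_angle(v, a, b)
            + w2 * point_segment_distance(p, a, b))
```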
3.2 Sequence tree search

We seed our search with all possible starting samples and set the sprite position to the starting position of the first line segment. For every sequence, we store the total cost up to the current end of the path, the current position of the sprite, the current sample, and the current line segment. Since from any given video sample there can be many possible transitions, and it is impossible to explore the whole tree, we employ beam search to prune the set of sequences after advancing the tree depth by one transition. At every depth we keep the 50000 sequences with least accumulated cost. When the sprite reaches the end of the last segment, the sequence with lowest total cost is chosen. Section 5 describes the running time of the algorithm.

4 Interactive motion control

For interactive applications like computer games, video sprites allow us to generate high-quality graphics without the computational burden of high-end modeling and rendering. In this section we show how to control video sprite motion interactively, without time-consuming optimization over a planned path. The following observation allows us to compute the path tree in a much more efficient manner: if w_2 in equation (1) is set to zero, the sprite does not adhere to a certain path but still moves in the desired general direction. If we assume the line segment is infinitely long, or in other words indicates only a general motion direction l, equation (1) is independent of the position p_k of the sprite and depends only on the sample that is currently shown. We now have to find the lowest-cost path through this set of states, a problem which is solved using Q-learning [5]: the cost F_ij for a path starting at sample i and transitioning to sample j is

F_ij = T_ij + w_1 ∠(v(j), l) + min_k F_jk.   (2)

In other words, the least possible cost, starting from sample i and going to sample j, is the cost of the transition from i to j plus the least possible cost of all paths starting from j.
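The beam-search pruning step described above can be sketched generically: expand every partial sequence by one transition, then keep only the cheapest candidates. The transition table and per-step cost function here are placeholders, and the paper's per-sequence bookkeeping (sprite position, current segment) is omitted for brevity.

```python
import heapq

def beam_search_step(sequences, transitions, step_cost, beam_width=50000):
    """Advance every partial sequence by one transition, then prune to the
    beam_width sequences with least accumulated cost.  `sequences` is a
    list of (cost, state) pairs; `transitions[state]` lists successor
    states; `step_cost(s, t)` is the per-frame cost of moving s -> t."""
    expanded = []
    for cost, state in sequences:
        for nxt in transitions.get(state, []):
            expanded.append((cost + step_cost(state, nxt), nxt))
    # prune: keep only the cheapest beam_width candidates, sorted by cost
    return heapq.nsmallest(beam_width, expanded)
```

Repeating this step until the sprite passes the end of the last segment, and then taking the cheapest surviving sequence, matches the search loop described in the text.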
Since this recursion is infinite, we have to introduce a decay term 0 <= α <= 1 (multiplying the recursive min term) to assure convergence. To solve equation (2), we initialize with F_ij = T_ij for all i and j and then iterate over the equation until convergence.

4.1 Interactive switching between cost functions

We described above how to compute a good path for a given motion direction l. To interactively control the sprite, we precompute F_ij for multiple motion directions, for example the eight compass directions. The user can then interactively specify the motion direction by choosing one of the precomputed cost functions. Unfortunately, each cost function is precomputed to be optimal only for a certain motion direction and does not take into account any switching between cost functions, which can cause discontinuous motion when the user changes direction. Note that switching to a motion path without any motion constraint (equation (2) with w_1 = 0) will never cause any additional discontinuities, because the smoothness constraint is the only one left. Thus, we solve our problem by precomputing a cost function that does not constrain the motion for a couple of transitions, and then starts to constrain the motion with the new motion direction. The response delay allows us to gracefully adjust to the new cost function. For every precomputed
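The decayed recursion can be solved by simple fixed-point iteration, as the text describes. A minimal sketch, folding any per-frame direction cost into the cost matrix T and treating alpha as the decay term; the function name and convergence tolerance are illustrative assumptions.

```python
import numpy as np

def precompute_path_costs(T, alpha=0.9, tol=1e-9, max_iter=10000):
    """Iterate F_ij = T_ij + alpha * min_k F_jk to convergence.
    T: (n, n) matrix of per-transition costs (np.inf for forbidden moves).
    Initialized with F = T, following the text."""
    F = T.copy()
    for _ in range(max_iter):
        # min over successors k of sample j, broadcast along columns
        F_new = T + alpha * F.min(axis=1)[None, :]
        if np.max(np.abs(F_new - F)) < tol:
            break
        F = F_new
    return F
```

Because alpha < 1 makes the update a contraction, the iteration converges regardless of the initial F; precomputing one such F matrix per motion direction is what enables the interactive control described next.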
Discovering Hidden Variables: A Structure-Based Approach

Gal Elidan, Noam Lotner, Nir Friedman
Hebrew University
{galel,noaml,nir}@cs.huji.ac.il

Daphne Koller
Stanford University
koller@cs.stanford.edu

Abstract

A serious problem in learning probabilistic models is the presence of hidden variables. These variables are not observed, yet interact with several of the observed variables. As such, they induce seemingly complex dependencies among the latter. In recent years, much attention has been devoted to the development of algorithms for learning parameters, and in some cases structure, in the presence of hidden variables. In this paper, we address the related problem of detecting hidden variables that interact with the observed variables. This problem is of interest both for improving our understanding of the domain and as a preliminary step that guides the learning procedure towards promising models. A very natural approach is to search for "structural signatures" of hidden variables: substructures in the learned network that tend to suggest the presence of a hidden variable. We make this basic idea concrete, and show how to integrate it with structure-search algorithms. We evaluate this method on several synthetic and real-life datasets, and show that it performs surprisingly well.

1 Introduction

In the last decade there has been a great deal of research focused on the problem of learning Bayesian networks (BNs) from data (e.g., [7]). An important issue is the existence of hidden variables that are never observed, yet interact with observed variables. Naively, one might think that, if a variable is never observed, we can simply ignore its existence. At a certain level, this intuition is correct. We can construct a network over the observable variables which is an I-map for the marginal distribution over these variables, i.e., one that captures all the dependencies among the observed variables. However, this approach is weak from a variety of perspectives.
Consider, for example, the network in Figure 1(a). Assume that the data is generated from such a dependency model, but that the node H is hidden. A minimal I-map for the marginal distribution is shown in Figure 1(b). From a pure representation perspective, this network is clearly less useful. It contains 12 edges rather than 6, and the nodes have much bigger families. Hence, as a representation of the process in the domain, it is much less meaningful. From the perspective of learning these networks from data, the marginalized network has significant disadvantages. Assuming all the variables are binary, it uses 59 parameters rather than 17, leading to substantial data fragmentation and thereby to non-robust parameter estimates. Moreover, with limited amounts of data the induced network will usually omit several of the dependencies in the model. When a hidden variable is known to exist, we can introduce it into the network and apply known BN learning algorithms.

Figure 1: A hidden variable simplifies structure. (a) with hidden variable; (b) no hidden variable.

If the network structure is known, algorithms such as EM [3, 9] or gradient ascent [2] can learn parameters. If the structure is not known, the Structural EM (SEM) algorithm of [4] can be used to perform structure learning with missing data. However, we cannot simply introduce a "floating" hidden variable and expect SEM to place it correctly. Hence, both of these algorithms assume that some other mechanism introduces the hidden variable in approximately the right location in the network. Somewhat surprisingly, little work has been done on the problem of automatically detecting that a hidden variable might be present in a certain position in the network. In this paper, we investigate what is arguably the most straightforward approach for inducing the existence of a hidden variable.
This approach, briefly mentioned in [7], is roughly as follows: We begin by using standard Bayesian model selection algorithms to learn a structure over the observable variables. We then search the structure for substructures, which we call semi-cliques, that seem as if they might be induced by a hidden variable. We temporarily introduce the hidden variable in a way that breaks up the clique, and then continue learning based on that new structure. If the resulting structure has a better score, we keep the hidden variable. Surprisingly, this very basic technique does not seem to have been pursued. (The approach of [10] is similar on the surface, but is actually quite different; see Section 5.) We provide a concrete and efficient instantiation of this approach and show how to integrate it with existing learning algorithms such as SEM. We apply our approach to several synthetic and real datasets, and show that it often provides a good initial placement for the introduced hidden variable. We can therefore use it as a preprocessing step for SEM, substantially reducing the SEM search space.

2 Learning Structure of Bayesian Networks

Consider a finite set X = {X_1, ..., X_n} of discrete random variables, where each variable X_i may take on values from a finite set. A Bayesian network is an annotated directed acyclic graph G that encodes a joint probability distribution over X. The nodes of the graph correspond to the random variables X_1, ..., X_n. Each node is annotated with a conditional probability distribution that represents P(X_i | Pa(X_i)), where Pa(X_i) denotes the parents of X_i in G. A Bayesian network B specifies a unique joint probability distribution over X given by

P_B(X_1, ..., X_n) = ∏_{i=1}^{n} P_B(X_i | Pa(X_i)).

The problem of learning a Bayesian network can be stated as follows. Given a training set D = {x[1], ..., x[M]} of instances of X, find a network B that best matches D.
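The factored joint distribution above can be evaluated directly from the network's conditional probability tables. A minimal illustrative sketch; the data structures (parent lists and CPT dictionaries) are assumptions for this example, not the paper's representation.

```python
def joint_prob(assignment, parents, cpds):
    """Evaluate P_B(x_1, ..., x_n) = prod_i P_B(x_i | pa(x_i)).
    `assignment` maps variable -> value; `parents[v]` lists v's parents;
    `cpds[v]` maps (tuple of parent values, value of v) -> probability."""
    p = 1.0
    for v, val in assignment.items():
        pa_vals = tuple(assignment[u] for u in parents[v])
        p *= cpds[v][(pa_vals, val)]
    return p
```

For a two-node chain A -> B, the joint P(A=1, B=1) is just P(A=1) * P(B=1 | A=1), which the loop computes term by term.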
The common approach to this problem is to introduce a scoring function that evaluates each network with respect to the training data, and then to search for the best network according to this score. The scoring function most commonly used to learn Bayesian networks is the Bayesian scoring metric [8]. Given a scoring function, the structure learning task reduces to a problem of searching over the combinatorial space of structures for the structure that maximizes the score. The standard approach is to use a local search procedure that changes one arc at a time. Greedy hill-climbing with random restarts is typically used. The problem of learning in the presence of partially observable data (or known hidden variables) is computationally and conceptually much harder. In the case of a fixed network structure, the Expectation Maximization (EM) algorithm of [3] can be used to search for a (local) maximum likelihood (or maximum a posteriori) assignment to the parameters. The structural EM algorithm of [4] extends this idea to the realm of structure search. Roughly speaking, the algorithm uses an E-step as part of structure search. The current model structure, as well as its parameters, is used for computing expected sufficient statistics for other candidate structures. The candidate structures are scored based on these expected sufficient statistics. The search algorithm then moves to a new candidate structure. We can then run EM again, for our new structure, to get the desired expected sufficient statistics.

3 Detecting Hidden Variables

We motivate our approach for detecting hidden variables by considering the simple example discussed in the introduction. Consider the distribution represented by the network shown in Figure 1(a), where H is a hidden variable. The variable H was the keystone for the conditional independence assumptions in this network.
As a consequence, the marginal distribution over the remaining variables has almost no structure: each Y_j depends on all the X_i's, and the Y_j's themselves are also fully connected. A minimal I-map for this distribution is shown in Figure 1(b). It contains 12 edges compared to the original 6. We can show that this phenomenon is a typical effect of removing a hidden variable:

Proposition 3.1: Let G be a network over the variables X_1, ..., X_n, H. Let I be the conditional independence statements of the form I(X; Y | Z) that are implied by G and do not involve H. Let G' be the graph over X_1, ..., X_n that contains an edge from X_i to X_j whenever G contains such an edge, and in addition: G' contains a clique over the children Y_j of H, and G' contains an edge from any parent X_i of H to any child Y_j of H. Then G' is a minimal I-map for I.

We want to define a procedure that will suggest candidate hidden variables by finding structures of this type in the context of a learning algorithm. We will apply our procedure to networks induced by standard structure learning algorithms [7]. Clearly, it is unreasonable to hope that there is an exact mapping between substructures that have the form described in Proposition 3.1 and hidden variables. Learned networks are rarely an exact reflection of the minimal I-map for the underlying distribution. We therefore use a somewhat more flexible definition, which allows us to detect potential hidden variables. For a node X and a set of nodes Y, we define Δ(X; Y) to be the set of neighbors of X (parents or children) within the subset Y. We define a semi-clique to be a set of nodes Q where each node X ∈ Q is linked to at least half of Q: |Δ(X; Q)| >= |Q|/2. (This relaxed definition is the strictest criterion that still accepts a minimally relaxed 4-clique, i.e., one with just a single neighbor missing.) We propose a simple heuristic for finding semi-cliques in the graph.
We first observe that each semi-clique must contain a seed which is easy to spot; this seed is a 3-vertex clique.

Proposition 3.2: Any semi-clique of size 4 or more contains a clique of size 3.

The first phase of the algorithm is a search for all 3-cliques in the graph. The algorithm then tries to expand each of them into a maximal semi-clique in a greedy way. More precisely, at each iteration the algorithm attempts to add a node to the "current" semi-clique. If the expanded set satisfies the semi-clique property, then it is set as the new "current" clique. These tests are repeated until no additional variable can be added to the semi-clique. The algorithm outputs the expansions found based on the different 3-clique "seeds". We note that this greedy procedure does not find all semi-cliques. The exceptions are typically two semi-cliques that are joined by a small number of edges, making a larger legal semi-clique. These cases are of less interest to us, because they are less likely to arise from the marginalization of a hidden variable. In the second phase, we convert each of the semi-cliques to a structure candidate containing a new hidden node. Suppose Q is a semi-clique. Our construction introduces a new variable H, and replaces all of the incoming edges into variables in Q by edges from H. Parents of nodes in Q are then made to be parents of H, unless the edge results in a cycle. This process results in the removal of all intra-clique edges and makes H a proxy for all "outside" influences on the nodes in the clique. In the third phase, we evaluate each of these candidate structures in an attempt to find the most useful hidden variable. There are several possible ways in which this candidate can be utilized by the learning algorithm. We propose three approaches. The simplest assumes that the network structure, after the introduction of the hidden variable, is fixed.
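The first-phase detection heuristic described above (3-clique seeds, then greedy expansion) might be sketched as follows, treating the learned network's skeleton as an undirected adjacency map. The exact tie-breaking and the handling of edge directionality are simplified assumptions here.

```python
from itertools import combinations

def find_semi_cliques(adj):
    """Phase one: find all 3-cliques as seeds, then greedily add nodes as
    long as every node in the set keeps neighbors to at least half of it.
    `adj` maps each node to its set of neighbors (undirected skeleton)."""
    def is_semi_clique(q):
        return all(len(adj[x] & (q - {x})) >= len(q) / 2.0 for x in q)

    # seeds: every triple of mutually adjacent nodes
    seeds = [set(t) for t in combinations(adj, 3)
             if all(b in adj[a] for a, b in combinations(t, 2))]
    results = []
    for q in seeds:
        grown = True
        while grown:                      # greedy expansion
            grown = False
            for v in set(adj) - q:
                if is_semi_clique(q | {v}):
                    q = q | {v}
                    grown = True
                    break
        if q not in results:              # different seeds may coincide
            results.append(q)
    return results
```

On a complete 4-node graph every seed expands to the full vertex set; on a triangle-free graph (e.g. a path) there are no seeds and hence no candidates, matching Proposition 3.2.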
In other words, we assume that the "true" structure of the network is indeed the result of applying our transformation to the input network (which was produced by the first stage of learning). We can then simply fit the parameters using EM, and score the resulting network. We can improve this idea substantially by noting that our simple transformation of the semi-clique does not typically recover the true underlying structure of the original model. In our construction, we chose to make the hidden variable H the parent of all the nodes in the semi-clique, and eliminate all other incoming edges to variables in the clique. Clearly, this construction is very limited. There might well be cases where some of the edges in the clique are warranted even in the presence of the hidden variable. It might also be the case that some of the edges from H to the semi-clique variables should be reversed. Finally, it is plausible that some nodes were included in the semi-clique accidentally, and should not be directly correlated with H. We could therefore allow the learning algorithm (the SEM algorithm of [4]) to adapt the structure after the hidden variable is introduced. One approach is to use SEM to fine-tune our model for the part of the network we just changed: the variables in the semi-clique and the new hidden variable. Therefore, in the second approach we fix the remaining structure, and consider only adaptations of the edges within this set of variables. This restriction substantially reduces the search space for the SEM algorithm. The third approach allows full structural adaptation over the entire network. This offers the SEM algorithm greater flexibility, but is computationally more expensive. To summarize our approach: In the first phase we analyze the network learned using conventional structure search to find semi-cliques that indicate potential locations of hidden variables.
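Returning to the second-phase construction (replacing all incoming edges into the semi-clique Q with edges from a new hidden variable H, and promoting former parents of Q to parents of H), a simplified sketch is shown below. The cycle check is reduced here to excluding parents that are themselves in Q, which is only an approximation of the construction described in the text.

```python
def introduce_hidden_variable(parents, Q, h='H'):
    """Rewrite a parent map so that a new hidden node H becomes the sole
    parent of every node in the semi-clique Q; former parents of Q-nodes
    from outside Q become parents of H.  `parents`: node -> set of parents.
    A simplified sketch of the phase-two construction."""
    new_parents = {v: set(ps) for v, ps in parents.items()}
    h_parents = set()
    for v in Q:
        h_parents |= new_parents[v] - Q   # outside influences move to H
        new_parents[v] = {h}              # H replaces all incoming edges
    new_parents[h] = h_parents
    return new_parents
```

Note how all intra-clique edges vanish (every Q-node's parent set becomes {H}), making H the proxy for outside influences exactly as the construction intends.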
In the second phase we convert these semi-cliques into structure candidates (each containing a new hidden variable). Finally, in the third phase we evaluate each of these structures (possibly using them as a seed for further search) and return the best scoring network we find. The main assumption of our approach is that we can find "structural signatures" of hidden variables via semi-cliques. As we discussed above, it is unrealistic to expect the learned network G to have exactly the structure described in Proposition 3.1. On the one hand, learned networks often have spurious edges resulting from statistical noise, which might cause fragments of the network to resemble these structures even if no hidden variable is involved. On the other hand, there might be edges that are missing or reversed. Spurious edges are less problematic. At worst, they will lead us to propose a spurious hidden variable which will be eliminated by the subsequent evaluation step. Our definition of semi-clique, with its more flexible structure, partially deals with the problem of missing edges. However, if our data is very sparse, so that standard learning algorithms will be very reluctant to produce clusters with many edges, the approach we propose will not work.

4 Experimental Results

Our aim is to evaluate the success of our procedure in detecting hidden variables. To do so, we evaluated our procedure on both synthetic and real-life data sets. The synthetic data sets were sampled from Bayesian networks that appear in the literature. We then created a training set in which we "hid" one variable. We chose to hide variables that are "central" in the network (i.e., variables that are the parents of several children). The synthetic data sets allow for a controlled evaluation, and for generating training and testing data sets of any desired size. However, the data is generated from a distribution that indeed has only a single hidden variable.
A more realistic benchmark is real data, which may contain many confounding influences. In this case, of course, we do not have a generating model to compare against.

Insurance: A 27-node network developed to evaluate driver's insurance applications [2]. We hid the variables Accident, Age, MakeModel, and VehicleYear (A, G, M, V in Figure 2).

Alarm: A 37-node network [1] developed to monitor ICU patients. We hid the variables HR, Intubation, LVFailure, and VentLung (H, I, L, V in Figure 2).

Stock Data: A real-life dataset that traces the daily change of 20 major US technology stocks for several years (1516 trading days). These values were discretized to three categories: "up", "no change", and "down".

TB: A real-life dataset that records information about 2302 tuberculosis patients in San Francisco county (courtesy of Dr. Peter Small, Stanford Medical Center). The data set contains demographic information such as gender, age, and ethnic group, and medical information such as HIV status, TB infection type, and other test results.

Figure 2: Comparison of the different approaches. Each point in the graph corresponds to a network learned by one of the methods. The graphs on the bottom row show the log of the Bayesian score. The graphs on the top row show log-likelihood of an independent test set. In all graphs, the scale is normalized to the performance of the No-Hidden network, shown by the dashed line at "0".

In each data set, we applied our procedure as follows.
First, we used a standard model selection procedure to learn a network from the training data (without any hidden variables). In our implementation, we used standard greedy hill-climbing search that stops when it reaches a plateau it cannot escape. We supplied the learned network as input to the clique-detecting algorithm, which returned a set of candidate hidden variables. We then used each candidate as the starting point for a new learning phase. The Hidden procedure returns the highest-scoring network that results from evaluating the different putative hidden variables. To gauge the quality of this learning procedure, we compared it to two "strawman" approaches. The Naive strawman [4] initializes the learning with a network that has a single hidden variable as parent of all the observed variables. It then applies SEM to get an improved network. This process is repeated several times, where each time a random perturbation (e.g., edge addition) is applied to help SEM escape local maxima. The Original strawman, which applies only to the synthetic data sets, is to use the true generating network on the data set. That is, we take the original network (that contains the variable we hid) and use standard parametric EM to learn parameters for it. This strawman corresponds to cases where the learner has additional prior knowledge about the domain structure. We quantitatively evaluated each of these networks in two ways. First, we computed the Bayesian score of each network on the training data. Second, we computed the logarithmic loss of predictions made by these networks on independent test data. The results are shown in Figure 2. In this evaluation, we used the performance of No-Hidden as the baseline for comparing the other methods. Thus, a positive score of, say, 100 in Figure 2 indicates a score which is larger by 100 than the score of No-Hidden.
Since scores are the logarithm of the Bayesian posterior probability of structures (up to a constant), this implies that such a structure is 2^100 times more probable than the structure found by No-Hidden. We can see that, in most cases, the network learned by Hidden outperforms the network learned by No-Hidden. In the artificial data sets, Original significantly outperforms our algorithm on test data. This is no surprise: Original has complete knowledge of the structure which generated the test data. Our algorithm can only evaluate networks according to their score; indeed, the scores of the networks found by Hidden are better than those of Original in 12 out of 13 cases tested. Thus, we see that the "correct" structure does not usually have the highest Bayesian score. Our approach usually outperforms the network learned by Naive. This improvement is particularly significant in the real-life datasets. As discussed in Section 3, there are three ways that a learning algorithm can utilize the original structure proposed by our algorithm. As our goal was to find the best model for the domain, we ran all three of them in each case, and chose the best resulting network. In all of our experiments, the variant that fixed the candidate structure and learned parameters for it resulted in scores that were significantly worse than the networks found by the variants that employed structure search. The networks trained by this variant also performed much worse on test data. This highlights the importance of structure search in evaluating a potential hidden variable. The initial structure candidate is often too simplified; on the one hand, it forces too many independencies among the variables in the semi-clique, and on the other, it can add too many parents to the new hidden variable. The comparison between the two variants that use search is more complex.
In many cases, the variant that gives SEM complete flexibility in adapting the network structure did not find a better scoring network than the variant that only searches for edges in the area of the new variable. In the cases where it did lead to an improvement, the difference in score was not large. Since the variant that restricts SEM is computationally cheaper (often by an order of magnitude), we believe that it provides a good tradeoff between model quality and computational cost. The structures found by our procedure are quite appealing. For example, in the stock market data, our procedure constructs a hidden variable that is the parent of several stocks: Microsoft, Intel, Dell, CISCO, and Yahoo. A plausible interpretation of this variable is "strong" market vs. "stationary" market. When the hidden variable has the "strong" value, all the stocks have higher probability of going up. When the hidden variable has the "stationary" value, these stocks have much higher probability of being in the "no change" state. We do note that in the learned networks there were still many edges between the individual stocks. Thus, the hidden variable serves as a general market trend, while the additional edges better describe the correlations between individual stocks. The model we learned for the TB patient dataset was also interesting. One value of the hidden variable captures two highly dominant segments of the population: older, HIV-negative, foreign-born Asians, and younger, HIV-positive, US-born blacks. The hidden variable's children distinguished between the two aggregated subpopulations using the HIV-result variable, which was also a parent of most of them. We believe that, had we allowed the hidden variable to have three values, it would have separated these populations.

5 Discussion and Future Work

In this paper, we propose a simple and intuitive algorithm for finding plausible locations for hidden variables in BN learning.
It attempts to detect structural signatures of a hidden variable in the network learned by standard structure search. We presented experiments showing that our approach is reasonably successful at producing better models. To our knowledge, this paper is also the first to provide systematic empirical tests of any approach to the task of discovering hidden variables. The problem of detecting hidden variables has received surprisingly little attention. Spirtes et al. [11] suggest an approach that detects patterns of conditional independencies that can only be generated in the presence of hidden variables. This approach suffers from two limitations. First, it is sensitive to failures in a few of the multiple independence tests it uses. Second, it only detects hidden variables that are forced by the qualitative independence constraints. It cannot detect situations where the hidden variable provides a more succinct model of a distribution that can be described by a network without a hidden variable (as in the simple example of Figure 1). Martin and VanLehn [10] propose an alternative approach that appears, on the surface, to be similar to ours. They start by checking correlations between all pairs of variables. This results in a "dependency" graph in which there is an edge from X to Y if their correlation is above a predetermined threshold. Then they construct a two-layered network that contains independent hidden variables in the top layer, and observables in the bottom layer, such that every dependency between two observed variables is "explained" by at least one common hidden parent. This approach suffers from three important drawbacks. First, it does not eliminate from consideration correlations that can be explained by direct edges among the observables. Thus, it forms clusters even in cases where the dependencies can be fully explained by a standard Bayesian network structure.
Moreover, since it only examines pairwise dependencies, it cannot detect conditional independencies, such as X → Y → Z, from the data. (In this case, it would learn a hidden variable that is the parent of all three variables.) Finally, this approach learns a restricted form of networks that requires many hidden variables to represent dependencies among variables. Thus, it has limited utility in distinguishing "true" hidden variables from artifacts of the representation. We plan to test further enhancements to the algorithm in several directions. First, other possibilities for structural signatures (for example, the structure resulting from a many-parents/many-children configuration) may expand the range of variables we can discover. Second, our clique-discovering procedure is based solely on the structure of the network learned. Additional information, such as the confidence of learned edges [6, 5], might help the procedure avoid spurious signatures. Third, we plan to experiment with multi-valued hidden variables and better heuristics for selecting candidates out of the different proposed networks. Finally, we are considering approaches for dealing with sparse data, when the structural signatures do not manifest. Information-theoretic measures might provide a more statistical signature for the presence of a hidden variable.

Acknowledgements

This work was supported in part by ISF grant 244/99 and Israeli Ministry of Science grant 2008-1-99. Nir Friedman was supported by an Alon fellowship and by the generosity of the Sacher foundation.

References

[1] I. Beinlich, G. Suermondt, R. Chavez, and G. Cooper. The ALARM monitoring system. In Proc. 2nd European Conf. on AI and Medicine, 1989.
[2] J. Binder, D. Koller, S. Russell, and K. Kanazawa. Adaptive probabilistic networks with hidden variables. Machine Learning, 29:213-244, 1997.
[3] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. J. Royal Stat. Soc., B 39:1-39, 1977.
[4] N. Friedman. The Bayesian structural EM algorithm. In UAI, 1998.
[5] N. Friedman and D. Koller. Being Bayesian about network structure. In UAI, 2000.
[6] N. Friedman, M. Goldszmidt, and A. Wyner. Data analysis with Bayesian networks: A bootstrap approach. In UAI, 1999.
[7] D. Heckerman. A tutorial on learning with Bayesian networks. In Learning in Graphical Models, 1998.
[8] D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20:197-243, 1995.
[9] S. L. Lauritzen. The EM algorithm for graphical association models with missing data. Comp. Stat. and Data Anal., 19:191-201, 1995.
[10] J. Martin and K. VanLehn. Discrete factor analysis: Learning hidden variables in Bayesian networks. Technical report, Department of Computer Science, University of Pittsburgh, 1995.
[11] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction and Search. Springer-Verlag, 1993.
Natural sound statistics and divisive normalization in the auditory system

Odelia Schwartz
Center for Neural Science, New York University
odelia@cns.nyu.edu

Eero P. Simoncelli
Howard Hughes Medical Institute, Center for Neural Science, and Courant Institute of Mathematical Sciences, New York University
eero.simoncelli@nyu.edu

Abstract

We explore the statistical properties of natural sound stimuli preprocessed with a bank of linear filters. The responses of such filters exhibit a striking form of statistical dependency, in which the response variance of each filter grows with the response amplitude of filters tuned for nearby frequencies. These dependencies may be substantially reduced using an operation known as divisive normalization, in which the response of each filter is divided by a weighted sum of the rectified responses of other filters. The weights may be chosen to maximize the independence of the normalized responses for an ensemble of natural sounds. We demonstrate that the resulting model accounts for nonlinearities in the response characteristics of the auditory nerve, by comparing model simulations to electrophysiological recordings. In previous work (NIPS, 1998) we demonstrated that an analogous model derived from the statistics of natural images accounts for non-linear properties of neurons in primary visual cortex. Thus, divisive normalization appears to be a generic mechanism for eliminating a type of statistical dependency that is prevalent in natural signals of different modalities.

Signals in the real world are highly structured. For example, natural sounds typically contain both harmonic and rhythmic structure. It is reasonable to assume that biological auditory systems are designed to represent these structures in an efficient manner [e.g., 1, 2]. Specifically, Barlow hypothesized that a role of early sensory processing is to remove redundancy in the sensory input, resulting in a set of neural responses that are statistically independent.
Experimentally, one can test this hypothesis by examining the statistical properties of neural responses under natural stimulation conditions [e.g., 3, 4], or the statistical dependency of pairs (or groups) of neural responses. Due to their technical difficulty, such multi-cellular experiments are only recently becoming possible, and the earliest reports in vision appear consistent with the hypothesis [e.g., 5]. An alternative approach, which we follow here, is to develop a neural model from the statistics of natural signals and show that response properties of this model are similar to those of biological sensory neurons. A number of researchers have derived linear filter models using statistical criteria. For visual images, this results in linear filters localized in frequency, orientation and phase [6, 7]. Similar work in audition has yielded filters localized in frequency and phase [8]. Although these linear models provide an important starting point for neural modeling, sensory neurons are highly nonlinear. In addition, the statistical properties of natural signals are too complex to expect a linear transformation to result in an independent set of components. Recent results indicate that nonlinear gain control plays an important role in neural processing. Ruderman and Bialek [9] have shown that division by a local estimate of standard deviation can increase the entropy of responses of center-surround filters to natural images. Such a model is consistent with the properties of neurons in the retina and lateral geniculate nucleus. Heeger and colleagues have shown that the nonlinear behaviors of neurons in primary visual cortex may be described using a form of gain control known as divisive normalization [10], in which the response of a linear kernel is rectified and divided by the sum of other rectified kernel responses and a constant. 
We have recently shown that the responses of oriented linear filters exhibit nonlinear statistical dependencies that may be substantially reduced using a variant of this model, in which the normalization signal is computed from a weighted sum of other rectified kernel responses [11, 12]. The resulting model, with weighting parameters determined from image statistics, accounts qualitatively for physiological nonlinearities observed in primary visual cortex. In this paper, we demonstrate that the responses of bandpass linear filters to natural sounds exhibit striking statistical dependencies, analogous to those found in visual images. A divisive normalization procedure can substantially remove these dependencies. We show that this model, with parameters optimized for a collection of natural sounds, can account for nonlinear behaviors of neurons at the level of the auditory nerve. Specifically, we show that: 1) the shape of frequency tuning curves varies with sound pressure level, even though the underlying linear filters are fixed; and 2) superposition of a non-optimal tone suppresses the response of a linear filter in a divisive fashion, and the amount of suppression depends on the distance between the frequency of the tone and the preferred frequency of the filter. 1 Empirical observations of natural sound statistics The basic statistical properties of natural sounds, as observed through a linear filter, have been previously documented by Attias [13]. In particular, he showed that, as with visual images, the spectral energy falls roughly according to a power law, and that the histograms of filter responses are more kurtotic than a Gaussian (i.e., they have a sharp peak at zero, and very long tails). Here we examine the joint statistical properties of a pair of linear filters tuned for nearby temporal frequencies. We choose a fixed set of filters that have been widely used in modeling the peripheral auditory system [14]. 
Figure 1 shows joint histograms of the instantaneous responses of a particular pair of linear filters to five different types of natural sound, and white noise. First note that the responses are approximately decorrelated: the expected value of the y-axis value is roughly zero for all values of the x-axis variable. The responses are not, however, statistically independent: the width of the distribution of responses of one filter increases with the response amplitude of the other filter. If the two responses were statistically independent, then the response of the first filter should not provide any information about the distribution of responses of the other filter. We have found that this type of variance dependency (sometimes accompanied by linear correlation) occurs in a wide range of natural sounds, ranging from animal sounds to music. We emphasize that this dependency is a property of natural sounds, and is not due purely to our choice of linear filters. For example, no such dependency is observed when the input consists of white noise (see Fig. 1). The strength of this dependency varies for different pairs of linear filters. In addition, we see this type of dependency between instantaneous responses of a single filter at two nearby time instants. Since the dependency involves the variance of the responses, we can substantially reduce it by dividing. 

[Figure 1: joint conditional histograms for six inputs — Speech, Drums, Cat, Monkey, Nocturnal nature, White noise.] Figure 1: Joint conditional histogram of instantaneous linear responses of two bandpass filters with center frequencies 2000 and 2840 Hz. Pixel intensity corresponds to frequency of occurrence of a given pair of values, except that each column has been independently rescaled to fill the full intensity range. For the natural sounds, responses are not independent: the standard deviation of the ordinate is roughly proportional to the magnitude of the abscissa. Natural sounds were recorded from CDs and converted to a sampling frequency of 22050 Hz. 
In particular, the response of each filter is divided by a weighted sum of responses of other rectified filters and an additive constant. Specifically:

$$R_i = \frac{L_i^2}{\sum_j w_{ji} L_j^2 + \sigma^2} \qquad (1)$$

where $L_i$ is the instantaneous linear response of filter $i$, $\sigma$ is a constant, and $w_{ji}$ controls the strength of suppression of filter $i$ by filter $j$. We would like to choose the parameters of the model (the weights $w_{ji}$ and the constant $\sigma$) to optimize the independence of the normalized responses to an ensemble of natural sounds. Such an optimization is quite computationally expensive. We instead assume a Gaussian form for the underlying conditional distribution, as described in [15]: $P(L_i \mid L_j, j \in N_i) \sim \mathcal{N}\bigl(0;\ \sum_j w_{ji} L_j^2 + \sigma^2\bigr)$, where $N_i$ is the neighborhood of linear filters that may affect filter $i$. We then maximize this expression over the sound data at each time $t$ to obtain the parameters:

$$\{w_{ji}, \sigma\} = \arg\max_{\{w_{ji},\,\sigma\}} \prod_t P\bigl(L_i(t) \mid L_j(t), j \in N_i\bigr) \qquad (2)$$

We solve for the optimal parameters numerically, using conjugate gradient descent. Note that the value of $\sigma$ depends on the somewhat arbitrary scaling of the input signal (i.e., doubling the input strength would lead to a doubling of $\sigma$).

[Figure 2: schematic of the model — a bank of linear filters followed by divisive normalization, with joint conditional histograms of the responses before and after normalization.] Figure 2: Nonlinear whitening of a natural auditory signal with a divisive normalization model. The histogram on the left shows the statistical dependency of the responses of two linear bandpass filters. The joint histogram on the right shows the approximate independence of the normalized coefficients.

Figure 2 depicts our statistically derived neural model. A natural sound is passed through a bank of linear filters (only 2 depicted for readability). The responses of the filters to a natural sound exhibit a strong statistical dependency. Normalization largely removes this dependency, such that vertical cross sections through the joint conditional histogram are all roughly the same. 
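To make the variance dependency and its removal concrete, here is a minimal numerical sketch. It is not the paper's gammatone pipeline: the shared-envelope toy signal, the uniform weights, and the constant are invented for illustration rather than fitted by the maximum-likelihood procedure of equation (2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a bank of filter responses to natural sound: K channels
# share a slowly varying amplitude envelope, which induces the variance
# dependency described in the text (the spread of one response grows with
# the magnitude of the other).
n, K = 50000, 8
envelope = np.exp(0.8 * rng.standard_normal(n))      # common amplitude
L = envelope * rng.standard_normal((K, n))           # "filter responses"

def normalize(L, w, sigma):
    """Divisive normalization, eq. (1): R_i = L_i^2 / (sum_j w_ji L_j^2 + sigma^2)."""
    denom = w @ (L ** 2) + sigma ** 2
    return (L ** 2) / denom

# Uniform weights and a small constant, hand-picked (not fitted) for illustration.
R = normalize(L, w=np.full(K, 1.0 / K), sigma=1e-3)

def cond_var_ratio(a, b):
    """Variance of one response when the other is large vs. small (~1 = no dependency)."""
    hi = np.abs(b) > np.median(np.abs(b))
    return np.var(a[hi]) / np.var(a[~hi])

raw = cond_var_ratio(L[0], L[1])
norm = cond_var_ratio(R[0], R[1])
print(f"raw variance ratio:        {raw:.2f}")
print(f"normalized variance ratio: {norm:.2f}")
```

In this sketch the raw ratio is well above 1 (variance dependency), while the normalized ratio is much closer to 1, mirroring the whitening shown in Figure 2.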
For the simulations in the next section, we use a set of Gammatone filters as the linear front end [14]. We choose a primary filter with center frequency 2000 Hz. We also choose a neighborhood of filters for the normalization signal: 16 filters with center frequencies 205 to 4768 Hz, and replicas of all filters temporally shifted by 100, 200, and 300 samples. We compute optimal values for $\sigma$ and the normalization weights $w_j$ using equation (2), based on statistics of a natural sound ensemble containing 9 animal and speech sounds, each approximately 6 seconds long. 2 Model simulations vs. physiology We compare the normalized responses of the primary filter in our model (with all parameter values held fixed at the optimal values described above) to data recorded electrophysiologically from the auditory nerve. Figure 3 shows data from a "two-tone suppression" experiment, in which the response to an optimal tone is suppressed by the presence of a second tone of non-optimal frequency. Two-tone suppression is often demonstrated by showing that the rate-level function of the optimal tone alone is shifted to the right in the presence of a non-optimal tone. In both cell and model, we obtain a larger rightward shift when the non-optimal tone is relatively close in frequency to the optimal tone, and almost no rightward shift when the non-optimal tone is more than two times the optimal frequency. In the model, this behavior is due to the fact that the strength of statistical dependency (and thus the strength of the normalization weighting) falls with the frequency separation of a pair of filters. 
[Figure 3: rate-level curves for cell (Javel et al., 1978) and model; circles show the response to the optimal tone alone, squares the response with a masker at 1.25, 1.55, or 2.00 times CF; axes are mean discharge rate vs. decibels.] Figure 3: Two-tone suppression data. Each plot shows neural response as a function of SPL for a single tone (circles), and for a tone in the presence of a secondary suppressive tone at 80 dB SPL (squares). The maximum mean response rate in the model is scaled to fit the cell data. Cell data re-plotted from [16].

[Figure 4: frequency tuning curves at several sound pressure levels for cell (Rose et al., 1971) and model; axes are mean discharge rate vs. frequency.] Figure 4: Frequency tuning curves for cell and model for different sound pressure levels. Cell data are re-plotted from [17].

Figure 4 shows frequency tuning for different sound pressure levels. As the sound pressure level (SPL) increases, the frequency tuning becomes broader, developing a "shoulder" and a secondary mode. Both cell and model show similar behavior, despite the fact that we are not fitting the model to these data: all parameters in the model are chosen by optimizing the independence of the responses to the ensemble of natural sound statistics. This result is particularly interesting because the data have been in the literature for many years, and are generally interpreted to mean that the frequency tuning properties of these cells vary with SPL. Our model suggests an alternative interpretation: the fundamental frequency tuning is determined by a fixed linear kernel, and is modulated by a divisive nonlinearity. 3 Discussion We have developed a weighted divisive normalization model for early auditory processing. 
Both the form and parameters of the model are determined from natural sound statistics. We have shown that the model can account for some prominent nonlinearities occurring at the level of the auditory nerve. A number of authors have suggested forms of divisive gain control in auditory models. Wang et al. [18] suggest that gain control in early auditory processing is consistent with psychophysical data and might be advantageous for applications of noise removal. Auditory gain control is also a central concept in the work of Lyon (e.g., [19]). Our work may provide theoretical justification for such models of divisive gain control in the auditory system. Our model is limited in a number of important ways. The current model lacks a detailed specification of a physiological implementation. In particular, normalization must presumably be implemented using lateral or feedback connections between neurons [e.g., 20]. The normalization signal of the model is computed and applied instantaneously, and thus lacks temporal dynamical properties [e.g., 19]. In addition, we have not made any distinction between nonlinearities that arise mechanically in the cochlea, and nonlinearities that arise at the neural level. It is likely that normalization occurs at least partially in outer hair cells [21, 22]. On a more theoretical level, we have not addressed mechanisms by which the system optimizes itself. Our modeling uses parameters optimized for a fixed ensemble of natural sounds. Biologically, this optimization would presumably occur on multiple time scales through processes of evolution, development, learning, and adaptation. The ultimate question regarding the independence hypothesis underlying our model is: how far can such a bottom-up criterion go toward explaining neural processing? It seems likely that the model can be extended to account for levels of processing beyond the auditory nerve. For example, Nelken et al. 
[23] suggest that co-modulation masking release in auditory cortex results from the statistical structure of natural sound. But ultimately, it seems likely that one must also consider the auditory tasks, such as localization and recognition, that the organism must perform.

References

[1] F Attneave. Some informational aspects of visual perception. Psych. Rev., 61:183-193, 1954.
[2] H B Barlow. Possible principles underlying the transformation of sensory messages. In W A Rosenblith, editor, Sensory Communications, page 217. MIT Press, Cambridge, MA, 1961.
[3] Y Dan, J J Atick, and R C Reid. Efficient coding of natural scenes in the lateral geniculate nucleus: Experimental test of a computational theory. J. Neuroscience, 16:3351-3362, 1996.
[4] H Attias and C E Schreiner. Coding of naturalistic stimuli by auditory midbrain neurons. Adv in Neural Info Processing Systems, 10:103-109, 1998.
[5] W E Vinje and J L Gallant. Sparse coding and decorrelation in primary visual cortex during natural vision. Science, 287, Feb 2000.
[6] B A Olshausen and D J Field. Natural image statistics and efficient coding. Network: Computation in Neural Systems, 7:333-339, 1996.
[7] A J Bell and T J Sejnowski. The 'independent components' of natural scenes are edge filters. Vision Research, 37(23):3327-3338, 1997.
[8] A J Bell and T J Sejnowski. Learning the higher-order structure of a natural sound. Network: Computation in Neural Systems, 7:261-266, 1996.
[9] D L Ruderman and W Bialek. Statistics of natural images: Scaling in the woods. Phys. Rev. Letters, 73(6):814-817, 1994.
[10] D J Heeger. Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9:181-198, 1992.
[11] E P Simoncelli and O Schwartz. Image statistics and cortical normalization models. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Adv. Neural Information Processing Systems, volume 11, pages 153-159, Cambridge, MA, 1999. MIT Press.
[12] M J Wainwright, O Schwartz, and E P Simoncelli. Natural image statistics and divisive normalization: Modeling nonlinearities and adaptation in cortical neurons. In R Rao, B Olshausen, and M Lewicki, editors, Statistical Theories of the Brain. MIT Press, 2001. To appear.
[13] H Attias and C E Schreiner. Temporal low-order statistics of natural sounds. In M Jordan, M Kearns, and S Solla, editors, Adv in Neural Info Processing Systems, volume 9, pages 27-33. MIT Press, 1997.
[14] M Slaney. An efficient implementation of the Patterson and Holdsworth auditory filter bank. Apple Technical Report 35, 1993.
[15] E P Simoncelli. Modeling the joint statistics of images in the wavelet domain. In Proc SPIE, 44th Annual Meeting, volume 3813, Denver, July 1999. Invited presentation.
[16] E Javel, D Geisler, and A Ravindran. Two-tone suppression in auditory nerve of the cat: Rate-intensity and temporal analyses. J. Acoust. Soc. Am., 63(4):1093-1104, 1978.
[17] J E Rose, D J Anderson, and J F Brugge. Some effects of stimulus intensity on response of auditory nerve fibers in the squirrel monkey. Journal Neurophys., 34:685-699, 1971.
[18] K Wang and S Shamma. Self-normalization and noise-robustness in early auditory representations. IEEE Trans. Speech and Audio Proc., 2:421-435, 1994.
[19] R F Lyon. Automatic gain control in cochlear mechanics. In P Dallos et al., editors, The Mechanics and Biophysics of Hearing, pages 395-420. Springer-Verlag, 1990.
[20] M Carandini, D J Heeger, and J A Movshon. Linearity and normalization in simple cells of the macaque primary visual cortex. Journal of Neuroscience, 17:8621-8644, 1997.
[21] D Geisler. From Sound to Synapse: Physiology of the Mammalian Ear. Oxford University Press, New York, 1998.
[22] H B Zhao and J Santos-Sacchi. Auditory collusion and a coupled couple of outer hair cells. Nature, 399(6734):359-362, 1999.
[23] I Nelken, Y Rotman, and O Bar Yosef. Responses of auditory-cortex neurons to structural features of natural sounds. Nature, 397(6715):154-157, 1999.
2000
Regularized Winnow Methods Tong Zhang Mathematical Sciences Department IBM T.J. Watson Research Center Yorktown Heights, NY 10598 tzhang@watson.ibm.com Abstract In theory, the Winnow multiplicative update has certain advantages over the Perceptron additive update when there are many irrelevant attributes. Recently, there has been much effort on enhancing the Perceptron algorithm by using regularization, leading to a class of linear classification methods called support vector machines. Similarly, it is also possible to apply the regularization idea to the Winnow algorithm, which gives methods we call regularized Winnows. We show that the resulting methods compare with the basic Winnows in a similar way that a support vector machine compares with the Perceptron. We investigate algorithmic issues and learning properties of the derived methods. Some experimental results will also be provided to illustrate the different methods. 1 Introduction In this paper, we consider the binary classification problem, which is to determine a label $y \in \{-1, 1\}$ associated with an input vector $x$. A useful method for solving this problem is through linear discriminant functions, which consist of linear combinations of the components of the input variable. Specifically, we seek a weight vector $w$ and a threshold $\theta$ such that $w^T x < \theta$ if its label $y = -1$ and $w^T x \ge \theta$ if its label $y = 1$. Given a training set of labeled data $(x^1, y^1), \ldots, (x^n, y^n)$, a number of approaches to finding linear discriminant functions have been advanced over the years. In this paper, we are especially interested in the following two families of online algorithms: Perceptron [12] and Winnow [10]. These algorithms typically fix the threshold $\theta$ and update the weight vector $w$ by going through the training data repeatedly. They are mistake driven in the sense that the weight vector is updated only when the algorithm is not able to correctly classify an example. 
For the Perceptron algorithm, the update rule is additive: if the linear discriminant function misclassifies an input training vector $x^i$ with true label $y^i$, then we update each component $j$ of the weight vector $w$ as: $w_j \leftarrow w_j + \eta x_j^i y^i$, where $\eta > 0$ is a parameter called the learning rate. The initial weight vector can be taken as $w = 0$. For the (unnormalized) Winnow algorithm (with positive weights), the update rule is multiplicative: if the linear discriminant function misclassifies an input training vector $x^i$ with true label $y^i$, then we update each component $j$ of the weight vector $w$ as: $w_j \leftarrow w_j \exp(\eta x_j^i y^i)$, where $\eta > 0$ is the learning rate parameter, and the initial weight vector can be taken as $w_j = \mu_j > 0$. The Winnow algorithm belongs to a general family of algorithms called exponentiated gradient descent with unnormalized weights (EGU) [9]. There can be several variants. One is called balanced Winnow, which is equivalent to an embedding of the input space into a higher dimensional space as: $\bar x = [x, -x]$. This modification allows the positive-weight Winnow algorithm for the augmented input $\bar x$ to have the effect of both positive and negative weights for the original input $x$. Another modification is to normalize the one-norm of the weight $w$ so that $\sum_j w_j = W$, leading to the normalized Winnow. Theoretical properties of multiplicative update algorithms have been extensively studied since the introduction of Winnow. For linearly separable binary-classification problems, both Perceptron and Winnow are able to find a weight that separates the in-class vectors from the out-of-class vectors in the training set within a finite number of steps. However, the number of mistakes (updates) before finding a separating hyperplane can be very different [10, 9]. This difference suggests that the two algorithms serve different purposes. For linearly separable problems, Vapnik proposed a method that optimizes the Perceptron mistake bound, which he calls the "optimal hyperplane" (see [15]). 
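The two mistake-driven updates can be sketched side by side on a toy separable problem (a minimal illustration: the data, dimensions, and learning rates here are invented, not the paper's experimental setup):

```python
import numpy as np

def train_online(X, y, rule, eta=0.1, mu=0.1, epochs=500):
    """Mistake-driven online training with threshold theta = 0.

    'perceptron': additive update        w_j <- w_j + eta * x_j * y
    'winnow':     multiplicative update  w_j <- w_j * exp(eta * x_j * y)
    (unnormalized Winnow with positive weights, initialized at w_j = mu).
    """
    w = np.zeros(X.shape[1]) if rule == "perceptron" else np.full(X.shape[1], mu)
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:   # update only on a mistake
                w = w + eta * xi * yi if rule == "perceptron" else w * np.exp(eta * xi * yi)
                mistakes += 1
        if mistakes == 0:            # training set linearly separated
            break
    return w

# Invented toy problem: 3 relevant out of 20 binary (+/-1) features;
# keep only examples with margin 3 so both algorithms separate quickly.
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(400, 20))
keep = np.abs(X[:, :3].sum(axis=1)) == 3
X = X[keep]
y = np.sign(X[:, :3].sum(axis=1))

w_perc = train_online(X, y, "perceptron")
w_winn = train_online(X, y, "winnow")
print("perceptron separates:", bool(np.all(y * (X @ w_perc) > 0)))
print("winnow separates:    ", bool(np.all(y * (X @ w_winn) > 0)))
```

Note that the Winnow run uses only positive weights; on this toy problem the target weights are positive, so the balanced embedding $[x, -x]$ mentioned above is not needed.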
The same method has also appeared in the statistical mechanical learning literature (see [1, 8, 11]), and is referred to as achieving optimal stability. For non-separable problems, a generalization of the optimal hyperplane was proposed in [2] by introducing a "soft-margin" loss term. In this paper, we derive regularized Winnow methods by constructing "optimal hyperplanes" that minimize the Winnow mistake bound (rather than the Perceptron mistake bound as in an SVM). We then derive a "soft-margin" version of the algorithms for non-separable problems. For simplicity, we shall assume $\theta = 0$ in this paper. This restriction does not cause problems in practice, since one can always append a constant feature to the input data $x$, which offsets the effect of $\theta$. The formulation with $\theta = 0$ can be more amenable to theoretical analysis. For an SVM, a fixed threshold also allows a simple Perceptron-like numerical algorithm, as described in chapter 12 of [13] and in [7]. Although more complex, a non-fixed $\theta$ does not introduce any fundamental difficulty. The paper is organized as follows. In Section 2, we review mistake bounds for Perceptron and Winnow. Based on the bounds, we show how regularized Winnow methods can be derived by mimicking the optimal stability method (and SVM) for Perceptron. We also discuss the relationship of the newly derived methods with related methods. In Section 3, we investigate learning aspects of the newly proposed methods in a context similar to some known SVM results. An example will be given in Section 4 to illustrate these methods. 2 SVM and regularized Winnow 2.1 From Perceptron to SVM We review the derivation of the SVM from the Perceptron, which serves as a reference for our derivation of regularized Winnow. Consider linearly separable problems, and let $w$ be a weight that separates the in-class vectors from the out-of-class vectors in the training set. 
It is well known that the Perceptron algorithm computes a weight that correctly classifies all training data after at most $M$ updates (a proof can be found in [15]), where $M = \|w\|_2^2 \max_i \|x^i\|_2^2 / (\min_i w^T x^i y^i)^2$. The weight vector $w_*$ that minimizes the right-hand side of the bound is called the optimal hyperplane in [15], or the optimal stability hyperplane in [1, 8, 11]. This optimal hyperplane is the solution to the following quadratic programming problem:

$$\min_w \ \tfrac{1}{2} w^T w \quad \text{s.t.} \quad w^T x^i y^i \ge 1 \ \text{ for } i = 1, \ldots, n.$$

For non-separable problems, we introduce a slack variable $\xi^i$ for each data point $(x^i, y^i)$ $(i = 1, \ldots, n)$, and compute a weight vector $w_*(C)$ that solves

$$\min_{w,\xi} \ \tfrac{1}{2} w^T w + C \sum_i \xi^i \quad \text{s.t.} \quad w^T x^i y^i \ge 1 - \xi^i,$$

where $C > 0$ is a given parameter [15]. It is known that when $C \to \infty$, $\xi^i \to 0$ and $w_*(C)$ converges to the weight vector $w_*$ of the optimal hyperplane. We can write down the KKT conditions for the above optimization problem, and let $\alpha^i$ be the Lagrangian multiplier for $w^T x^i y^i \ge 1 - \xi^i$. After elimination of $w$ and $\xi$, we obtain the following dual optimization problem in the dual variable $\alpha$ (see [15], chapter 10 for details):

$$\max_\alpha \ \sum_i \alpha^i - \frac{1}{2}\Bigl(\sum_i \alpha^i x^i y^i\Bigr)^2 \quad \text{s.t.} \quad \alpha^i \in [0, C] \ \text{ for } i = 1, \ldots, n.$$

The weight $w_*(C)$ is given by $w_*(C) = \sum_i \alpha^i x^i y^i$ at the optimal solution. To solve this problem, one can use the following modification of the Perceptron update algorithm (see [7] and chapter 12 of [13]): at each data point $(x^i, y^i)$, we fix all $\alpha^k$ with $k \ne i$, and update $\alpha^i$ to maximize the dual objective functional, which gives: $\alpha^i \leftarrow \max(\min(C, \alpha^i + \eta(1 - w^T x^i y^i)), 0)$, where $w = \sum_i \alpha^i x^i y^i$. The learning rate $\eta$ can be set as $\eta = 1/(x^{iT} x^i)$, which corresponds to the exact maximization of the dual objective functional. 
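The dual coordinate-wise update just described can be sketched as follows (the two-blob dataset and the parameter values are invented for illustration; this is a sketch of the update rule, not the paper's implementation):

```python
import numpy as np

def svm_dual(X, y, C=1.0, epochs=200):
    """Coordinate ascent on the SVM dual with fixed threshold theta = 0:
    alpha_i <- clip(alpha_i + eta * (1 - y_i * w.x_i), 0, C),
    with eta = 1 / (x_i . x_i) and w = sum_i alpha_i * y_i * x_i."""
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    for _ in range(epochs):
        for i in range(n):
            eta = 1.0 / (X[i] @ X[i])
            new = np.clip(alpha[i] + eta * (1.0 - y[i] * (w @ X[i])), 0.0, C)
            w += (new - alpha[i]) * y[i] * X[i]   # keep w = sum_i alpha_i y_i x_i
            alpha[i] = new
    return w, alpha

# Invented toy data: two Gaussian blobs with labels +/-1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+2.0, 1.0, (50, 2)), rng.normal(-2.0, 1.0, (50, 2))])
y = np.concatenate([np.ones(50), -np.ones(50)])
w, alpha = svm_dual(X, y, C=10.0)
acc = np.mean(np.sign(X @ w) == y)
print(f"training accuracy: {acc:.2f}")
```

Maintaining $w$ incrementally as $\alpha^i$ changes keeps each coordinate step at $O(d)$ cost, which is what makes this Perceptron-like scheme practical.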
2.2 From Winnow to regularized Winnow Similar to the Perceptron, if a problem is linearly separable with a positive weight $w$, then Winnow computes a solution that correctly classifies all training data after at most $M$ updates, with

$$M = 2W\Bigl(\sum_j w_j \ln \frac{w_j \|\mu\|_1}{\mu_j \|w\|_1}\Bigr) \max_i \|x^i\|_\infty^2 / \delta^2,$$

where $0 < \delta \le \min_i w^T x^i y^i$, $W \ge \|w\|_1$, and the learning rate is $\eta = \delta/(W \max_i \|x^i\|_\infty^2)$. The proof of this specific bound can be found in [16], which employed techniques in [5] (also see [10] for earlier results). Note that unlike the Perceptron mistake bound, the above bound is learning-rate dependent. It also depends on the prior $\mu_j > 0$, which is the initial value of $w$ in the basic Winnows. For problems separable with positive weights, to obtain an optimal stability hyperplane associated with the Winnow mistake bound, we consider fixing $\|w\|_1$ such that $\|w\|_1 = W > 0$. It is then natural to define the optimal hyperplane as the (positive weight) solution to the following convex programming problem:

$$\min_w \ \sum_j w_j \ln \frac{w_j}{e \mu_j} \quad \text{s.t.} \quad w^T x^i y^i \ge 1 \ \text{ for } i = 1, \ldots, n.$$

We use $e$ to denote the base of the natural logarithm. Similar to the derivation of the SVM, for non-separable problems we introduce a slack variable $\xi^i$ for each data point $(x^i, y^i)$, and compute a weight vector $w_*(C)$ that solves

$$\min_{w,\xi} \ \sum_j w_j \ln \frac{w_j}{e \mu_j} + C \sum_i \xi^i \quad \text{s.t.} \quad w^T x^i y^i \ge 1 - \xi^i,$$

where $C > 0$ is a given parameter. Note that to derive the above methods, we have assumed that $\|w\|_1$ is fixed at $\|w\|_1 = \|\mu\|_1 = W$, where $W$ is a given parameter. This implies that the derived methods are in fact regularized versions of the normalized Winnow. One can also ignore this normalization constraint, so that the derived methods correspond to regularized versions of the unnormalized Winnow. The entropy regularization condition is natural to all exponentiated gradient methods [9], as can be observed from the theoretical results in [9]. 
The regularized normalized Winnow is closely related to maximum entropy discrimination [6] (the two methods are almost identical for linearly separable problems). However, in the framework of maximum entropy discrimination, the Winnow connection is non-obvious. As we shall show later, it is possible to derive interesting learning bounds for our methods that are connected with the Winnow mistake bound. Similar to the SVM formulation, the non-separable formulation of regularized Winnow approaches the separable formulation as $C \to \infty$. We shall thus only focus on the non-separable case below. Also similar to an SVM, we can write down the KKT conditions and let $\alpha^i$ be the Lagrangian multiplier for $w^T x^i y^i \ge 1 - \xi^i$. After elimination of $w$ and $\xi$, we obtain (the algebra resembles that of [15], chapter 10, which we shall skip due to the limitation of space) the following dual formulation for regularized unnormalized Winnow:

$$\max_\alpha \ \sum_i \alpha^i - \sum_j \mu_j \exp\Bigl(\sum_i \alpha^i x_j^i y^i\Bigr) \quad \text{s.t.} \quad \alpha^i \in [0, C] \ \text{ for } i = 1, \ldots, n.$$

The $j$-th component of the weight $w_*(C)$ is given by $w_*(C)_j = \mu_j \exp(\sum_i \alpha^i x_j^i y^i)$ at the optimal solution. For regularized normalized Winnow with $\|w\|_1 = W > 0$, we obtain

$$\max_\alpha \ \sum_i \alpha^i - W \ln\Bigl(\sum_j \mu_j \exp\Bigl(\sum_i \alpha^i x_j^i y^i\Bigr)\Bigr) \quad \text{s.t.} \quad \alpha^i \in [0, C] \ \text{ for } i = 1, \ldots, n.$$

The weight $w_*(C)$ is given by $w_*(C)_j = W \mu_j \exp(\sum_i \alpha^i x_j^i y^i) / \sum_j \mu_j \exp(\sum_i \alpha^i x_j^i y^i)$ at the optimal solution. Similar to the Perceptron-like update rule for the dual SVM formulation, it is possible to derive Winnow-like update rules for the regularized Winnow formulations. At each data point $(x^i, y^i)$, we fix all $\alpha^k$ with $k \ne i$, and update $\alpha^i$ to maximize the dual objective functionals. We shall not try to derive an analytical solution, but rather use a gradient ascent method with a learning rate $\eta$: $\alpha^i \leftarrow \alpha^i + \eta \frac{\partial}{\partial \alpha^i} L_D(\alpha^i)$, where we use $L_D$ to denote the dual objective function to be maximized. $\eta$ can be either fixed as a small number or computed by Newton's method. 
It is not hard to verify that we obtain the following update rule for regularized unnormalized Winnow: $\alpha^i \leftarrow \max(\min(C, \alpha^i + \eta(1 - w^T x^i y^i)), 0)$, where $w_j = \mu_j \exp(\sum_i \alpha^i x_j^i y^i)$. This gradient ascent on the dual variable gives an EGU rule as in [9]. Compared with the SVM dual update rule, which is a soft-margin version of the Perceptron update rule, this method naturally corresponds to a soft-margin version of the unnormalized Winnow update. Similarly, we obtain the following dual update rule for regularized normalized Winnow: $\alpha^i \leftarrow \max(\min(C, \alpha^i + \eta(1 - w^T x^i y^i)), 0)$, where $w_j = W \mu_j \exp(\sum_i \alpha^i x_j^i y^i) / \sum_j \mu_j \exp(\sum_i \alpha^i x_j^i y^i)$. Again, this rule (which is an EG rule in [9]) can be naturally regarded as the soft-margin version of the normalized Winnow update. In our experience, these update rules are numerically very efficient. Note that for regularized normalized Winnow, the normalization constant $W$ needs to be carefully chosen based on the data. For example, if the data is infinity-norm bounded by 1, then it does not seem appropriate to choose $W \le 1$, since then $|w^T x| \le 1$: a hyperplane with $\|w\|_1 \le 1$ does not achieve a reasonable margin. This problem is less crucial for unnormalized Winnow, but the norm of the initial weight $\mu_j$ still affects the solution. Besides maximum entropy discrimination, which is closely related to regularized normalized Winnow, a large margin version of unnormalized Winnow has also been proposed based on some heuristics [3, 4]. However, their algorithm was purely mistake driven without dual variables $\alpha^i$ (the algorithm does not compute an optimal stability hyperplane for the Winnow mistake bound). In addition, they did not include a regularization parameter $C$, which in practice may be important for non-separable problems. 3 Some statistical properties of regularized Winnows In this section, we derive some learning bounds based on our formulations that minimize the Winnow mistake bound. 
The following result is an analogy of a leave-one-out cross-validation bound for separable SVMs, Theorem 10.7 in [15]. Theorem 3.1 The expected misclassification error $\mathrm{err}_n$ with the true distribution, by using the hyperplane $\hat w$ obtained from the linearly separable ($C = \infty$) unnormalized regularized Winnow algorithm with $n$ training samples, is bounded by

$$\mathrm{err}_n \le \frac{1}{n+1}\, E \min\Bigl(K,\ 1.5\, \bar W \Bigl(\sum_j \hat w_j \ln \frac{\hat w_j}{\mu_j}\Bigr) \max_i \|x^i\|_\infty^2\Bigr),$$

where the right-hand-side expectation is taken with $n + 1$ random samples $(x^1, y^1), \ldots, (x^{n+1}, y^{n+1})$, and $K$ is the number of support vectors of the solution. Let $\hat w$ be the optimal solution using all the samples, with dual variables $\alpha^i$ for $i = 1, \ldots, n+1$. Let $w^k$ be the weight obtained from setting $\alpha^k = 0$; then $\bar W = \max(\|\hat w\|_1, \|w^1\|_1, \ldots, \|w^{n+1}\|_1)$. Proof Sketch. We only describe the major steps due to the limitation of space. Denote by $\tilde w^k$ the weight obtained from the optimal solution by removing $(x^k, y^k)$ from the training sample. Similar to the proof of Theorem 10.7 in [15], we need to bound the leave-one-out cross-validation error, which is at most $K$. Also note that the leave-one-out cross-validation error is at most $|\{k : \|\tilde w^k - \hat w\|_1 \|x^k\|_\infty \ge 1\}|$. We then use the following two inequalities: $\|\tilde w^k - \hat w\|_1^2 \le 2\bar W \bigl(\sum_j \tilde w^k_j - \hat w_j - \hat w_j \ln(\tilde w^k_j / \hat w_j)\bigr)$, and $\sum_j \tilde w^k_j - \hat w_j - \hat w_j \ln(\tilde w^k_j / \hat w_j) \le \sum_j w^k_j - \hat w_j - \hat w_j \ln(w^k_j / \hat w_j)$; the latter inequality can be obtained by comparing the dual objective functionals and by using the corresponding KKT conditions of the dual problem. The remaining problem is now reduced to proving that $|\{k : \sum_j w^k_j - \hat w_j - \hat w_j \ln(w^k_j / \hat w_j) \ge 1/(2\bar W \|x^k\|_\infty^2)\}| \le 1.5\, \bar W \bigl(\sum_j \hat w_j \ln \frac{\hat w_j}{\mu_j}\bigr) \max_i \|x^i\|_\infty^2$. For the dual formulation, by summing over the index $k$ of the first-order KKT condition with respect to the dual $\alpha^k$, multiplied by $\alpha^k$, one obtains $\sum_k \alpha^k = \sum_j \hat w_j \ln \frac{\hat w_j}{\mu_j}$. We thus only need to show that if $\sum_j w^k_j - \hat w_j - \hat w_j \ln(w^k_j / \hat w_j) \ge 1/(2\bar W \|x^k\|_\infty^2)$, then $\alpha^k \ge 2/(3\bar W \|x^k\|_\infty^2)$. This can be checked directly through Taylor expansion. 
□ By using the same technique, we may also obtain a bound for regularized normalized Winnow. One disadvantage of the above bound is that it is the expectation of a random estimator that is no better than the leave-one-out cross-validation error based on observed data. However, the bound does convey some useful information: for example, we can observe that the expected misclassification error (learning curve) converges at a rate of O(1/n) as long as W (Σ_j w_j ln(w_j/μ_j)) and sup ‖x‖∞ are reasonably bounded. It is also not difficult to obtain interesting PAC-style bounds by using the covering number result for entropy regularization in [16] and ideas in [14]. Although the PAC analysis would imply a slightly suboptimal learning curve of O(log n / n) for linearly separable problems, the bound itself provides a probability confidence and can be generalized to non-separable problems. We state below an example for non-separable problems, which justifies the entropy regularization. The bound itself is a direct consequence of Theorem 2.2 and a covering number result with entropy regularization in [16]. Note that, as in [14], the square root can be removed if k_γ = 0; γ can also be made data-dependent.

Theorem 3.2 If the data is infinity-norm bounded as ‖x‖∞ ≤ b, then consider the family Γ of hyperplanes w such that ‖w‖₁ ≤ a and Σ_j w_j ln(w_j/μ_j) ≤ c. Denote by err(w) the misclassification error of w with respect to the true distribution. Then there is a constant C such that for any γ > 0, with probability 1 − η over n random samples, any w ∈ Γ satisfies

err(w) ≤ k_γ/n + sqrt( (C/n) [ (b²/γ²)(a² + ac) ln(nab/γ + 2) + ln(1/η) ] ),

where k_γ = |{i : w^T x^i y^i < γ}| is the number of samples with margin less than γ.

4 An example

We use an artificial dataset to show that a regularized Winnow can enhance a Winnow just as an SVM can enhance a Perceptron.
In addition, it shows that for problems with many irrelevant features, the Winnow algorithms are superior to the Perceptron family of algorithms. The data in this experiment are generated as follows. We select an input data dimension d, with d = 500 or d = 5000. The first 5 components of the target linear weight w are set to ones; the 6th component is −1; and the remaining components are zeros. The linear threshold θ is 2. Data are generated as random vectors with each component randomly chosen to be either 0 or 1 with probability 0.5 each. Five percent of the data are given wrong labels. The remaining data are given correct labels, but we remove data with margins that are less than 1. One thousand training and one thousand test data are generated. We shall only consider balanced versions of the Winnows. We also compensate for the effect of θ by appending a constant 1 to each data point, as mentioned earlier. We use UWin and NWin to denote the basic unnormalized and normalized Winnows, respectively. LM-UWin and LM-NWin denote the corresponding large-margin versions. The SVM-style large-margin Perceptron is denoted as LM-Perc. We use 200 iterations over the training data for all algorithms. The initial values for the Winnows are set to be the priors: μ_j = 0.01. For online algorithms, we fix the learning rates at 0.01. For large-margin Winnows, we use learning rates η = 0.01 in the gradient ascent update. For the (2-norm regularized) large-margin Perceptron, we use the exact update, which corresponds to the choice η = 1/(x^{iT} x^i). Accuracies (in percentage) of different methods are listed in Table 1. For regularization methods, accuracies are reported with the optimal regularization parameters. The superiority of the regularized Winnows is obvious, especially for high-dimensional data. Accuracies of regularized algorithms with different regularization parameters are plotted in Figure 1. These behaviors are very typical for regularized algorithms.
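The data-generation procedure just described can be sketched as follows. This is a sketch: the function name is ours, and for simplicity the small-margin filter below is applied to all candidate examples before the 5% label noise, rather than only to the clean ones.

```python
import numpy as np

def make_winnow_dataset(n, d, rng):
    """Artificial data: target weight (1,1,1,1,1,-1,0,...,0), threshold 2,
    binary inputs, 5% label noise, small-margin examples removed."""
    w = np.zeros(d)
    w[:5] = 1.0
    w[5] = -1.0
    X, y = [], []
    while len(y) < n:
        x = rng.integers(0, 2, size=d).astype(float)
        margin = w @ x - 2.0            # signed distance from the threshold
        if abs(margin) < 1:             # remove small-margin examples
            continue
        label = np.sign(margin)
        if rng.random() < 0.05:         # 5% of labels are flipped
            label = -label
        X.append(x)
        y.append(label)
    return np.array(X), np.array(y)
```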
In practice, the optimal regularization parameter can be found by cross-validation.

Table 1: Test-set accuracy (in percentage) on the artificial dataset.

Figure 1: Test-set accuracy (in percentage) as a function of λ (panels: d = 500 and d = 5000).

5 Conclusion

In this paper, we derived regularized versions of Winnow online update algorithms. We studied algorithmic and theoretical properties of the newly obtained algorithms, and compared them to the Perceptron family of algorithms. Experimental results indicated that for problems with many irrelevant features, the Winnow family algorithms are superior to Perceptron family algorithms. This is consistent with the implications of both online learning theory and the learning bounds obtained in this paper.

References

[1] J.K. Anlauf and M. Biehl. The AdaTron: an adaptive perceptron algorithm. Europhys. Lett., 10(7):687-692, 1989.
[2] C. Cortes and V.N. Vapnik. Support-vector networks. Machine Learning, 20:273-297, 1995.
[3] I. Dagan, Y. Karov, and D. Roth. Mistake-driven learning in text categorization. In Proceedings of the Second Conference on Empirical Methods in NLP, 1997.
[4] A. Grove and D. Roth. Linear concepts and hidden variables. Machine Learning, 2000. To appear; early version appeared in NIPS-10.
[5] A.J. Grove, N. Littlestone, and D. Schuurmans. General convergence results for linear discriminant updates. In Proc. 10th Annu. Conf. on Comput. Learning Theory, pages 171-183, 1997.
[6] Tommi Jaakkola, Marina Meila, and Tony Jebara. Maximum entropy discrimination. In S.A. Solla, T.K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 470-476. MIT Press, 2000.
[7] T.S. Jaakkola, Mark Diekhans, and D. Haussler.
A discriminative framework for detecting remote protein homologies. Journal of Computational Biology, to appear.
[8] W. Kinzel. Statistical mechanics of the perceptron with maximal stability. In Lecture Notes in Physics, volume 368, pages 175-188. Springer-Verlag, 1990.
[9] J. Kivinen and M.K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. Journal of Information and Computation, 132:1-64, 1997.
[10] N. Littlestone. Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Machine Learning, 2:285-318, 1988.
[11] M. Opper. Learning times of neural networks: Exact solution for a perceptron algorithm. Phys. Rev. A, 38(7):3824-3826, 1988.
[12] F. Rosenblatt. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan, New York, 1962.
[13] Bernhard Schölkopf, Christopher J.C. Burges, and Alexander J. Smola, editors. Advances in Kernel Methods: Support Vector Learning. The MIT Press, 1999.
[14] J. Shawe-Taylor, P.L. Bartlett, R.C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Trans. Inf. Theory, 44(5):1926-1940, 1998.
[15] V.N. Vapnik. Statistical Learning Theory. John Wiley & Sons, New York, 1998.
[16] Tong Zhang. Analysis of regularized linear functions for classification problems. Technical Report RC-21572, IBM, 1999. Abstract in NIPS'99, pp. 370-376.
2000
FaceSync: A linear operator for measuring synchronization of video facial images and audio tracks

Malcolm Slaney¹
Interval Research
malcolm@ieee.org

Michele Covell²
Interval Research
covell@ieee.org

Abstract

FaceSync is an optimal linear algorithm that finds the degree of synchronization between the audio and image recordings of a human speaker. Using canonical correlation, it finds the best direction to combine all the audio and image data, projecting them onto a single axis. FaceSync uses Pearson's correlation to measure the degree of synchronization between the audio and image data. We derive the optimal linear transform to combine the audio and visual information and describe an implementation that avoids the numerical problems caused by computing the correlation matrices.

1 Motivation

In many applications, we want to know about the synchronization between an audio signal and the corresponding image data. In a teleconferencing system, we might want to know which of the several people imaged by a camera is heard by the microphones; then, we can direct the camera to the speaker. In post-production for a film, clean audio dialog is often dubbed over the video; we want to adjust the audio signal so that the lip sync is perfect. When analyzing a film, we want to know when the person talking is in the shot, instead of off camera. When evaluating the quality of dubbed films, we can measure how well the translated words and audio fit the actor's face. This paper describes an algorithm, FaceSync, that measures the degree of synchronization between the video image of a face and the associated audio signal. We can do this task by synthesizing the talking face, using techniques such as Video Rewrite [1], and then comparing the synthesized video with the test video. That process, however, is expensive. Our solution finds a linear operator that, when applied to the audio and video signals, generates an audio-video synchronization-error signal.
The linear operator gathers information from throughout the image and thus allows us to do the computation inexpensively. Hershey and Movellan [2] describe an approach based on measuring the mutual information between the audio signal and individual pixels in the video. The correlation between the audio signal, x, and one pixel in the image, y, is given by Pearson's correlation, r. The mutual information between these two variables is given by I(x, y) = −(1/2) log(1 − r²). They create movies that show the regions of the video that have high correlation with the audio; from the correlation data, they estimate the centroid of the activity pattern and find the talking face. They make no claim of their algorithm's ability to measure synchronization.

1. Currently at IBM Almaden Research, 650 Harry Road, San Jose, CA 95120.
2. Currently at YesVideo.com, 2192 Fortune Drive, San Jose, CA 95131.

Figure 1: Connections between linear models relating audio, video and fiduciary points.

Figure 2: Standard deviation of the aligned facial images used to create the canonical model.

FaceSync is an optimal linear detector, equivalent to a Wiener filter [3], which combines the information from all the pixels to measure audio-video synchronization. We developed our approach based on two surprisingly simple algorithms in computer vision and audiovisual speech synthesis: EigenPoints [4] and ATR's multilinear facial synthesizer [5]. The relationship of these two algorithms to each other and to our problem is shown in Figure 1. EigenPoints [4] is an algorithm that finds a linear mapping between the brightness of a video signal and the location of fiduciary points on the face. At first, the validity of this mapping is not obvious; we might not expect the brightness of pixels on a face to covary linearly with x and y coordinates.
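Hershey and Movellan's per-pixel measure, I(x, y) = −(1/2) log(1 − r²) with r the Pearson correlation, can be sketched in a few lines; the function and variable names are ours.

```python
import numpy as np

def pixel_mutual_information(audio, pixels):
    """Gaussian mutual information -0.5*log(1 - r^2) between one audio
    feature (T,) and each pixel trajectory (T, H, W), via Pearson's r."""
    a = audio - audio.mean()
    p = pixels - pixels.mean(axis=0)
    r = (a[:, None, None] * p).sum(axis=0) / (
        np.sqrt((a ** 2).sum()) * np.sqrt((p ** 2).sum(axis=0)))
    return -0.5 * np.log(1.0 - r ** 2)
```

Pixels that covary with the audio get a large score; independent pixels score near zero, which is how the talking-face region lights up in their movies.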
It turns out, however, that the brightness of the image pixels, i(x, y), and the location of fiduciary points such as the corner of the mouth, p_i = (x_i, y_i), describe a function in a high-dimensional space. In the absence of occlusion, the combined brightness-fiduciary function is smoothly varying. Thus the derivatives are defined and a Taylor-series approximation is valid. The real surprise is that EigenPoints can find a linear approximation that describes the brightness-fiduciary space, and this linear approximation is valid over a useful range of brightness and control-point changes. Similarly, Yehia, Rubin, and Vatikiotis-Bateson at ATR [5] have shown that it is possible to connect a specific model of speech, the line-spectral pairs or LSP, with the position of fiduciary points on the face. Their multilinear approximation yielded an average correlation of 0.91 between the true facial locations and those estimated from the audio data. We derive a linear approximation to connect brightness to audio without the intermediate fiduciary points. Neither linear mapping is exact, so we had to determine whether the direct path between brightness and audio could be well approximated by a linear transform. We describe FaceSync in the next section. Fisher and his colleagues [6] describe a more general approach that finds a non-linear mapping onto subspaces which maximize the mutual information. They report results using a single-layer perceptron for the non-linear mapping.

2 FaceSync Algorithm

FaceSync uses a face-recognition algorithm and canonical correlation to measure audiovisual synchrony. There are two steps: training, or building the canonical correlation model, and evaluating the fit of the model to the data. In both steps we use face-recognition software to find faces and align them with a sample face image.
In the training stage, canonical correlation finds a linear mapping that maximizes the cross-correlation between two signals: the aligned face image and the audio signal. Finally, given new audio and video data, we use the linear mapping to rotate a new aligned face and the audio signal into a common space where we can evaluate their correlation as a function of time. In both training and testing, we use a neural-network face-detection algorithm [7] to find portions of the image that contain a face. This approach uses a pyramid of images to search efficiently for pixels that look like faces. The software also allows the face to be tracked through a sequence of images, which reduces the computational overhead, but we did not use this capability in our experiments. The output of Rowley's face-detection algorithm is a rectangle that encloses the position of a face. We use this information to align the image data prior to correlational analysis. We investigated a number of ways to describe the audio signal. We looked at mel-frequency cepstral coefficients (MFCC) [8], linear-predictive coding (LPC) [8], line spectral frequencies (LSF) [9], spectrograms, and raw signal energy. For most calculations, we used MFCC analysis because it is a favorite front end for speech-recognition systems and, as do several of the other possibilities, it throws away the pitch information. This is useful because the pitch information affects the spectrogram in a non-linear manner and does not show up in the image data. For each form of audio analysis, we used a window size that was twice the frame interval (2/29.97 seconds). Canonical correlation analysis (CCA) uses jointly varying data from an input subspace x_i and an output subspace y_i to find canonical correlation matrices, A_x and A_y. These matrices whiten the input and output data, as well as making the cross-correlation diagonal and "maximally compact."
Specifically, the whitened data matrices are

η = A_x^T (x − x̄) and φ = A_y^T (y − ȳ),   (1)

and have the following properties:

E{ηη^T} = I, E{φφ^T} = I, E{φη^T} = Σ_K = diag{σ_1, σ_2, …, σ_L},   (2)

where 1 ≥ σ_1 ≥ σ_2 ≥ … > 0 and σ_{M+1} = … = σ_L = 0. In addition, for i starting from 1 and then repeating up to L, σ_i is the largest possible correlation between η_i and φ_i (where η_i and φ_i are the i-th elements of η and φ, respectively), given the norm and orthogonality constraints on η and φ expressed in equation 2. We refer to this property as maximal compaction, since the correlation is (recursively) maximally compacted into the leading elements of η and φ. We find the matrices A_x and A_y by whitening the input and output data,

x' = R_xx^{−1/2}(x − x̄) and y' = R_yy^{−1/2}(y − ȳ),   (3)

and then finding the left (U_K) and right (V_K) singular vectors of the cross-correlation matrix between the whitened data,

K = R_{y'x'} = R_yy^{−1/2} R_yx R_xx^{−1/2} = U_K Σ_K V_K^T.   (4)

The SVD gives the same type of maximal compaction that we need for the cross-correlation matrices A_x and A_y. Since the SVD is unique up to sign changes (and a couple of other degeneracies associated with repeated singular values), A_x and A_y must be

A_x = R_xx^{−1/2} V_K and A_y = R_yy^{−1/2} U_K.   (5)

We can verify this by calculating E{φη^T} using the definitions of φ and η,

φ = A_y^T (y − ȳ) = (R_yy^{−1/2} U_K)^T (y − ȳ) = U_K^T R_yy^{−1/2}(y − ȳ),   (6)
η = A_x^T (x − x̄) = (R_xx^{−1/2} V_K)^T (x − x̄) = V_K^T R_xx^{−1/2}(x − x̄),   (7)

then noting that

E{φη^T} = U_K^T R_yy^{−1/2} E{yx^T} R_xx^{−1/2} V_K = U_K^T R_yy^{−1/2} R_yx R_xx^{−1/2} V_K   (8)

and then, by using equation 4 (twice),

E{φη^T} = U_K^T K V_K = U_K^T (U_K Σ_K V_K^T) V_K = (U_K^T U_K) Σ_K (V_K^T V_K) = Σ_K.   (9)

This derivation of canonical correlation uses correlation matrices. This introduces a well-known problem due to doubling the dynamic range of the analysis data.
Instead, we formulate the estimation equations in terms of the components of the SVDs of the training data matrices. Specifically, we take the SVDs of the zero-mean input and output matrices:

[x_1 − x̄ … x_N − x̄] = √(N−1) U_x Σ_x V_x^T, [y_1 − ȳ … y_N − ȳ] = √(N−1) U_y Σ_y V_y^T.   (10)

From these two decompositions, we can write the two correlation matrices as

R_xx = U_x Σ_x² U_x^T, R_xx^{−1/2} = U_x Σ_x^{−1} U_x^T, R_yy = U_y Σ_y² U_y^T, R_yy^{−1/2} = U_y Σ_y^{−1} U_y^T,   (11)

and then write the cross-correlation matrix as

R_yx = U_y Σ_y V_y^T V_x Σ_x U_x^T.   (12)

Using these expressions for the correlation matrices, the K matrix becomes

K = (U_y Σ_y^{−1} U_y^T)(U_y Σ_y V_y^T V_x Σ_x U_x^T)(U_x Σ_x^{−1} U_x^T) = U_y V_y^T V_x U_x^T.   (13)

Now let's look at the quantity U_y^T K U_x in terms of its SVD,

U_y^T K U_x = V_y^T V_x = (U_y^T U_K) Σ_K (V_K^T U_x) = U_{UKU} Σ_K V_{UKU}^T,   (14)

and, due to the uniqueness of the SVD, note that

U_y^T U_K = U_{UKU} and U_x^T V_K = V_{UKU}.   (15, 16)

Now we can rewrite the equation for A_x to remove the need for the squaring operation,

A_x = R_xx^{−1/2} V_K = U_x Σ_x^{−1} (U_x^T V_K) = U_x Σ_x^{−1} V_{UKU},   (17)

and similarly for A_y,

A_y = R_yy^{−1/2} U_K = U_y Σ_y^{−1} (U_y^T U_K) = U_y Σ_y^{−1} U_{UKU}.   (18)

Using these identities, we compute A_x and A_y using the following steps: 1) Find the SVDs of the data matrices using the expressions in equation 10. 2) Form the rotated version of the cross-correlation matrix, V_y^T V_x, and compute its SVD using equation 14. 3) Compute the A_x and A_y matrices using equations 17 and 18. Given the linear mapping between audio data and the video images, as described by the A_x and A_y matrices, we measure the correlation between these two sets of data. For each candidate face in the image, we rotate the audio data by the first column of A_x, rotate the face image by the first column of A_y, and then compute Pearson's correlation of the rotated audio and video data. We use the absolute value of this correlation coefficient as a measure of audio-video synchronization.

3 Results

We evaluated the performance of the FaceSync algorithm using a number of tests.
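The three-step computation can be sketched with NumPy. This is a sketch under our own conventions (observations stored in columns, full-rank data matrices), not the authors' implementation.

```python
import numpy as np

def cca_via_svd(X, Y):
    """CCA rotations from SVDs of the zero-mean data matrices (steps 1-3).

    X: (dx, N), Y: (dy, N) with one observation per column.
    Returns Ax, Ay, and the canonical correlations.
    """
    N = X.shape[1]
    Xc = (X - X.mean(axis=1, keepdims=True)) / np.sqrt(N - 1)
    Yc = (Y - Y.mean(axis=1, keepdims=True)) / np.sqrt(N - 1)
    # Step 1: SVDs of the data matrices (equation 10).
    Ux, sx, Vxt = np.linalg.svd(Xc, full_matrices=False)
    Uy, sy, Vyt = np.linalg.svd(Yc, full_matrices=False)
    # Step 2: SVD of the rotated cross-correlation matrix Vy^T Vx (equation 14).
    U, s, Vt = np.linalg.svd(Vyt @ Vxt.T)
    # Step 3: assemble Ax and Ay without squaring the data (equations 17, 18).
    Ax = Ux @ np.diag(1.0 / sx) @ Vt.T
    Ay = Uy @ np.diag(1.0 / sy) @ U
    return Ax, Ay, s
```

Projecting each stream onto the first column of its rotation matrix gives the pair of one-dimensional signals whose Pearson correlation is the synchronization measure.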
In the simplest tests we measured FaceSync's sensitivity to small temporal shifts between the audio and the video signals, evaluated our performance as a function of testing-window size, and looked at different input representations. We also measured the effect of coarticulation. To train the FaceSync system, we used 19 seconds of video. We used Rowley's face-detection software to find a rectangle bounding the face, but we noticed a large amount (several pixels) of jitter in the estimated positions.

Figure 3: Optimum projections of the audio and video signals that maximize their cross-correlation.

Figure 4: Correlation of audio and video data as the audio data is shifted in time past the video (29.97 frames/sec).

Figure 2 shows the standard deviation of our aligned facial data. The standard deviation is high along the edges of the face, where small amounts of motion have a dramatic effect on the brightness, and around the mouth, where the image brightness changes with the spoken sounds. Figure 3 shows the results of the canonical-correlation analysis for the 7 (distinct) seconds of audio and video that we used for testing. Canonical correlation has rotated the two multidimensional signals (audio and image) into the directions that are maximally correlated with each other. Note that the transformed audio and image signals are correlated. We can evaluate the quality of these results by looking at the correlation of the two sets of data as the audio and image data are shifted relative to each other (such shifts are the kinds of errors that you would expect to see with bad lip sync). An example of such a test is shown in Figure 4.
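The shift test of Figure 4 can be sketched as follows, assuming the two streams have already been projected onto their first canonical directions; the function and variable names are ours.

```python
import numpy as np

def sync_correlation_vs_shift(audio_proj, video_proj, max_shift):
    """Absolute Pearson correlation between two 1-D projected signals as the
    audio is shifted in time past the video, as in the Figure 4 test."""
    corrs = []
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, v = audio_proj[s:], video_proj[:len(video_proj) - s]
        else:
            a, v = audio_proj[:s], video_proj[-s:]
        corrs.append(abs(np.corrcoef(a, v)[0, 1]))
    return np.array(corrs)
```

A well-synchronized pair shows a sharp peak at zero shift that falls off within a few frames.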
Note that, after only a few frames of shift (about 100 ms), the correlation between the audio and image data declined to close to zero. We used the approach described by Hershey and Movellan to analyze which parts of the facial image correlate best with the audio data. In their work, they computed correlations over 16-frame intervals. Since we used aligned data, we could measure the correlations accurately over our entire 9-second test sequence. Our results are shown in Figure 5: each pixel shows the correlation that we found using our data. This approach looks at each pixel individually and produces a maximum correlation near 0.45. Canonical correlation, which accumulates all the pixel information from all over the image, also produces a maximum correlation near 0.45, but by accumulating information from all over the image it allows us to measure synchronization without integrating over the full 9 seconds. Figure 6 shows FaceSync's ability to measure audio-visual synchronization as we varied the testing-window size. For short windows (less than 1.3 seconds), we had insufficient data to measure the correlation accurately. For long windows (greater than 2.6 seconds), we had sufficient data to average and minimize the effect of errors, but as a result did not have high time resolution. As shown in Figure 5, there is a peak in the correlation near 0 frame offset; there are often, however, large noise peaks at other shifts. Between 1.3 and 2.6 seconds of video produces reliable results. Different audio-analysis techniques provide different information to the FaceSync algorithm. Figure 7 shows the audio-video synchronization correlation, similar to Figure 3, for several different kinds of analysis. LPC and LSF produced identical narrow peaks; MFCC produced a slightly lower peak. Hershey used the power from the spectrogram in his algorithm to detect the visual motion.
However, our result for spectrogram data is in the noise, indicating that a linear model cannot use spectrogram data for fine-grain temporal measurements.

Figure 5: Correlation of each separate pixel and audio energy over the entire 9-second test sequence [2].

Figure 6: Performance of the FaceSync algorithm as a function of test-window length (20-, 40-, 80-, and 160-frame windows). We would like to see a large peak (dark line) for all frames at zero shift.

We also looked at FaceSync's performance when we enhanced the video model with temporal context. Normally, we use one image frame and 67 ms of audio data as our input and output data. For this experiment, we stacked 13 images to form the input to the canonical-correlation algorithm. Our performance did not vary as we added more visual context, probably indicating that a single image frame contained all of the information that the linear model was able to capture. As the preceding experiment shows, we did not improve the performance by adding more image context. We can, however, use the FaceSync framework with extended visual context to learn something about coarticulation. Coarticulation is a well-known effect in speech; the audio and physical state of the articulators depend not only on the current phoneme, but also on the past history of the phonemic sequence and on the future sounds. We let canonical correlation choose the most valuable data across the range of shifted video images. Summing the squared weighting terms gives us an estimate of how much weight canonical correlation assigned to each shifted frame of data. Figure 8 shows that one video frame (30 ms) before the current audio frame and four video frames (120 ms) after the current audio are affected by coarticulation.
Interestingly, the zero-shift frame is not the one that shows the maximum importance. Instead, the frames just before and after are more heavily weighted.

4 Conclusions

We have described an algorithm, FaceSync, that builds an optimal linear model connecting the audio and video recordings of a person's speech. The model allows us to measure the degree of synchronization between the audio and video, so that we can, for example, determine who is speaking or to what degree the audio and video are synchronized. While the goal of Hershey's process is not a temporal synchronization measurement, it is still interesting to compare the two approaches. Hershey's process does not take into account the mutual information between adjacent pixels; rather, it compares mutual information for individual pixels, then combines the results by calculating the centroid. In contrast, FaceSync asks what combination of audio and image data produces the best possible correlation, thus deriving a single optimal answer. Although the two algorithms both use Pearson's correlation to measure synchronization, FaceSync combines the pixels of the face and the audio information in an optimal detector. The performance of the FaceSync algorithm is dependent on both training and testing data sizes. We did not test the quality of our models as we varied the training data. We do the training calculation only once, using all the data we have.

Figure 7: Performance of the FaceSync algorithm for different kinds of input representations (MFCC, LPC, LSF, spectrogram, power).

Figure 8: Contributions of different frames (video in the past and in the future) to the optimum correlation with the audio frame.

Most interesting applications of FaceSync depend on the testing data, and we would like to know how much data is necessary to make a decision. In our FaceSync application, we have more dimensions (pixels in the image) than examples (video frames). Thus, our covariance matrices are singular, making their inversion, which we do as part of canonical correlation, problematic. We address the need for a pseudo-inverse, while avoiding the increased dynamic range of the covariance matrices, by using an SVD on the (unsquared) data matrices themselves (in place of an eigendecomposition of the covariance matrices). We demonstrated high linear correlations between the audio and video signals, after we first found the optimal projection direction by using canonical correlation. We evaluated the FaceSync algorithm by measuring the correlation between the audio and video signals as we shift the audio data relative to the image data. MFCC, LPC, and LSF all produce sharp correlations as we shift the audio and images, whereas speech power and spectrograms produce no correlation peak at all.

References

[1] C. Bregler, M. Covell, M. Slaney. "Video Rewrite: Driving visual speech with audio." Proc. SIGGRAPH 97, Los Angeles, CA, pp. 353-360, August 1997.
[2] J. Hershey, J. R. Movellan. "Audio-Vision: Locating sounds via audio-visual synchrony." Advances in Neural Information Processing Systems 12, edited by S. A. Solla, T. K. Leen, K.-R. Müller. MIT Press, Cambridge, MA (in press).
[3] L. L. Scharf, John K. Thomas. "Wiener filters in canonical coordinates for transform coding, filtering and quantizing." IEEE Transactions on Signal Processing, 46(3), pp. 647-654, March 1998.
[4] M. Covell, C. Bregler. "Eigenpoints." Proc. Int. Conf. Image Processing, Lausanne, Switzerland, Vol. 3, pp. 471-474, 1996.
[5] H. C. Yehia, P. E. Rubin, E. Vatikiotis-Bateson. "Quantitative association of vocal-tract and facial behavior." Speech Communication, 26, pp. 23-44, 1998.
[6] J. W. Fisher III, T. Darrell, W. T. Freeman, P. Viola.
"Learning Joint Statistical Models for AudioVisual Fusion and Segregation," This volume, 2001. [7] H. A. Rowley, S. Baluja, and T. Kanade. "Neural network- based face detection." IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1), pp. 23- 38, January 1998. [8] L. Rabiner, B. Juang. Fundamentals of Speech Recognition. Prentice Hall, Englewood Cliffs, New Jersey, 1993. [9] N. Sugamura, F. Itakura, "Speech analysis and synthesis methods developed at ECL in NTTFrom LPC to LSP." Speech Communications, 4(2), June 1986.
2000
Mixtures of Gaussian Processes

Volker Tresp
Siemens AG, Corporate Technology, Department of Neural Computation
Otto-Hahn-Ring 6, 81730 München, Germany
Volker.Tresp@mchp.siemens.de

Abstract

We introduce the mixture of Gaussian processes (MGP) model, which is useful for applications in which the optimal bandwidth of a map is input dependent. The MGP is derived from the mixture of experts model and can also be used for modeling general conditional probability densities. We discuss how Gaussian processes (in particular, in the form of Gaussian process classification, the support vector machine, and the MGP model) can be used for quantifying the dependencies in graphical models.

1 Introduction

Gaussian processes are typically used for regression, where it is assumed that the underlying function is generated by one infinite-dimensional Gaussian distribution (i.e., we assume a Gaussian prior distribution). In Gaussian process regression (GPR) we further assume that output data are generated by additive Gaussian noise, i.e., we assume a Gaussian likelihood model. GPR can be generalized by using likelihood models from the exponential family of distributions, which is useful for classification and the prediction of lifetimes or counts. The support vector machine (SVM) is a variant in which the likelihood model is not derived from the exponential family of distributions but rather uses functions with a discontinuous first derivative. In this paper we introduce another generalization of GPR in the form of the mixture of Gaussian processes (MGP) model, which is a variant of the well-known mixture of experts (ME) model of Jacobs et al. (1991). The MGP model allows Gaussian processes to model general conditional probability densities. An advantage of the MGP model is that it is fast to train compared to the neural network ME model. Even more interesting, the MGP model is one possible approach to addressing the problem of input-dependent bandwidth requirements in GPR.
Input-dependent bandwidth is useful if either the complexity of the map is input dependent (requiring a higher bandwidth in regions of high complexity) or the input data distribution is input dependent. In the latter case, one would prefer Gaussian processes with a higher bandwidth in regions with many data points and a lower bandwidth in regions with lower data density. If GPR models with different bandwidths are used, the MGP approach allows the system to self-organize by locally selecting the GPR model with the appropriate optimal bandwidth. Gaussian process classifiers, the support vector machine, and the MGP can be used to model the local dependencies in graphical models. Here, we are mostly interested in the case where the dependencies of a set of variables y are modified via Gaussian processes by a set of exogenous variables x. As an example, consider a medical domain in which a Bayesian network of discrete variables y models the dependencies between diseases and symptoms, and where these dependencies are modified by exogenous (often continuous) variables x representing quantities such as the patient's age, weight, or blood pressure. Another example would be collaborative filtering, where y might represent a set of goods and the correlation between customer preferences is modeled by a dependency network (another example of a graphical model). Here, exogenous variables such as income, gender, and social status might be useful quantities to modify those dependencies. The paper is organized as follows. In the next section we briefly review Gaussian processes and their application to regression. In Section 3 we discuss generalizations of the simple GPR model. In Section 4 we introduce the MGP model and present experimental results. In Section 5 we discuss Gaussian processes in the context of graphical models. In Section 6 we present conclusions.
2 Gaussian Processes

In Gaussian process regression (GPR) one assumes that a priori a function $f(x)$ is generated from an infinite-dimensional Gaussian distribution with zero mean and covariance $K(x, x_k) = \mathrm{cov}(f(x), f(x_k))$, where the $K(x, x_k)$ are positive definite kernel functions. In this paper we will only use Gaussian kernel functions of the form
$$K(x, x_k) = A \exp\left(-\frac{\|x - x_k\|^2}{2 s^2}\right)$$
with scale parameter $s$ and amplitude $A$. Furthermore, we assume a set of $N$ training data $D = \{(x_k, y_k)\}_{k=1}^N$ where targets are generated following a normal distribution with variance $\sigma^2$, such that
$$P(y|f(x)) \propto \exp\left(-\frac{1}{2\sigma^2}\left(f(x) - y\right)^2\right). \quad (1)$$
The expected value $\hat f(x)$ for an input $x$ given the training data is a superposition of the kernel functions of the form
$$\hat f(x) = \sum_{k=1}^N w_k K(x, x_k). \quad (2)$$
Here, $w_k$ is the weight on the $k$-th kernel. Let $K$ be the $N \times N$ Gram matrix with $(K)_{k,j} = \mathrm{cov}(f(x_k), f(x_j))$. Then we have the relation $f^m = K w$, where the components of $f^m = (f(x_1), \ldots, f(x_N))'$ are the values of $f$ at the locations of the training data and $w = (w_1, \ldots, w_N)'$. As a result of this relationship we can either calculate the optimal $w$, or we can calculate the optimal $f^m$ and then deduce the corresponding $w$-vector by matrix inversion. The latter approach is taken in this paper. Following the assumptions, the optimal $f^m$ minimizes the cost function
$$\frac{1}{2}(f^m)' K^{-1} f^m + \frac{1}{2\sigma^2} \|y - f^m\|^2 \quad (3)$$
such that $\hat f^m = K (K + \sigma^2 I)^{-1} y$. Here $y = (y_1, \ldots, y_N)'$ is the vector of targets and $I$ is the $N$-dimensional unit matrix.

3 Generalized Gaussian Processes and the Support Vector Machine

In generalized Gaussian processes the Gaussian prior assumption is maintained but the likelihood model is now derived from the exponential family of distributions. The most important special cases are two-class classification,
$$P(y=1|f(x)) = \frac{1}{1 + \exp(-f(x))},$$
and multiple-class classification. Here, $y$ is a discrete variable with $C$ states and
the class probabilities are given by a softmax,
$$P(y=i|f_1(x), \ldots, f_C(x)) = \frac{\exp(f_i(x))}{\sum_{j=1}^C \exp(f_j(x))}. \quad (4)$$
Note that for multiple-class classification $C$ Gaussian processes $f_1(x), \ldots, f_C(x)$ are used. Generalized Gaussian processes are discussed in Tresp (2000). The special case of classification was discussed by Williams and Barber (1998) from a Bayesian perspective. The related smoothing splines approaches are discussed in Fahrmeir and Tutz (1994). For generalized Gaussian processes, the optimization of the cost function is based on an iterative Fisher scoring procedure. Incidentally, the support vector machine (SVM) can also be considered to be a generalized Gaussian process model with
$$P(y|f(x)) \propto \exp\left(-\mathrm{const}\,(1 - y f(x))_+\right).$$
Here, $y \in \{-1, 1\}$, the operation $(\cdot)_+$ sets all negative values equal to zero, and const is a constant (Sollich, 2000).¹ The SVM cost function is particularly interesting since, due to its discontinuous first derivative, many components of the optimal weight vector $w$ are zero, i.e. we obtain sparse solutions.

4 Mixtures of Gaussian Processes

GPR employs a global scale parameter $s$. In many applications it might be more desirable to permit an input-dependent scale parameter: the complexity of the map might be input dependent, or the input data density might be nonuniform. In the latter case one might want to use a smaller scale parameter in regions with high data density. This is the main motivation for introducing another generalization of the simple GPR model, the mixture of Gaussian processes (MGP) model, which is a variant of the mixture of experts model of Jacobs et al. (1991). Here, a set of GPR models with different scale parameters is used and the system can autonomously decide which GPR model is appropriate for a particular region of input space. Let $F^\mu(x) = \{f_1^\mu(x), \ldots, f_M^\mu(x)\}$ denote this set of $M$ GPR models. The state of a discrete $M$-state variable $z$ determines which of the GPR models is active for a given input $x$.
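The GPR posterior mean from Section 2, $\hat f^m = K(K + \sigma^2 I)^{-1} y$, together with the kernel expansion $\hat f(x) = \sum_k w_k K(x, x_k)$, takes only a few lines of numpy. The kernel width, noise level and toy data below are illustrative choices, not from the paper:

```python
import numpy as np

def gaussian_kernel(X1, X2, A=1.0, s=0.5):
    """K(x, x_k) = A exp(-||x - x_k||^2 / (2 s^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return A * np.exp(-d2 / (2.0 * s ** 2))

def gpr_fit_predict(X, y, X_test, noise_var=1e-4):
    """Posterior mean: solve (K + sigma^2 I) w = y, then f(x) = sum_k w_k K(x, x_k)."""
    K = gaussian_kernel(X, X)
    w = np.linalg.solve(K + noise_var * np.eye(len(X)), y)
    return gaussian_kernel(X_test, X) @ w

X = np.linspace(-2.0, 2.0, 30)[:, None]   # toy 1-d inputs
y = np.sin(2.0 * X[:, 0])                 # noise-free toy targets
f_hat = gpr_fit_predict(X, y, X)          # posterior mean at the training points
```

With a small assumed noise variance the posterior mean nearly interpolates the targets, which is the behavior the closed-form solution predicts.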
The state of $z$ is estimated by an $M$-class classification Gaussian process model with
$$P(z=i|F^z(x)) = \frac{\exp(f_i^z(x))}{\sum_{j=1}^M \exp(f_j^z(x))},$$
where $F^z(x) = \{f_1^z(x), \ldots, f_M^z(x)\}$ denotes a second set of $M$ Gaussian processes. Finally, we use a set of $M$ Gaussian processes $F^\sigma(x) = \{f_1^\sigma(x), \ldots, f_M^\sigma(x)\}$ to model the input-dependent noise variance of the GPR models. The likelihood model given the state of $z$,
$$P(y|z=i, F^\mu(x), F^\sigma(x)) = G\left(y;\, f_i^\mu(x),\, \exp(2 f_i^\sigma(x))\right),$$
is a Gaussian centered at $f_i^\mu(x)$ and with variance $\exp(2 f_i^\sigma(x))$. The exponential is used to ensure positivity. Note that $G(a; b, c)$ is our notation for a Gaussian density with mean $b$ and variance $c$, evaluated at $a$. In the remaining parts of the paper we will not denote the dependency on the Gaussian processes explicitly, e.g. we will write $P(y|z, x)$ instead of $P(y|z, F^\mu(x), F^\sigma(x))$. Since $z$ is a latent variable we obtain, with
$$P(y|x) = \sum_{i=1}^M P(z=i|x)\, G\left(y;\, f_i^\mu(x),\, \exp(2 f_i^\sigma(x))\right), \qquad E(y|x) = \sum_{i=1}^M P(z=i|x)\, f_i^\mu(x),$$
the well known mixture of experts network of Jacobs et al. (1991), where the $f_i^\mu(x)$ are the (Gaussian process) experts and $P(z=i|x)$ is the gating network. Figure 2 (left) illustrates the dependencies in the MGP model.

¹Properly normalizing the conditional probability density is somewhat tricky and is discussed in detail in Sollich (2000).

4.1 EM Fisher Scoring Learning Rules

Although a priori the functions $f$ are Gaussian distributed, this is, in contrast to simple GPR in Section 2, not necessarily true for the posterior distribution, due to the nonlinear nature of the model. Therefore one is typically interested in the minimum of the negative logarithm of the posterior density,
$$-\sum_{k=1}^N \log \sum_{i=1}^M P(z=i|x_k)\, G\left(y_k;\, f_i^\mu(x_k),\, \exp(2 f_i^\sigma(x_k))\right) + \frac{1}{2}\sum_{i=1}^M (f_i^{\mu,m})' (\Sigma_i^{\mu,m})^{-1} f_i^{\mu,m} + \frac{1}{2}\sum_{i=1}^M (f_i^{z,m})' (\Sigma_i^{z,m})^{-1} f_i^{z,m} + \frac{1}{2}\sum_{i=1}^M (f_i^{\sigma,m})' (\Sigma_i^{\sigma,m})^{-1} f_i^{\sigma,m}.$$
The superscript $m$ denotes the vectors and matrices defined at the measurement points, e.g. $f_i^{\mu,m} = (f_i^\mu(x_1), \ldots, f_i^\mu(x_N))'$.
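Once the three sets of Gaussian process outputs are available at a point $x$, the predictive mixture density and conditional mean above are cheap to evaluate. The numeric values of $f^z$, $f^\mu$, $f^\sigma$ below are made up for illustration:

```python
import numpy as np

def mgp_predict(fz, fmu, fsigma, y_grid):
    """P(y|x) = sum_i P(z=i|x) G(y; f_i^mu, exp(2 f_i^sigma)) and E(y|x)."""
    gate = np.exp(fz - fz.max())
    gate /= gate.sum()                        # softmax gating: P(z = i | x)
    var = np.exp(2.0 * fsigma)                # input-dependent noise variances
    dens = np.exp(-(y_grid[:, None] - fmu[None, :]) ** 2 / (2.0 * var)) \
           / np.sqrt(2.0 * np.pi * var)       # G(y; f_i^mu, var_i) on the grid
    return dens @ gate, gate @ fmu            # mixture density, conditional mean

fz = np.array([1.0, 0.0, -1.0])               # hypothetical gating GP outputs at x
fmu = np.array([0.5, -0.2, 1.5])              # hypothetical expert means at x
fsigma = np.array([-1.0, -0.5, 0.0])          # hypothetical log-std outputs at x
y_grid = np.linspace(-6.0, 8.0, 2001)
p_y, cond_mean = mgp_predict(fz, fmu, fsigma, y_grid)
```

The mixture density integrates to one and the conditional mean is a gate-weighted average of the expert means, as in the equations above.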
In the E-step, based on the current estimates of the Gaussian processes at the data points, the state of the latent variable is estimated as
$$\hat P(z=i|x_k, y_k) = \frac{P(z=i|x_k)\, G\left(y_k;\, f_i^\mu(x_k),\, \exp(2 f_i^\sigma(x_k))\right)}{\sum_{j=1}^M P(z=j|x_k)\, G\left(y_k;\, f_j^\mu(x_k),\, \exp(2 f_j^\sigma(x_k))\right)}.$$
In the M-step, based on the E-step, the Gaussian processes at the data points are updated. We obtain
$$\hat f_i^{\mu,m} = \Sigma_i^{\mu,m} \left(\Sigma_i^{\mu,m} + \Psi_i^{\mu,m}\right)^{-1} y^m,$$
where $\Psi_i^{\mu,m}$ is a diagonal matrix with entries $(\Psi_i^{\mu,m})_{kk} = \exp(2 f_i^\sigma(x_k)) / \hat P(z=i|x_k, y_k)$. Note that data with a small $\hat P(z=i|x_k, y_k)$ obtain a small weight. To update the other Gaussian processes, iterative Fisher scoring steps have to be used, as shown in the appendix. There is a serious problem with overtraining in the MGP approach. The reason is that the GPR model with the highest bandwidth tends to obtain the highest weight in the E-step since it provides the best fit to the data. There is an easy fix for the MGP: for calculating the responses of the Gaussian processes at $x_k$ in the E-step we use all training data except $(x_k, y_k)$. Fortunately, this calculation is very cheap in the case of Gaussian processes, since for example
$$\hat f_{i \setminus k}^\mu(x_k) = y_k - \frac{y_k - \hat f_i^\mu(x_k)}{1 - S_{i,kk}},$$
where $\hat f_{i \setminus k}^\mu(x_k)$ denotes the estimate at the training data point $x_k$ not using $(x_k, y_k)$. Here, $S_{i,kk}$ is the $k$-th diagonal element of $S_i = \Sigma_i^{\mu,m} (\Sigma_i^{\mu,m} + \Psi_i^{\mu,m})^{-1}$.²

²See Hofmann (2000) for a discussion of the convergence of this type of algorithm.

Figure 1: The input data are generated from a Gaussian distribution with unit variance and mean 0. The output data are generated from a step function (o, bottom right). The top left plot shows the map formed by three GPR models with different bandwidths. As can be seen, no individual model achieves a good map. Then a MGP model was trained using the three GPR models. The top right plot shows the GPR models after convergence. The bottom left plot shows $P(z=i|x)$.
The GPR model with the highest bandwidth models the transition at zero, the GPR model with an intermediate bandwidth models the intermediate region, and the GPR model with the lowest bandwidth models the extreme regions. The bottom right plot shows the data (o) and the fit obtained by the complete MGP model, which is better than the map formed by any of the individual GPR models.

4.2 Experiments

Figure 1 illustrates how the MGP divides up a complex task into subtasks modeled by the individual GPR models (see caption). By dividing up the task, the MGP model can potentially achieve a performance which is better than the performance of any individual model. Table 1 shows results from artificial data sets and real-world data sets. In all cases, the performance of the MGP is better than the mean performance of the GPR models and also better than the performance of the mean (obtained by averaging the predictions of all GPR models).

5 Gaussian Processes for Graphical Models

Gaussian processes can be useful models for quantifying the dependencies in Bayesian networks and dependency networks (the latter were introduced in Hofmann and Tresp, 1998, and Heckerman et al., 2000), in particular when parent variables are continuous quantities. If the child variable is discrete, Gaussian process classification or the SVM are appropriate models, whereas when the child variable is continuous, the MGP model can be employed as a general conditional density estimator. Typically one would require that the continuous input variables x to the Gaussian process systems are known. It might therefore be

Table 1: The table shows results using artificial and real data sets of size N = 100 using M = 10 GPR models. The data set ART is generated by adding Gaussian noise with a standard deviation of 0.2 to a map defined by 5 normalized Gaussian bumps. num. in is the number of inputs. The bandwidth s was generated randomly between 0 and max. s. Furthermore, mean perf.
is the mean squared test set error of all GPR networks and perf. of mean is the mean squared test set error achieved by simply averaging the predictions. The last column shows the performance of the MGP.

Data     | num. in | max. s | mean perf. | perf. of mean | MGP
ART      |    1    |    1   |   0.0167   |    0.0080     | 0.0054
ART      |    2    |    3   |   0.0573   |    0.0345     | 0.0239
ART      |    5    |    6   |   0.1994   |    0.1383     | 0.0808
ART      |   10    |   10   |   0.1670   |    0.1135     | 0.0739
ART      |   20    |   20   |   0.1716   |    0.1203     | 0.0662
HOUSING  |   13    |   10   |   0.4677   |    0.3568     | 0.2634
BUPA     |    6    |   20   |   0.9654   |    0.9067     | 0.8804
DIABETES |    8    |   40   |   0.8230   |    0.7660     | 0.7275
WAVEFORM |   21    |   40   |   0.6295   |    0.5979     | 0.4453

useful to consider those as exogenous variables which modify the dependencies in a graphical model of y-variables, as shown in Figure 2 (right). As an example, consider a medical domain in which a Bayesian network of discrete variables y models the dependencies between diseases and symptoms, and where these dependencies are modified by exogenous (often continuous) variables x representing quantities such as the patient's age, weight or blood pressure. Another example would be collaborative filtering, where y might represent a set of goods and the correlation between customer preferences is modeled by a dependency network as in Heckerman et al. (2000). Here, exogenous variables such as income, gender and social status might be useful quantities to modify those correlations. Note that the GPR model itself can also be considered to be a graphical model with dependencies modeled as Gaussian processes (compare Figure 2). Readers might also be interested in the related and independent paper by Friedman and Nachman (2000), in which those authors used GPR systems (not in the form of the MGP) to perform structural learning in Bayesian networks of continuous variables.

6 Conclusions

We demonstrated that Gaussian processes can be useful building blocks for forming complex probabilistic models. In particular we introduced the MGP model and demonstrated how Gaussian processes can model the dependencies in graphical models.
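The cheap leave-one-out evaluation used to combat overtraining (Section 4.1) rests on a general identity for linear smoothers. Below is a small numerical check for a GPR-style smoother $S = K(K + \sigma^2 I)^{-1}$, with toy data and an ordinary unweighted noise term standing in for $\Psi_i^{\mu,m}$:

```python
import numpy as np

def kernel(A, B, s=1.0):
    """Gaussian kernel matrix between row sets A and B."""
    return np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / (2.0 * s ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=20)
noise = 0.1

K = kernel(X, X)
S = K @ np.linalg.inv(K + noise * np.eye(20))   # linear smoother S = K (K + s2 I)^-1
f_hat = S @ y                                   # fitted values at the training points

k = 7                                           # check the identity at one point
loo_shortcut = y[k] - (y[k] - f_hat[k]) / (1.0 - S[k, k])

mask = np.arange(20) != k                       # explicit refit without point k
w = np.linalg.solve(kernel(X[mask], X[mask]) + noise * np.eye(19), y[mask])
loo_explicit = (kernel(X[k:k + 1], X[mask]) @ w)[0]
```

The shortcut and the explicit refit agree to numerical precision, so each leave-one-out response costs only a diagonal element of $S$, not a full refit.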
7 Appendix

For $F^z$ and $F^\sigma$ the mode estimates are found by iterating Newton-Raphson (Fisher scoring) equations
$$f^{(l+1)} = f^{(l)} - H^{-1}(l)\, J(l),$$
where $J(l)$ is the Jacobian and $H(l)$ the Hessian matrix, for which certain interactions are ignored. One obtains for $l = 1, 2, \ldots$ the following update equations:
$$f_i^{z,m,(l+1)} = \Sigma_i^{z,m} \left(\Sigma_i^{z,m} + \Psi_i^{z,m,(l)}\right)^{-1} \left(\Psi_i^{z,m,(l)} d_i^{z,m,(l)} + f_i^{z,m,(l)}\right),$$
where
$$d_i^{z,m,(l)} = \left(\hat P(z=i|x_k, y_k) - P^{(l)}(z=i|x_k)\right)_{k=1}^N,$$
$$\Psi_i^{z,m,(l)} = \mathrm{diag}\left(\left[P^{(l)}(z=i|x_k)\left(1 - P^{(l)}(z=i|x_k)\right)\right]^{-1}\right)_{k=1}^N.$$

Figure 2: Left: The graphical structure of an MGP model consisting of the discrete latent variable z, the continuous variable y and input variable x. The probability density of z is dependent on the Gaussian processes $F^z$. The probability distribution of y is dependent on the state of z and on the Gaussian processes $F^\mu$, $F^\sigma$. Right: An example of a Bayesian network which contains the variables $y_1, y_2, y_3, y_4$. Some of the dependencies are modified by x via Gaussian processes $f_1, f_2, f_3$.

Similarly,
$$f_i^{\sigma,m,(l+1)} = \Sigma_i^{\sigma,m} \left(\Sigma_i^{\sigma,m} + \Psi_i^{\sigma,m,(l)}\right)^{-1} \left(\tfrac{1}{2}\left(v_i^{(l)} - e\right) + f_i^{\sigma,m,(l)}\right),$$
where $e$ is an $N$-dimensional vector of ones, $v_i^{(l)}$ has components $(y_k - f_i^\mu(x_k))^2 \exp(-2 f_i^{\sigma,(l)}(x_k))$, and $\Psi_i^{\sigma,m,(l)} = \mathrm{diag}\left(\left[2 \hat P(z=i|x_k, y_k)\right]^{-1}\right)_{k=1}^N$.

References
[1] Jacobs, R. A., Jordan, M. I., Nowlan, S. J., Hinton, G. E. (1991). Adaptive Mixtures of Local Experts. Neural Computation, 3.
[2] Tresp, V. (2000). The Generalized Bayesian Committee Machine. Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD-2000.
[3] Williams, C. K. I., Barber, D. (1998). Bayesian Classification with Gaussian Processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12).
[4] Fahrmeir, L., Tutz, G. (1994). Multivariate Statistical Modelling Based on Generalized Linear Models. Springer.
[5] Sollich, P. (2000). Probabilistic Methods for Support Vector Machines. In Solla, S. A., Leen, T. K., Müller, K.-R. (Eds.), Advances in Neural Information Processing Systems 12, MIT Press.
[6] Hofmann, R. (2000).
Lernen der Struktur nichtlinearer Abhängigkeiten mit graphischen Modellen. PhD Dissertation.
[7] Hofmann, R., Tresp, V. (1998). Nonlinear Markov Networks for Continuous Variables. In Jordan, M. I., Kearns, M. S., Solla, S. A. (Eds.), Advances in Neural Information Processing Systems 10, MIT Press.
[8] Heckerman, D., Chickering, D., Meek, C., Rounthwaite, R., Kadie, C. (2000). Dependency Networks for Inference, Collaborative Filtering, and Data Visualization. Journal of Machine Learning Research, 1.
[9] Friedman, N., Nachman, I. (2000). Gaussian Process Networks. In Boutilier, C., Goldszmidt, M. (Eds.), Proc. Sixteenth Conf. on Uncertainty in Artificial Intelligence (UAI).
2000
88
1,892
What can a single neuron compute? Blaise Agüera y Arcas,¹ Adrienne L. Fairhall,² and William Bialek² ¹Rare Books Library, Princeton University, Princeton, New Jersey 08544 ²NEC Research Institute, 4 Independence Way, Princeton, New Jersey 08540 blaisea@princeton.edu {adrienne, bialek}@research.nj.nec.com

Abstract

In this paper we formulate a description of the computation performed by a neuron as a combination of dimensional reduction and nonlinearity. We implement this description for the Hodgkin-Huxley model, identify the most relevant dimensions and find the nonlinearity. A two dimensional description already captures a significant fraction of the information that spikes carry about dynamic inputs. This description also shows that computation in the Hodgkin-Huxley model is more complex than a simple integrate-and-fire or perceptron model.

1 Introduction

Classical neural network models approximate neurons as devices that sum their inputs and generate a nonzero output if the sum exceeds a threshold. From our current state of knowledge in neurobiology it is easy to criticize these models as oversimplified: where is the complex geometry of neurons, or the many different kinds of ion channel, each with its own intricate multistate kinetics? Indeed, progress at this more microscopic level of description has led us to the point where we can write (almost) exact models for the electrical dynamics of neurons, at least on short time scales. These nearly exact models are complicated by any measure, including tens if not hundreds of differential equations to describe the states of different channels in different spatial compartments of the cell. Faced with this detailed microscopic description, we need to answer a question which goes well beyond the biological context: given a continuous dynamical system, what does it compute?
Our goal in this paper is to make this question about what a neuron computes somewhat more precise, and then to explore what we take to be the simplest example, namely the Hodgkin-Huxley model [1], [2] (and refs therein).

2 What do we mean by the question?

Real neurons take as inputs signals at their synapses and give as outputs sequences of discrete, identical pulses: action potentials or 'spikes'. The inputs themselves are spikes from other neurons, so the neuron is a device which takes $N \sim 10^3$ pulse trains as inputs and generates one pulse train as output. If the system operates at 2 msec resolution and the window of relevant inputs is 20 msec, then we can think of a single neuron as having an input described by a $\sim 10^4$ bit word (the presence or absence of a spike in each 2 msec bin for each presynaptic cell) which is then mapped to a one (spike) or zero (no spike). More realistically, if the average spike rates are $\sim 10\ \mathrm{sec}^{-1}$, the input words can be compressed by a factor of ten. Thus we might be able to think about neurons as evaluating a Boolean function of roughly 1000 Boolean variables, and then characterizing the computational function of the cell amounts to specifying this Boolean function. The above estimate, though crude, makes clear that there will be no direct empirical attack on the question of what a neuron computes: there are too many possibilities to learn the function by brute force from any reasonable set of experiments. Progress requires the hypothesis that the function computed by a neuron is not arbitrary, but belongs to a simple class. Our suggestion is that this simple class involves functions that vary only over a low dimensional subspace of the inputs, and in fact we will start by searching for linear subspaces. Specifically, we begin by simplifying away the spatial structure of neurons and take inputs to be just injected currents into a point-like neuron.
While this misses some of the richness in real cells, it allows us to focus on developing our computational methods. Further, it turns out that even this simple problem is not at all trivial. If the input is an injected current, then the neuron maps the history of this current, $I(t < t_0)$, into the presence or absence of a spike at time $t_0$. More generally we might imagine that the cell (or our description) is noisy, so that there is a probability of spiking $P[\mathrm{spike@}t_0 | I(t < t_0)]$ which depends on the current history. We emphasize that the dependence on the history of the current means that there still are many dimensions to the input signal even though we have collapsed any spatial variations. If we work at time resolution $\Delta t$ and assume that currents in a window of size $T$ are relevant to the decision to spike, then the inputs live in a space of $D = T/\Delta t$ dimensions, of order 100 in many interesting cases. If the neuron is sensitive only to a low dimensional linear subspace, we can define a set of signals $s_1, s_2, \ldots, s_K$ by filtering the current,
$$s_\mu = \int_0^\infty dt\, f_\mu(t)\, I(t_0 - t), \quad (1)$$
so that the probability of spiking depends only on this finite set of signals,
$$P[\mathrm{spike@}t_0 | I(t < t_0)] = P[\mathrm{spike@}t_0]\, g(s_1, s_2, \ldots, s_K), \quad (2)$$
where we include the average probability of spiking so that $g$ is dimensionless. If we think of the current $I(t < t_0)$ as a vector, with one dimension for each time sample, then these filtered signals are linear projections of this vector. In this formulation, characterizing the computation done by a neuron means estimating the number of relevant stimulus dimensions ($K$, hopefully much less than $D$), identifying the filters which project into this relevant subspace,¹ and then characterizing the nonlinear function $g(\mathbf{s})$. The classical perceptron-like cell of neural network theory has only one relevant dimension and a simple form for $g$.
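At finite resolution the projections in eq. (1) become plain dot products between discretized filters and the recent current history. A minimal sketch (the exponential filter is a made-up stand-in, not a filter recovered from the HH model):

```python
import numpy as np

dt = 0.5                          # msec, time resolution
T = 50.0                          # window of relevant history, msec
D = int(T / dt)                   # input dimensionality D = T / dt

t = np.arange(D) * dt
f1 = np.exp(-t / 10.0)            # hypothetical filter f_1(t)
f2 = np.gradient(f1, dt)          # a second filter: its derivative

rng = np.random.default_rng(1)
I_hist = rng.normal(size=D)       # samples of I(t0 - t) on the grid

# s_mu = integral of f_mu(t) I(t0 - t) dt  ->  discrete sum
s1 = np.dot(f1, I_hist) * dt
s2 = np.dot(f2, I_hist) * dt

# linearity of the projection: doubling the input doubles s_mu
s1_doubled = np.dot(f1, I_hist + I_hist) * dt
```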
3 Identifying low-dimensional structure

The idea that neurons might be sensitive only to low-dimensional projections of their inputs was developed explicitly in work on a motion sensitive neuron of the fly visual system [3]. Rather than looking at the distribution $P[\mathrm{spike@}t_0 | s(t < t_0)]$, with $s(t)$ the input signal (velocity of motion across the visual field in [3]), that work considered the distribution of signals conditional on the response, $P[s(t < t_0) | \mathrm{spike@}t_0]$; these are related by Bayes' rule,
$$P[\mathrm{spike@}t_0 | s(t < t_0)] = P[\mathrm{spike@}t_0]\, \frac{P[s(t < t_0) | \mathrm{spike@}t_0]}{P[s(t < t_0)]}. \quad (3)$$

¹Note that the individual filters don't really have any meaning; what is meaningful is the projection operator that is formed by the whole set of these filters. Put another way, the individual filters specify both a K-dimensional subspace and a coordinate system on this subspace, but there is no reason to prefer one coordinate system over another.

Within the response conditional ensemble $P[s(t < t_0) | \mathrm{spike@}t_0]$ we can compute various moments. Thus the spike triggered average stimulus, or reverse correlation function [4], is the first moment
$$\mathrm{STA}(\tau) = \int [ds]\, P[s(t < t_0) | \mathrm{spike@}t_0]\, s(t_0 - \tau). \quad (4)$$
We can also compute the covariance matrix of fluctuations around this average,
$$C_{\mathrm{spike}}(\tau, \tau') = \int [ds]\, P[s(t < t_0) | \mathrm{spike@}t_0]\, s(t_0 - \tau)\, s(t_0 - \tau') - \mathrm{STA}(\tau)\,\mathrm{STA}(\tau'). \quad (5)$$
In the same way that we compare the spike triggered average to some constant average level of the signal (which we can define to be zero) in the whole experiment, we want to compare the covariance matrix $C_{\mathrm{spike}}$ with the covariance of the signal averaged over the whole experiment,
$$C_{\mathrm{prior}}(\tau, \tau') = \int [ds]\, P[s(t < t_0)]\, s(t_0 - \tau)\, s(t_0 - \tau'). \quad (6)$$
Notice that all of these covariance matrices are $D \times D$ in size. The surprising finding of [3] was that the change in the covariance matrix, $\Delta C = C_{\mathrm{spike}} - C_{\mathrm{prior}}$, had only a very small number of nonzero eigenvalues.
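The covariance trick is easy to verify on synthetic data: a model neuron whose spiking depends on two linear projections of a white Gaussian stimulus yields a ΔC with exactly two outstanding eigenvalues. The filters and the spike rule below are invented for the test:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 20, 60000
stim = rng.normal(size=(N, D))            # white Gaussian stimulus vectors

f1 = np.zeros(D); f1[3] = 1.0             # two orthonormal "relevant" filters
f2 = np.zeros(D); f2[7] = 1.0
s1, s2 = stim @ f1, stim @ f2

spikes = (s1 > 1.0) & (np.abs(s2) < 1.0)  # g depends on s1 and s2 only
stim_spk = stim[spikes]

sta = stim_spk.mean(axis=0)               # spike triggered average, eq. (4)
C_spike = np.cov(stim_spk, rowvar=False)  # eq. (5)
C_prior = np.cov(stim, rowvar=False)      # eq. (6)
dC = C_spike - C_prior

eig = np.sort(np.abs(np.linalg.eigvalsh(dC)))[::-1]
```

The two largest eigenvalue magnitudes stand far above the sampling noise floor of the remaining D - 2 modes, and the STA points along the thresholded filter.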
In fact it can be shown that if the probability of spiking depends on K linear projections of the stimulus as in eq. (2), and if the inputs s(t) are chosen from a Gaussian distribution, then the rank of the matrix $\Delta C$ is exactly K. Further, the eigenvectors associated with nonzero eigenvalues span the relevant subspace (up to a rotation associated with the autocorrelations in the inputs). Thus eigenvalue analysis of the spike triggered covariance matrix gives us a direct way to search for a low dimensional linear subspace that captures the relevant stimulus features.

4 The Hodgkin-Huxley model

We recall the details of the Hodgkin-Huxley model and note some special features that guide our analysis. Hodgkin and Huxley [1] modeled the dynamics of the current through a patch of membrane by flow through ion-specific conductances:
$$I(t) = C \frac{dV}{dt} + \bar g_K n^4 (V - V_K) + \bar g_{Na} m^3 h (V - V_{Na}) + \bar g_l (V - V_l), \quad (7)$$
where K and Na subscripts denote potassium- and sodium-related variables, respectively, and l (for 'leakage') terms are a catch-all for other ion species with slower dynamics. C is the membrane capacitance. The subscripted voltages $V_l$ and $V_{Na}$ are ion-specific reversal potentials. $\bar g_l$, $\bar g_K$ and $\bar g_{Na}$ are empirically determined maximal conductances for the different ions,² and the gating variables n, m and h (on the interval [0,1]) have their own voltage-dependent dynamics:
$$\frac{dn}{dt} = \alpha_n(V)(1-n) - \beta_n(V)\,n, \qquad \frac{dm}{dt} = \alpha_m(V)(1-m) - \beta_m(V)\,m, \qquad \frac{dh}{dt} = \alpha_h(V)(1-h) - \beta_h(V)\,h, \quad (8)$$
with the standard rate functions
$$\alpha_n = \frac{0.1 - 0.01V}{\exp(1 - 0.1V) - 1}, \quad \beta_n = 0.125 \exp(-V/80), \quad \alpha_m = \frac{2.5 - 0.1V}{\exp(2.5 - 0.1V) - 1}, \quad \beta_m = 4 \exp(-V/18), \quad \alpha_h = 0.07 \exp(-V/20), \quad \beta_h = \frac{1}{\exp(3 - 0.1V) + 1},$$
with V in mV and t in msec. Here we are interested in dynamic inputs I(t), but it is important to remember that for constant inputs the Hodgkin-Huxley model undergoes a Hopf bifurcation to spike at a constant frequency; further, this frequency is rather insensitive to the precise value of the input above onset.
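A forward-Euler integration of eqs. (7-8) with the footnoted parameters takes a few lines. The rate functions below are the standard Hodgkin-Huxley forms in this sign convention, and the step size and constant drive current are illustrative, not the dynamic stimulus used in the paper:

```python
import numpy as np

# maximal conductances (mS/cm^2), reversal potentials (mV), capacitance (uF/cm^2)
gK, gNa, gl = 36.0, 120.0, 0.3
VK, VNa, Vl = -12.0, 115.0, 10.613
C = 1.0

def rates(V):
    an = (0.1 - 0.01 * V) / (np.exp(1.0 - 0.1 * V) - 1.0)
    bn = 0.125 * np.exp(-V / 80.0)
    am = (2.5 - 0.1 * V) / (np.exp(2.5 - 0.1 * V) - 1.0)
    bm = 4.0 * np.exp(-V / 18.0)
    ah = 0.07 * np.exp(-V / 20.0)
    bh = 1.0 / (np.exp(3.0 - 0.1 * V) + 1.0)
    return an, bn, am, bm, ah, bh

def simulate(I, dt=0.01, t_max=50.0):
    """Euler integration of eqs. (7-8); I is a constant current in uA/cm^2."""
    V = 0.0
    an, bn, am, bm, ah, bh = rates(V)
    n, m, h = an / (an + bn), am / (am + bm), ah / (ah + bh)  # steady-state gates
    trace = []
    for _ in range(int(t_max / dt)):
        an, bn, am, bm, ah, bh = rates(V)
        dV = (I - gK * n**4 * (V - VK) - gNa * m**3 * h * (V - VNa)
              - gl * (V - Vl)) / C
        V += dt * dV
        n += dt * (an * (1 - n) - bn * n)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        trace.append(V)
    return np.array(trace)
```

With I = 0 the membrane sits at its resting potential near 0 mV; with a constant drive of 10 uA/cm^2 the model fires action potentials, consistent with the constant-input behavior described above.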
This 'rigidity' of the system is felt also in many regimes of dynamic stimulation, and can be thought of as a strong interaction among successive spikes. These interactions lead to long memory times, reflecting the infinite phase memory of the periodic orbit which exists for constant input. While spike interactions are interesting, we want to focus on the way that input current modulates the probability of spiking. To separate these effects we consider only 'isolated' spikes. These are defined by accumulating the interspike interval distribution and noticing that for some intervals $t > t_c$ the distribution decays exponentially, which means that the system has lost memory of the previous spike; thus spikes which occur more than $t_c$ after the previous spike are isolated. In what follows we consider the response of the Hodgkin-Huxley model to currents I(t) with zero mean, 0.275 nA standard deviation, and 0.5 msec correlation time.

²We have used the original parameters, with a sign change for voltages: C = 1 μF/cm², $\bar g_K$ = 36 mS/cm², $\bar g_{Na}$ = 120 mS/cm², $\bar g_l$ = 0.3 mS/cm², $V_K$ = −12 mV, $V_{Na}$ = +115 mV, $V_l$ = +10.613 mV. We have taken our system to be a $\pi \times 30^2\ \mu$m² patch of membrane.

5 How many dimensions?

Fig. 1 shows the change in covariance matrix $\Delta C(\tau, \tau')$ for isolated spikes in our HH simulation, and fig. 2(a) shows the resulting spectrum of eigenvalues as a function of sample size. The result strongly suggests that there are many fewer than D relevant dimensions. In particular, there seem to be two outstanding modes; the STA itself lies largely in the subspace of these modes, as shown in Fig. 2(b).

Figure 1: The isolated spike triggered covariance matrix $\Delta C(\tau, \tau')$.

The filters themselves, shown in fig. 3, have simple forms; in particular the second mode is almost exactly the derivative of the first.
If the neuron filtered its inputs and generated a spike when the output of the filter crosses threshold, we would find that there are two significant dimensions, corresponding to the filter and its derivative. It is tempting to suggest, then, that this is a good approximation to the HH model, but we will see that this is not correct. Notice also that both filters have significant differentiating components; the cell is not simply integrating its inputs. Although fig. 2(a) suggests that two modes dominate, it also demonstrates that the smaller nonzero eigenvalues of the other modes are not just noise. The width of any spectral band of eigenvalues near zero due to finite sampling should decline with increasing sample size. However, the smaller eigenvalues seen in fig. 2(a) are stable. Thus while the system is primarily sensitive to two dimensions, there is something missing in this picture. To quantify this, we must first characterize the nonlinear function $g(s_1, s_2)$.

Figure 2: (a) Convergence of the largest 32 eigenvalues of the isolated spike triggered covariance with increasing sample size. (b) Projections of the isolated STA onto the covariance modes.

Figure 3: Most significant two modes of the spike-triggered covariance (eigenmodes 1 and 2, with the normalized derivative of mode 1 shown for comparison).

6 Nonlinearity and information

At each instant of time we can find the relevant projections of the stimulus $s_1$ and $s_2$. By construction, the distribution of these signals over the whole experiment, $P(s_1, s_2)$, is Gaussian. On the other hand, each time we see a spike we get a sample from the distribution $P(s_1, s_2|\mathrm{spike@}t_0)$, leading to the picture in fig. 4. The prior and spike conditional distributions clearly are better separated in two dimensions than in one, which means that our two dimensional description captures more than the spike triggered average. Further, we see that the spike conditional distribution is curved, unlike what we would expect for a simple thresholding device. Combining eqs. (2) and (3), we have
Further, we see that the spike conditional distribution is curved, unlike what we would expect for a simple thresholding device. Combining eq's. (2) and (3), we have ( ) _ P(81,82Ispike@to) 9 81, 82 P( ) , 81,82 (9) so that these two distributions determine the input/output relation of the neuron in this 2D space. We emphasize that although the subspace is linear, 9 can have arbitrary nonlinearity. Fig. 4 shows that this input/output relation has sharp edges, but also some fuzziness. The HH model is deterministic, so in principle the input/output relation should be a c5 function: spikes occur only when certain exact conditions are met. Of course we have blurred things a bit by working at finite time -w 2 o ~ .~ "0 "E a '" "0 ~ :§.. N en -2 -4 4 ~ a 2 s, (standard deviations) Figure 4: 104 spike-conditional stimuli projected along the first 2 covariance modes. The circles represent the cumulative radial integral of the prior distribution from 00; the ring marked 10-4, for example, encloses 1 - 10-4 of the prior. resolution. Given that we work at finite llt, spikes carry only a finite amount of information, and the quality of our 2D approximation can be judged by asking how much of this information is captured by this description. As explained in [5], the arrival time of a single spike provides an information lonespike = ( r~) log2 [r~)] ), (10) where r(t) is the time dependent spike rate, f is the average spike rate, and ( . . . ) denotes an average over time. With a deterministic model like HH, the rate r(t) either is zero or corresponds to one spike occurring in one bin of size llt, that is r = l/11t. The result is that lonespike = -log2(fllt). On the other hand, if the probability of spiking really depends only on the stimulus dimensions 81 and 82, we can substitute r(t) P(81,82Ispike@t) -+ ---'--=::::-:-=--'--=----:---"f P(81,82)' (11) and use the ergodicity of the stimulus to replace time averages in Eq. (10). 
Then we find [3, 5]
$$I_{\mathrm{one\ spike}}^{s_1, s_2} = \int ds_1\, ds_2\, P(s_1, s_2|\mathrm{spike}) \log_2\left[\frac{P(s_1, s_2|\mathrm{spike})}{P(s_1, s_2)}\right]. \quad (12)$$
If our two dimensional approximation were exact we would find $I_{\mathrm{one\ spike}}^{s_1, s_2} = I_{\mathrm{one\ spike}}$; more generally we will find $I_{\mathrm{one\ spike}}^{s_1, s_2} \le I_{\mathrm{one\ spike}}$, and the fraction of the information we capture measures the quality of the approximation. This fraction is plotted in fig. 5 as a function of time resolution. For comparison, we also show the information captured by considering only the stimulus projection along the STA.

Figure 5: Fraction of spike timing information captured by the STA (lower curve) and by projection onto covariance modes 1 and 2 (upper curve), as a function of time discretization (msec).

7 Discussion

The simple, low-dimensional model described captures a substantial amount of information about spike timing for a HH neuron. The fraction is maximal near $\Delta t = 5.5$ msec, reaching nearly 70%. However, the absolute information captured saturates for both the 1D and 2D cases, at approximately 3.5 and 5 bits respectively, for smaller $\Delta t$. Hence the information fraction captured plummets; recovering precise spike timing requires a more complex, higher dimensional representation of the stimulus. Is this effect important, or is timing at this resolution too noisy for this extra complexity to matter in a real neuron? Stochastic HH simulations have suggested that, when realistic noise sources are taken into account, the timing of spikes in response to dynamic stimuli is reproducible to within 1-2 msec [6]. This suggests that such timing details may indeed be important. Even in 2D, one can observe that the spike conditional distribution is curved (fig. 4); it is likely to curve along other dimensions as well. It may be possible to improve our approximation by considering the computation to take place on a low-dimensional but curved manifold, instead of a linear subspace. The curvature in Fig.
4 also implies that the computation in the HH model is not well approximated by an integrate-and-fire model, or by a perceptron model limited to linear separations. Characterizing the complexity of the computation is an important step toward understanding neural systems. How to quantify this complexity theoretically is an area for future work; here, we have made progress toward this goal by describing such computations in a compact way and then evaluating the completeness of the description using information. The techniques presented are applicable to more complex models, and of course to real neurons. How does the addition of more channels increase the complexity of the computation? Will this add more relevant dimensions, or does the nonlinearity change?

References
[1] A. Hodgkin and A. Huxley. J. Physiol., 117, 1952.
[2] C. Koch. Biophysics of Computation. New York: Oxford University Press, 1999.
[3] W. Bialek and R. de Ruyter van Steveninck. Proc. R. Soc. Lond. B, 234, 1988.
[4] F. Rieke, D. Warland, R. de Ruyter van Steveninck and W. Bialek. Spikes: Exploring the Neural Code. Cambridge, MA: MIT Press, 1997.
[5] N. Brenner, S. Strong, R. Koberle, W. Bialek and R. de Ruyter van Steveninck. Neural Comp., 12, 2000.
[6] E. Schneidman, B. Freedman and I. Segev. Neural Comp., 10, 1998.
A tighter bound for graphical models M.A.R. Leisink* and H.J. Kappen† Department of Biophysics University of Nijmegen, Geert Grooteplein 21 NL 6525 EZ Nijmegen, The Netherlands {martijn,bert}@mbfys.kun.nl Abstract We present a method to bound the partition function of a Boltzmann machine neural network with any odd order polynomial. This is a direct extension of the mean field bound, which is first order. We show that the third order bound is strictly better than mean field. Additionally we give a rough outline of how this bound is applicable to sigmoid belief networks. Numerical experiments indicate that an error reduction of a factor of two is easily reached in the region where expansion based approximations are useful. 1 Introduction Graphical models have the capability to model a large class of probability distributions. The neurons in these networks are the random variables, whereas the connections between them model the causal dependencies. Usually, some of the nodes have a direct relation with the random variables in the problem and are called 'visibles'. The other nodes, known as 'hiddens', are used to model more complex probability distributions. Learning in graphical models can be done as long as the likelihood that the visibles correspond to a pattern in the data set can be computed. In general the time this takes scales exponentially with the number of hidden neurons. For such architectures one has no other choice than to use an approximation for the likelihood. A well known approximation technique from statistical mechanics, called Gibbs sampling, was applied to graphical models in [1]. More recently, the mean field approximation known from physics was derived for sigmoid belief networks [2]. For this type of graphical model the parental dependency of a neuron is modelled by a non-linear (sigmoidal) function of the weighted parent states [3]. It turns out that the mean field approximation has the nice feature that it bounds the likelihood from below.
This is useful for learning, since a maximisation of the bound either increases its accuracy or increases the likelihood for a pattern in the data set, which is the actual learning process. In this article we show that it is possible to improve the mean field approximation without losing the bounding properties. (*http://www.mbfys.kun.nl/~martijn, †http://www.mbfys.kun.nl/~bert) In section 2 we show the general theory to create a new bound using an existing one, which is applied to a Boltzmann machine in section 3. Boltzmann machines are another type of graphical model. In contrast with belief networks the connections are symmetric and not directed [4]. A mean field approximation for this type of neural network was already described in [5]. An improvement of this approximation was found by Thouless, Anderson and Palmer in [6], which was applied to Boltzmann machines in [7]. Unfortunately, this so-called TAP approximation is not a bound. We apply our method to the mean field approximation, which results in a third order bound. We prove that the latter is always tighter. Due to the limited space it is not possible to discuss the third order bound for sigmoid belief networks in much detail. Instead, we show the general outline and focus more on the experimental results in section 5. Finally, in section 6, we present our conclusions. 2 Higher order bounds Suppose we have a function f_0(x) and a bound b_0(x) such that ∀x f_0(x) ≥ b_0(x). Let f_1(x) and b_1(x) be two primitive functions of f_0(x) and b_0(x),

f_1(x) = ∫ dx f_0(x)  and  b_1(x) = ∫ dx b_0(x)   (1)

such that f_1(ν) = b_1(ν) for some ν. Note that we can always add an appropriate constant to the primitive functions such that they are indeed equal at x = ν.
Since the surface under f_0(x) to the left as well as to the right of x = ν is obviously greater than the surface under b_0(x), and the primitive functions are equal at x = ν (by construction), we know

f_1(x) ≤ b_1(x) for x ≤ ν,  f_1(x) ≥ b_1(x) for x ≥ ν   (2)

or in shorthand notation f_1(x) ⋚ b_1(x). It is important to understand that even if f_0(ν) > b_0(ν) the above result holds. Therefore we are completely free to choose ν. If we repeat this and let f_2(x) and b_2(x) be two primitive functions of f_1(x) and b_1(x), again such that f_2(ν) = b_2(ν), one can easily verify that ∀x f_2(x) ≥ b_2(x). Thus given a lower bound of f_0(x) we can create another lower bound. In case the given bound is a polynomial of degree k, the new bound is a polynomial of degree k+2 with one additional variational parameter. To illustrate this procedure, we derive a third order bound on the exponential function starting with the well known linear bound: the tangent of the exponential function at x = ν. Using the procedure of the previous section we derive

∀x,ν  f_0(x) = e^x ≥ e^ν (1 + x − ν) = b_0(x)   (3)
f_1(x) = e^x ⋚ e^μ + e^ν ((1 + μ − ν)(x − μ) + ½(x − μ)²) = b_1(x)   (4)
∀x,μ,λ  f_2(x) = e^x ≥ e^μ {1 + x − μ + e^λ ((1 − λ)/2 (x − μ)² + (1/6)(x − μ)³)} = b_2(x)   (5)

with λ = ν − μ. 3 Boltzmann machines In this section we derive a third order lower bound on the partition function of a Boltzmann machine neural network using the results from the previous section. The probability to find a Boltzmann machine in a state s ∈ {−1,+1}^N is given by

P(s) = (1/Z) exp(−E(s)) = (1/Z) exp(½ θ_{ij} s_i s_j + θ_i s_i)   (6)

There is an implicit summation over all repeated indices (Einstein's convention). Z = Σ_{all s} exp(−E(s)) is the normalisation constant known as the partition function, which requires a sum over all, exponentially many, states. Therefore this sum is intractable to compute even for rather small networks.
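The chain of bounds (3)-(5) is easy to check numerically. The following sketch (not from the paper; the grid and random parameter values are arbitrary illustrative choices) verifies that the first and third order expressions really do lie below the exponential for any choice of the variational parameters:

```python
import numpy as np

def b0(x, nu):
    # First order tangent bound, eq. (3): e^x >= e^nu (1 + x - nu)
    return np.exp(nu) * (1 + x - nu)

def b2(x, mu, lam):
    # Third order bound, eq. (5), with lam = nu - mu
    return np.exp(mu) * (1 + x - mu
                         + np.exp(lam) * ((1 - lam) / 2 * (x - mu) ** 2
                                          + (x - mu) ** 3 / 6))

x = np.linspace(-5.0, 5.0, 2001)
rng = np.random.default_rng(0)
for _ in range(100):
    mu, nu = rng.normal(size=2)
    assert np.all(np.exp(x) - b0(x, nu) >= -1e-8)          # eq. (3) holds
    assert np.all(np.exp(x) - b2(x, mu, nu - mu) >= -1e-8) # eq. (5) holds
    # The third order bound touches e^x at x = mu
    assert np.isclose(b2(mu, mu, nu - mu), np.exp(mu))
```

The small negative slack only absorbs floating-point rounding; in exact arithmetic the differences are non-negative everywhere.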
To compute the partition function approximately, we use the third order bound¹ from equation 5. We obtain

Z = Σ_{all s} exp(−E(s)) ≥ Σ_{all s} e^{μ(s)} {1 − ΔE + e^{λ(s)} ((1 − λ(s))/2 ΔE² − (1/6) ΔE³)}   (7)

where ΔE = μ(s) + E(s). Note that the former constants μ and λ are now functions of s, since we may take different values for μ and λ for each term in the sum. In principle these functions can take any form. If we take, for instance, μ(s) = −E(s) the approximation is exact. This would lead, however, to the same intractability as before and therefore we must restrict our choice to those that make equation 7 tractable to compute. We choose μ(s) and λ(s) to be linear with respect to the neuron states s_i:

μ(s) = μ_0 + μ_i s_i  and  λ(s) = λ_0 + λ_i s_i   (8)

One may view μ(s) and λ(s) as (the negative of) the energy functions for the Boltzmann distributions P ∝ exp(μ(s)) and P ∝ exp(λ(s)). Therefore we will sometimes speak of 'the distribution μ(s)'. Since these linear energy functions correspond to factorised distributions, we can compute the right hand side of equation 7 in a reasonable time, O(N³). To obtain the tightest bound, we may maximise equation 7 with respect to its variational parameters μ_0, μ_i, λ_0 and λ_i. A special case of the third order bound Although it is possible to choose λ_i ≠ 0, we will set them to the suboptimal value λ_i = 0, since this simplifies equation 7 enormously. The reader should keep in mind, however, that all calculations could be done with non-zero λ_i. Given this choice we can compute the optimal values for μ_0 and λ_0, given by

μ_0 = −⟨E + μ_i s_i⟩  and  λ_0 = −⟨ΔE³⟩ / (3⟨ΔE²⟩)   (9)

where ⟨·⟩ denotes an average over the (factorised) distribution μ(s). Using this solution the bound reduces to the simple form

log Z ≥ log Z_μ + log {1 + ½ e^{λ_0} ⟨ΔE²⟩}   (10)

¹Using the first order bound from equation 3 results in the standard mean field bound. where Z_μ is the partition function of the distribution μ(s). The term ⟨ΔE²⟩ corresponds to the variance of E + μ_i s_i with respect to the distribution μ(s), since μ_0 = −⟨E + μ_i s_i⟩. λ_0 is proportional to the third order moment according to (9).
Explicit expressions for these moments can be derived with patience. There is no explicit expression for the optimal μ_i as is the case with the standard mean field equations. An implicit expression, however, follows from setting the derivative with respect to μ_i to zero. We solved for μ_i numerically by iteration. Wherever we speak of 'fully optimised', we refer to this solution for μ_i. Connection with standard mean field and TAP We like to focus for a moment on the suboptimal case where the μ_i correspond to the mean field solution, given by

∀i  m_i ≝ tanh μ_i = tanh(θ_i + θ_{ij} m_j)   (11)

For this choice of μ_i the log Z_μ term in equation 10 is equal to the optimal mean field bound². Since the last term in equation 10 is always positive, we conclude that the third order bound is always tighter than the mean field bound. The relation between TAP and the third order bound is clear in the region of small weights. If we assume that O(θ_{ij}³) is negligible, a small weight expansion of equation 10 yields

log Z ≥ log Z_μ + log {1 + ½ e^{λ_0} ⟨ΔE²⟩} ≈ log Z_μ + ¼ θ_{ij}² (1 − m_i²)(1 − m_j²)   (12)

where the last term is equal to the TAP correction term [7]. Thus the third order bound tends to the TAP approximation for small weights. For larger weights, however, the TAP approximation overestimates the partition function, whereas the third order approximation is still a bound. 4 Sigmoid belief networks In the previous section we saw how to derive a third order bound on the partition function. For sigmoid belief networks³ we can use the same strategy to obtain a third order bound on the likelihood of the visible neurons of the network being in a particular state. In this article, we present a rough outline of our method. The full derivation will be presented elsewhere. It turns out that these graphical models are comparable to Boltzmann machines to a large extent.
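The ordering "mean field ≤ third order ≤ exact" can be verified by brute force on a network small enough to enumerate. The sketch below is not the authors' code: the network size, coupling scale, random seed, and fixed-point damping are illustrative choices. It evaluates the mean field solution (11), then the optimal μ_0 and λ_0 of (9) and the bound (10), and compares against the exact log Z:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N = 8
theta = 0.5 / np.sqrt(N) * rng.normal(size=(N, N))
theta = (theta + theta.T) / 2          # symmetric couplings
np.fill_diagonal(theta, 0.0)
h = 0.1 * rng.normal(size=N)           # thresholds theta_i

states = np.array(list(itertools.product([-1, 1], repeat=N)))  # all 2^N states
energies = -(0.5 * np.einsum('si,ij,sj->s', states, theta, states)
             + states @ h)             # E(s) as in eq. (6)
logZ = np.log(np.sum(np.exp(-energies)))  # exact partition function

# Mean field fixed point, eq. (11): m_i = tanh(theta_i + theta_ij m_j)
m = np.zeros(N)
for _ in range(500):
    m = 0.5 * m + 0.5 * np.tanh(h + theta @ m)   # damped iteration
mu = np.arctanh(m)                     # mu_i with tanh(mu_i) = m_i

# Averages under the factorised distribution p(s) ∝ exp(mu_i s_i)
p = np.prod((1 + states * m) / 2, axis=1)
mu0 = -np.sum(p * (energies + states @ mu))      # eq. (9), so <dE> = 0
dE = mu0 + states @ mu + energies                # dE = mu(s) + E(s)
dE2, dE3 = np.sum(p * dE**2), np.sum(p * dE**3)
lam0 = -dE3 / (3 * dE2)                          # eq. (9)

logZ_mf = mu0 + np.sum(np.log(2 * np.cosh(mu)))  # mean field bound log Z_mu
logZ_3 = logZ_mf + np.log(1 + 0.5 * np.exp(lam0) * dE2)  # eq. (10)

assert logZ_mf <= logZ_3 <= logZ + 1e-9
```

Note that both inequalities hold for any μ_i, not only at the exact fixed point, since (10) is a valid lower bound for arbitrary variational parameters.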
The energy function E(s) (as in equation 6), however, differs for sigmoid belief networks:

−E(s) = θ_{ij} s_i s_j + θ_i s_i − Σ_p log 2cosh(θ_{pi} s_i + θ_p)   (13)

The last term, known as the local normalisation, does not appear in the Boltzmann machine energy function. We have similar difficulties as with the Boltzmann machine if we want to compute the log-likelihood, given by

log L = log Σ_{s ∈ Hidden} P(s) = log Σ_{s ∈ Hidden} exp(−E(s))   (14)

²Be aware of the fact that μ(s) contains the parameter μ_0 = −⟨E + μ_i s_i⟩. This gives an important contribution to the expression for log Z_μ. ³A detailed description of these networks can be found in [3]. Figure 1: The exact partition function and three approximations: (1) Mean field, (2) TAP and (3) Fully optimised third order, plotted against the standard deviation of the weights, σ₂. The standard deviation of the thresholds is 0.1. Each point was averaged over a hundred randomly generated networks of 20 neurons. The inner plot shows the behaviour of the approximating functions for small weights. In contrast with the Boltzmann machine, we are not finished after using equation 7 to bound L. Due to the non-linear log 2cosh term in the sigmoid belief energy, the bound so obtained is still intractable to compute. Therefore it is necessary to derive an additional bound such that the approximated likelihood is tractable to compute (this is comparable to the additional bound used in [2]). We make use of the concavity of the log function to find a straight line upper bound⁴: ∀ξ log x ≤ e^ξ x − ξ − 1. We use this inequality to bound the log 2cosh term in equation 13 for each p separately, where we choose ξ_p to be ξ_p(s) = ξ_{pi} s_i + ξ_p. In this way we obtain a new energy function Ẽ(s) which is an upper bound on the original energy.
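The tangent-line bound on the log used above is quick to check numerically; a minimal sketch (grid and ξ values are arbitrary illustrative choices):

```python
import numpy as np

# Concavity bound: for every xi, log x <= e^xi * x - xi - 1,
# with equality at the tangent point x = e^(-xi).
x = np.linspace(1e-3, 50.0, 5000)
for xi in [-2.0, 0.0, 1.5]:
    assert np.all(np.log(x) <= np.exp(xi) * x - xi - 1 + 1e-12)

x0 = np.exp(2.0)  # tangent point for xi = -2
assert np.isclose(np.log(x0), np.exp(-2.0) * x0 - (-2.0) - 1)
```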
It is obvious that the following inequalities hold

L = Σ_{s ∈ Hidden} exp(−E(s)) ≥ Σ_{s ∈ Hidden} exp(−Ẽ(s)) ≥ B(Ẽ, μ, λ)   (15)

where the last inequality is equal, apart from the tilde, to equation 7. It turns out that this bound has a worst case computational complexity of O(N⁴), which makes it tractable for a large class of networks. 5 Results 5.1 Boltzmann machines In this section we compare the third order bound for Boltzmann machines with (1) the exact partition function, (2) the standard mean field bound and (3) the TAP approximation. For this purpose we created networks of N = 20 neurons with thresholds drawn from a Gaussian with zero mean and σ₁ = 0.1, and weights drawn from a Gaussian with zero mean and standard deviation σ₂/√N, a so-called SK-model [8]. ⁴This bound is also derivable using the method from section 2 with f_0(x) = 1/x² ≥ 0. In figure 1 the exact partition function versus σ₂ is shown. In the same figure the mean field and fully optimised third order bound are shown together with the TAP approximation. For large σ₂ the exact partition function is linear in σ₂, whereas this is not necessarily the case for small σ₂ (see figure 1). In fact, in the absence of thresholds, the partition function is quadratic for small σ₂. Since TAP is based on a Taylor expansion in the weights up to second order, it is very accurate in the small weight region. However, as soon as the size of the weights exceeds the radius of convergence of this expansion (this occurs approximately at σ₂ = 1), the approximation diverges rapidly from the true value [9]. The mean field and third order approximations are both linear for large σ₂, which prevents them from crossing the true partition function, which would violate the bound. In fact, both approximations are quite close to the true partition function. For small weights (σ₂ < 1), however, we see that the third order bound is much closer to the exact curved form than mean field is.
5.2 Sigmoid belief networks Figure 2: Histograms of the relative error for the toy network in the middle (mean field bound on the left, third order bound on the right). The error of the third order bound is roughly ten times smaller than the error of the mean field bound. Although a full optimisation of the variational parameters gives the tightest bound, it turns out that the computational complexity of this optimisation is quite large for sigmoid belief networks. Therefore, we use the mean field solution for μ_i (equation 11) instead. This can be justified since the most important error reduction is due to the use of the third order bound. From experimental results not shown here it is clear that a full optimisation has a share of only a few percent in the total gain. To assess the error made by the various approaches, we use the same toy problem as in [2] and [10]. The network has a top layer of two neurons, a middle layer of four neurons and a lower layer of six visibles (figure 2). All neurons of two successive layers are connected with weights pointing downwards. Weights and thresholds are drawn from a uniform distribution over [−1, 1].⁵ We compute the likelihood when all visibles are clamped to −1. Since the network is rather small, we can compute the exact likelihood to compare the lower bound with. In figure 2 a histogram of the relative error, 1 − log B / log L, is plotted for a thousand randomly generated networks. It is clear from the picture that for this toy problem the error is reduced by a factor of ten. For larger weights the effect is less, but still large enough to be valuable. For instance, if the weights are drawn from a uniform distribution over [−2, 2], the error reduces by about a factor of four on average and is always less than the mean field error. ⁵The original toy problem in [2] used a 0/1-coding for the neuron activity.
To be able to compare the results, we transform the weights and thresholds to the −1/+1-coding used in this article. 6 Conclusions We showed a procedure to find any odd order polynomial bound for the exponential function. A 2k−1 order polynomial bound has k variational parameters. For the third order bound these are μ and λ. We can use this result to derive a bound on the partition function, where the variational parameters can be seen as energy functions for probability distributions. If we choose those distributions to be factorised, we have (N+1)k new variational parameters. Since the approximating function is a bound, we may maximise it with respect to all these parameters. In this article we restricted ourselves to the third order bound, although an extension to any odd order bound is possible. Third order is the next higher order bound after naive mean field. We showed that this bound is strictly better than the mean field bound and tends to the TAP approximation for small weights. For larger weights, however, the TAP approximation crosses the partition function and violates the bounding properties. We saw that the third order bound gives an enormous improvement compared to mean field. Our results are comparable to those obtained by the structured approach in [10]. The choice between third order and variational structures, however, is not exclusive. We expect that a combination of both methods is a promising research direction to obtain the tightest tractable bound. Acknowledgements This research is supported by the Technology Foundation STW, applied science division of NWO, and the technology programme of the Ministry of Economic Affairs. References [1] J. Pearl. Probabilistic Reasoning in Intelligent Systems, chapter 8.2.1, pages 387–390. Morgan Kaufmann, San Francisco, 1988. [2] L.K. Saul, T.S. Jaakkola, and M.I. Jordan. Mean field theory for sigmoid belief networks. Technical Report 1, Computational Cognitive Science, 1995. [3] R. Neal.
Connectionist learning of belief networks. Artificial Intelligence, 56:71–113, 1992. [4] D. Ackley, G. Hinton, and T. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9:147–169, 1985. [5] C. Peterson and J. Anderson. A mean field theory learning algorithm for neural networks. Complex Systems, 1:995–1019, 1987. [6] D.J. Thouless, P.W. Anderson, and R.G. Palmer. Solution of 'solvable model of a spin glass'. Philosophical Magazine, 35(3):593–601, 1977. [7] H.J. Kappen and F.B. Rodriguez. Boltzmann machine learning using mean field theory and linear response correction. In M.S. Kearns, S.A. Solla, and D.A. Cohn, editors, Advances in Neural Information Processing Systems, volume 11, pages 280–286. MIT Press, 1999. [8] D. Sherrington and S. Kirkpatrick. Solvable model of a spin-glass. Physical Review Letters, 35(26):1793–1796, 1975. [9] M.A.R. Leisink and H.J. Kappen. Validity of TAP equations in neural networks. In ICANN 99, volume 1, pages 425–430, ISBN 0852967217, 1999. Institution of Electrical Engineers, London. [10] D. Barber and W. Wiegerinck. Tractable variational structures for approximating graphical models. In M.S. Kearns, S.A. Solla, and D.A. Cohn, editors, Advances in Neural Information Processing Systems, volume 11, pages 183–189. MIT Press, 1999.
Tree-Based Modeling and Estimation of Gaussian Processes on Graphs with Cycles Martin J. Wainwright, Erik B. Sudderth, and Alan S. Willsky Laboratory for Information and Decision Systems Department of Electrical Engineering and Computer Science Massachusetts Institute of Technology Cambridge, MA 02139 {mjwain, esuddert, willsky}@mit.edu Abstract We present the embedded trees algorithm, an iterative technique for estimation of Gaussian processes defined on arbitrary graphs. By exactly solving a series of modified problems on embedded spanning trees, it computes the conditional means with an efficiency comparable to or better than other techniques. Unlike other methods, the embedded trees algorithm also computes exact error covariances. The error covariance computation is most efficient for graphs in which removing a small number of edges reveals an embedded tree. In this context, we demonstrate that sparse loopy graphs can provide a significant increase in modeling power relative to trees, with only a minor increase in estimation complexity. 1 Introduction Graphical models are an invaluable tool for defining and manipulating probability distributions. In modeling stochastic processes with graphical models, two basic problems arise: (i) specifying a class of graphs with which to model or approximate the process; and (ii) determining efficient techniques for statistical inference. In fact, there exists a fundamental tradeoff between the expressive power of a graph and the tractability of statistical inference. At one extreme are tree-structured graphs: although they lead to highly efficient algorithms for estimation [1, 2], their modeling power is often limited. The addition of edges to the graph tends to increase modeling power, but also introduces loops that necessitate the use of more sophisticated and costly techniques for estimation.
In areas like coding theory, artificial intelligence, and speech processing [3, 1], graphical models typically involve discrete-valued random variables. However, in domains such as image processing, control, and oceanography [2, 4, 5], it is often more appropriate to consider random variables with a continuous distribution. In this context, Gaussian processes on graphs are of great practical significance. Moreover, the Gaussian case provides a valuable setting for developing an understanding of estimation algorithms [6, 7]. The focus of this paper is the estimation and modeling of Gaussian processes defined on graphs with cycles. We first develop an estimation algorithm that is based on exploiting trees embedded within the loopy graph. Given a set of noisy measurements, this embedded trees (ET) algorithm computes the conditional means with an efficiency comparable to or better than other techniques. Unlike other methods, the ET algorithm also computes exact error covariances at each node. In many applications, these error statistics are as important as the conditional means. We then demonstrate by example that relative to tree models, graphs with a small number of loops can lead to substantial improvements in modeling fidelity without a significant increase in estimation complexity. 2 Linear estimation fundamentals 2.1 Problem formulation Consider a Gaussian stochastic process x ∼ N(0, P) that is Markov with respect to an undirected graph G. Each node in G corresponds to a subvector x_i of x. We will refer to x_i as the state variable for the ith node, and its length as the state dimension. By the Hammersley–Clifford Theorem [8], P^{-1} inherits a sparse structure from G. If it is partitioned into blocks according to the state dimensions, the (i,j)th block can be nonzero only if there is an edge between nodes i and j. Let y = Cx + v, v ∼ N(0, R), be a set of noisy observations.
Without loss of generality, we assume that the subvectors y_i of the observations are conditionally independent given the state x. For estimation purposes, we are interested in p(x_i | y), the marginal distribution of the state at each node conditioned on the noisy observations. Standard formulas exist for the computation of p(x | y) ∼ N(x̂, P̂):

x̂ = P̂ C^T R^{-1} y,  P̂ = [P^{-1} + C^T R^{-1} C]^{-1}   (1)

The conditional error covariances P̂_i are the block diagonal elements of the full error covariance P̂, where the block sizes are equal to the state dimensions. 2.2 Exploiting graph structure When G is tree structured, both the conditional means and error covariances can be computed by a direct and very efficient O(d³N) algorithm [2]. Here d is the maximal state dimension at any node, and N is the total number of nodes. This algorithm is a generalization of classic Kalman smoothing algorithms for time series, and involves passing means and covariances to and from a node chosen as the root. For graphs with cycles, calculating the full error covariance P̂ by brute force matrix inversion would, in principle, provide the conditional means and error variances. Since the computational complexity of matrix inversion is O([dN]³), this proposal is not practically feasible in many applications, such as image processing, where N may be on the order of 10⁵. This motivates the development of iterative techniques for linear estimation on graphs with cycles. Recently, two groups [6, 7] have analyzed Pearl's belief propagation [1] in application to Gaussian processes defined on loopy graphs. For Gaussians on trees, belief propagation produces results equivalent to the Kalman smoother of Chou et al. [2]. For graphs with cycles, these groups showed that when belief propagation converges, it computes the correct conditional means, but that the error covariances are incorrect.
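The information-form posterior in equation (1) agrees with the classical Gaussian conditioning formulas, which is easy to confirm numerically. A minimal numpy sketch (the dimensions and random matrices are arbitrary illustrative choices, not tied to any particular graph):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 4
A = rng.normal(size=(n, n))
P = A @ A.T + n * np.eye(n)        # prior covariance of x (positive definite)
C = rng.normal(size=(m, n))        # observation matrix
B = rng.normal(size=(m, m))
R = B @ B.T + m * np.eye(m)        # observation noise covariance
y = rng.normal(size=m)

# Information-form posterior, eq. (1)
Phat = np.linalg.inv(np.linalg.inv(P) + C.T @ np.linalg.inv(R) @ C)
xhat = Phat @ C.T @ np.linalg.inv(R) @ y

# Classical conditioning of the jointly Gaussian (x, y)
S = C @ P @ C.T + R                # covariance of y
G = P @ C.T @ np.linalg.inv(S)     # gain matrix
assert np.allclose(xhat, G @ y)
assert np.allclose(Phat, P - G @ C @ P)
```

The equivalence of the two forms is the matrix inversion lemma; the information form is the one that exposes the sparsity of P^{-1} exploited later.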
The complexity per iteration of belief propagation on loopy graphs is O(d³N), where one iteration corresponds to updating each message once. Figure 1. Embedded trees produced by two different cutting matrices K_t for a nearest-neighbor grid (observation nodes not shown); the tree-structured inverse covariances are P^{-1}_{tree(1)} = P^{-1} + K_1 and P^{-1}_{tree(2)} = P^{-1} + K_2. It is important to note that conditional means can be efficiently calculated using techniques from numerical linear algebra [9]. In particular, it can be seen from equation (1) that computing the conditional mean x̂ is equivalent to computing the product of a matrix inverse and a vector. Given the sparsity of P^{-1}, iterative techniques like conjugate gradient [9] can be used to compute the mean with associated cost O(dN) per iteration. However, like belief propagation, such techniques compute only the means and not the error covariances. 3 Embedded trees algorithm 3.1 Calculation of means In this section, we present an iterative algorithm for computing both the conditional means and error covariances of a Gaussian process defined on any graph. Central to the algorithm is the operation of cutting edges from a loopy graph to reveal an embedded tree. Standard tree algorithms [2] can be used to exactly solve the modified problem, and the results are used in a subsequent iteration. For a Gaussian process on a graph, the operation of removing edges corresponds to modifying the inverse covariance matrix. Specifically, we apply a matrix splitting

P^{-1} + C^T R^{-1} C = P^{-1}_{tree(t)} − K_t + C^T R^{-1} C

where K_t is a symmetric cutting matrix chosen to ensure that P^{-1}_{tree(t)} corresponds to a valid tree-structured inverse covariance matrix. This matrix splitting allows us to define a sequence of iterates {x̂^n} by the recursion:

[P^{-1}_{tree(t(n))} + C^T R^{-1} C] x̂^n = K_{t(n)} x̂^{n-1} + C^T R^{-1} y

Here t(n) indexes the embedded tree used in the nth iteration. For example, Figure 1 shows two of the many spanning trees embedded in a nearest-neighbor grid.
When the matrix (P^{-1}_{tree(t(n))} + C^T R^{-1} C) is positive definite, it is possible to solve for the next iterate x̂^n in terms of the data y and the previous iterate. Thus, given some starting point x̂⁰, we can generate a sequence of iterates {x̂^n} by the recursion

x̂^n = M^{-1}_{t(n)} [K_{t(n)} x̂^{n-1} + C^T R^{-1} y]   (2)

where M_{t(n)} ≜ (P^{-1}_{tree(t(n))} + C^T R^{-1} C). By comparing equation (2) to equation (1), it can be seen that computing the nth iterate corresponds to a linear-Gaussian problem, which can be solved efficiently and directly with standard tree algorithms [2]. 3.2 Convergence of means Before stating some convergence results, recall that for any matrix A, the spectral radius is defined as ρ(A) ≜ max_λ |λ|, where λ ranges over the eigenvalues of A. Proposition 1. Let x̂ be the conditional mean of the original problem on the loopy graph, and consider the sequence of iterates {x̂^n} generated by equation (2). Then for any starting point, x̂ is the unique fixed point of the recursion, and the error e^n ≜ x̂^n − x̂ obeys the dynamics

e^n = [∏_{j=1}^{n} M^{-1}_{tree(t(j))} K_{t(j)}] e⁰   (3)

In a typical implementation of the algorithm, one cycles through the embedded trees in some fixed order, say t = 1, ..., T. In this case, the convergence of the algorithm can be analyzed in terms of the product matrix A ≜ ∏_{j=1}^{T} M^{-1}_{tree(j)} K_j. Proposition 2. Convergence of the ET algorithm is governed by the spectral radius of A. In particular, if ρ(A) > 1, then the algorithm will not converge, whereas if ρ(A) < 1, then (x̂^n − x̂) → 0 geometrically at rate γ ≜ ρ(A)^{1/T}. Note that the cutting matrices K must be chosen to guarantee not only that P^{-1}_{tree} is tree-structured but also that M ≜ (P^{-1}_{tree} + C^T R^{-1} C) is positive definite. The following theorem, adapted from results in [10], gives conditions guaranteeing the validity and convergence of the ET algorithm when cutting to a single tree. Theorem 1. Define Q ≜ P^{-1} + C^T R^{-1} C, and M ≜ Q + K. Suppose the cutting matrix K is symmetric and positive semidefinite.
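Stripped of the graph structure, the recursion (2) is a stationary matrix splitting iteration, and its convergence to the exact conditional mean can be demonstrated with dense linear algebra. The sketch below is an illustration only: the matrices are random rather than derived from a graph, a single "tree" is used, and dense solves stand in for the O(d³N) tree solver that the actual algorithm would apply to M:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
A = rng.normal(size=(n, n))
Q = A @ A.T + n * np.eye(n)        # plays the role of Q = P^-1 + C^T R^-1 C
b = rng.normal(size=n)             # plays the role of b = C^T R^-1 y
xstar = np.linalg.solve(Q, b)      # exact conditional mean, as in eq. (1)

# Symmetric PSD cutting matrix (Theorem 1), low rank as if a few edges were cut
U = rng.normal(size=(n, 2))
K = U @ U.T
M = Q + K                          # tree-structured system in the real algorithm

rho = np.max(np.abs(np.linalg.eigvals(np.linalg.solve(M, K))))
assert rho < 1                     # guaranteed by Theorem 1 for PSD K

x = np.zeros(n)
for _ in range(200):
    x = np.linalg.solve(M, K @ x + b)   # eq. (2); a tree solver in practice
assert np.allclose(x, xstar)
```

For positive semidefinite K the generalized eigenvalues of (K, Q+K) lie in [0, 1), which is why the spectral radius check always passes here.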
Then we are guaranteed that ρ(M^{-1}K) < 1. In particular, we have the bounds:

λ_max(K) / (λ_max(K) + λ_max(Q)) ≤ ρ(M^{-1}K) ≤ λ_max(K) / (λ_max(K) + λ_min(Q))   (4)

It should be noted that the conditions of this theorem are sufficient, but by no means necessary, to guarantee convergence of the ET algorithm. In particular, we find that indefinite cutting matrices often lead to faster convergence. Furthermore, Theorem 1 does not address the superior performance typically achieved by cycling through several embedded trees. Gaining a deeper theoretical understanding of these phenomena is an interesting open question. 3.3 Calculation of error covariances Although there exist a variety of iterative algorithms for computing the conditional mean of a linear-Gaussian problem, none of these methods correctly compute error covariances at each node. We show here that the ET algorithm can efficiently compute these covariances in an iterative fashion. For many applications (e.g., oceanography [5]), these error statistics are as important as the estimates. We assume for simplicity in notation that x̂⁰ = 0 and then expand equation (2) to yield that for any iteration x̂^n = [F^n + M^{-1}_{t(n)}] C^T R^{-1} y, where the matrix F^n satisfies the recursion

F^n = M^{-1}_{t(n)} K_{t(n)} [F^{n-1} + M^{-1}_{t(n-1)}]   (5)

with the initial condition F¹ = 0. It is straightforward to show that whenever the recursion for the conditional means in equation (2) converges, the matrix sequence {F^n + M^{-1}_{t(n)}} converges to the full error covariance P̂. Moreover, the cutting matrices K are typically of low rank, say O(E), where E is the number of cut edges. On this basis, it can be shown that each F^n can be decomposed as a sum of O(E) rank 1 matrices. Figure 2. (a) Convergence rates for computing conditional means (normalized ℓ₂ error) for conjugate gradient, embedded trees, and belief propagation. (b) Convergence rate of the ET algorithm for computing error variances. Directly updating this low-rank decomposition of F^n from that of F^{n-1} requires O(d³E²N) operations. However, an efficient restructuring of this update requires only O(d³EN) operations [11]. The diagonal blocks of the low-rank representation may be easily extracted and added to the diagonal blocks of M^{-1}_{t(n)}, which are computed by standard tree smoothers. All together, we may obtain these error variances in O(d³EN) operations per iteration. Thus, the computation of error variances will be particularly efficient for graphs where the number of edges E that must be cut is small compared to the total number of nodes N. 3.4 Results We have applied the algorithm to a variety of graphs, ranging from graphs with single loops to densely connected MRFs on grids. Figure 2(a) compares the rates of convergence for three algorithms: conjugate gradient (CG), embedded trees (ET), and belief propagation (BP) on a 20 × 20 nearest-neighbor grid. The ET algorithm employed two embedded trees analogous to those shown in Figure 1. We find that CG is usually fastest, and can exhibit supergeometric convergence. In accordance with Proposition 2, the ET algorithm converges geometrically. Either BP or ET can be made to converge faster, depending on the choice of clique potentials. However, we have not experimented with optimizing the performance of ET by adaptively choosing edges to cut. Figure 2(b) shows that in contrast to CG and BP, the ET algorithm can also be used to compute the conditional error variances, where the convergence rate is again geometric. 4 Modeling using graphs with cycles 4.1 Issues in model design A variety of graphical structures may be used to approximate a given stochastic process.
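The covariance recursion (5) can likewise be checked with dense matrices for the single-tree case. As before this is only an illustrative sketch with random matrices: explicit inverses stand in for the tree solver, and the low-rank bookkeeping that makes the real update O(d³EN) is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = rng.normal(size=(n, n))
Q = A @ A.T + n * np.eye(n)        # Q = P^-1 + C^T R^-1 C
Phat = np.linalg.inv(Q)            # exact error covariance, eq. (1)

U = rng.normal(size=(n, 2))
K = U @ U.T                        # PSD cutting matrix of rank O(E)
Minv = np.linalg.inv(Q + K)        # M^-1 for the single embedded tree

# Recursion (5) with one tree: F^n = M^-1 K [F^{n-1} + M^-1], F^1 = 0
F = np.zeros((n, n))
for _ in range(300):
    F = Minv @ K @ (F + Minv)

assert np.allclose(F + Minv, Phat)  # F^n + M^-1 converges to P-hat
```

At the fixed point F + M^{-1} = (I − M^{-1}K)^{-1} M^{-1} = (M − K)^{-1} = Q^{-1}, which is exactly the claimed limit.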
For example, perhaps the simplest model for a 1-D time series is a Markov chain, as shown in Figure 3(a). However, a high-order model may be required to adequately capture long-range correlations. The associated increase in state dimension leads to inefficient estimation. Figure 3(b) shows an alternative model structure. Here, additional "coarse scale" nodes have been added to the graph which are not directly linked to any measurements. These nodes are auxiliary variables created to explain the "fine scale" stochastic process of interest. If properly designed, the resulting tree structure will capture long-range correlations without the increase in state dimension of a higher-order Markov model.

Figure 3: (a) Markov chain. (b) Multiscale tree model. (c) Tree augmented by extra edge. (d) Desired covariance P. (e) Error |P - P_tree| between desired covariance and realized tree covariance. (f) Error |P - P_loop| between desired covariance and covariance realized with loopy graph.

In previous work, our group has developed efficient algorithms for estimation and stochastic realization using such multiscale tree models [2, 4, 5, 12]. The gains provided by multiscale models are especially impressive when quadtrees are used to approximate two-dimensional Markov random fields. While statistical inference on MRFs is notoriously difficult, estimation on quadtrees remains extremely efficient. The most significant weakness of tree models is boundary artifacts. That is, leaf nodes that are adjacent in the original process may be widely separated in the tree structure (see Figure 3(b)). As a result, dependencies between these nodes may be inadequately modeled, causing blocky discontinuities. Increasing the state dimension d of the hidden nodes will reduce blockiness, but will also reduce estimation efficiency, which is O(d^3 N) in total.
One potential solution is to add edges between pairs of fine scale nodes where tree artifacts are likely to arise, as shown in Figure 3(c). Such edges should be able to account for short-range dependency neglected by a tree model. Furthermore, optimal inference for such "near-tree" models using the ET algorithm will still be extremely efficient.

4.2 Application to multiscale modeling

Consider a one-dimensional process of length 32 with exact covariance P shown in Figure 3(d). We approximate this process using two different graphical models, a multiscale tree and a "near-tree" containing an additional edge between two fine-scale nodes across a tree boundary (see Figure 3(c)). In both models, the state dimension at each node is constrained to be 2; therefore, the finest scale contains 16 nodes to model all 32 process points. Figure 3(e) shows the absolute error |P - P_tree| for the tree model, where realization was performed by the scale-recursive algorithm presented in [12]. The tree model matches the desired process statistics relatively well except at the center, where the tree structure causes a boundary artifact. Figure 3(f) shows the absolute error |P - P_loop| for a graph obtained by adding a single edge across the largest fine-scale tree boundary. The addition reduces the peak error by 60%, a substantial gain in modeling fidelity. If the ET algorithm is implemented by cutting to two different embedded trees, it converges extremely rapidly with rate \gamma = 0.11.

5 Discussion

This paper makes contributions to both the estimation and modeling of Gaussian processes on graphs. First, we developed the embedded trees algorithm for estimation of Gaussian processes on arbitrary graphs. In contrast to other techniques, our algorithm computes both means and error covariances. Even on densely connected graphs, our algorithm is comparable to or better than other techniques for computing means.
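For intuition about the geometric convergence rate, the eigenvalue bounds of equation (4) can be spot-checked numerically. The following sketch uses a hypothetical random model in which a dense symmetric positive definite matrix Q stands in for the tree-structured part of the splitting and M = Q + K, with K a low-rank positive semidefinite cutting matrix; this setup is an assumption for illustration, not the authors' tree-based implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30

# Q: symmetric positive definite, standing in for the tree-structured part
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)

# K: symmetric positive semidefinite "cutting" matrix of low rank,
# mimicking a small number E of cut edges
B = rng.standard_normal((n, 3))
K = B @ B.T

M = Q + K  # splitting chosen so that M - K = Q

# Spectral radius governing the convergence rate of the iteration
rho = np.abs(np.linalg.eigvals(np.linalg.solve(M, K))).max()

lam_K = np.linalg.eigvalsh(K).max()
lam_Q = np.linalg.eigvalsh(Q)
lower = lam_K / (lam_K + lam_Q.max())
upper = lam_K / (lam_K + lam_Q.min())

assert rho < 1                               # guaranteed convergence
assert lower - 1e-9 <= rho <= upper + 1e-9   # bounds of equation (4)
```

Because K has low rank, most eigenvalues of M^{-1}K are zero, which is consistent with the low-rank structure exploited by the error-covariance recursion.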
The error covariance computation is especially efficient for graphs in which cutting a small number of edges reveals an embedded tree. In this context, we have shown that modeling with sparsely connected loopy graphs can lead to substantial gains in modeling fidelity, with a minor increase in estimation complexity. From the results of this paper arise a number of fundamental questions about the trade-off between modeling fidelity and estimation complexity. In order to address these questions, we are currently working to develop tighter bounds on the convergence rate of the algorithm, and also considering techniques for optimally selecting edges to be removed. On the modeling side, we are expanding on previous work for trees [12] in order to develop a theory of stochastic realization for processes on graphs with cycles. Lastly, although the current paper has focused on Gaussian processes, similar concepts can be developed for discrete-valued processes.

Acknowledgments

This work was partially funded by ONR grant N00014-00-1-0089 and AFOSR grant F49620-98-1-0349; M.W. supported by NSERC 1967 fellowship, and E.S. by NDSEG fellowship.

References

[1] J. Pearl. Probabilistic reasoning in intelligent systems. Morgan Kaufmann, 1988.
[2] K. Chou, A. Willsky, and R. Nikoukhah. Multiscale systems, Kalman filters, and Riccati equations. IEEE Trans. AC, 39(3):479-492, March 1994.
[3] R. G. Gallager. Low-density parity check codes. MIT Press, Cambridge, MA, 1963.
[4] M. Luettgen, W. Karl, and A. Willsky. Efficient multiscale regularization with application to optical flow. IEEE Trans. Im. Proc., 3(1):41-64, Jan. 1994.
[5] P. Fieguth, W. Karl, A. Willsky, and C. Wunsch. Multiresolution optimal interpolation of satellite altimetry. IEEE Trans. Geo. Rem., 33(2):280-292, March 1995.
[6] P. Rusmevichientong and B. Van Roy. An analysis of turbo decoding with Gaussian densities. In NIPS 12, pages 575-581. MIT Press, 2000.
[7] Y. Weiss and W. T. Freeman.
Correctness of belief propagation in Gaussian graphical models of arbitrary topology. In NIPS 12, pages 673-679. MIT Press, 2000.
[8] J. Besag. Spatial interaction and the statistical analysis of lattice systems. J. Roy. Stat. Soc. Series B, 36:192-236, 1974.
[9] J. W. Demmel. Applied numerical linear algebra. SIAM, Philadelphia, 1997.
[10] O. Axelsson. Bounds of eigenvalues of preconditioned matrices. SIAM J. Matrix Anal. Appl., 13:847-862, July 1992.
[11] E. Sudderth, M. Wainwright, and A. Willsky. Embedded trees for modeling and estimation of Gaussian processes on graphs with cycles. In preparation, Dec. 2000.
[12] A. Frakt and A. Willsky. Computationally efficient stochastic realization for internal multiscale autoregressive models. Mult. Sys. and Sig. Proc. To appear.
2000
Learning winner-take-all competition between groups of neurons in lateral inhibitory networks Xiaohui Xie, Richard Hahnloser and H. Sebastian Seung E25-210, MIT, Cambridge, MA 02139 {xhxie|rh|seung}@mit.edu Abstract It has long been known that lateral inhibition in neural networks can lead to a winner-take-all competition, so that only a single neuron is active at a steady state. Here we show how to organize lateral inhibition so that groups of neurons compete to be active. Given a collection of potentially overlapping groups, the inhibitory connectivity is set by a formula that can be interpreted as arising from a simple learning rule. Our analysis demonstrates that such inhibition generally results in winner-take-all competition between the given groups, with the exception of some degenerate cases. In a broader context, the network serves as a particular illustration of the general distinction between permitted and forbidden sets, which was introduced recently. From this viewpoint, the computational function of our network is to store and retrieve memories as permitted sets of coactive neurons. In traditional winner-take-all networks, lateral inhibition is used to enforce a localized, or "grandmother cell" representation in which only a single neuron is active [1, 2, 3, 4]. When used for unsupervised learning, winner-take-all networks discover representations similar to those learned by vector quantization [5]. Recently many research efforts have focused on unsupervised learning algorithms for sparsely distributed representations [6, 7]. These algorithms lead to networks in which groups of multiple neurons are coactivated to represent an object. Therefore, it is of great interest to find ways of using lateral inhibition to mediate winner-take-all competition between groups of neurons, as this could be useful for learning sparsely distributed representations. In this paper, we show how winner-take-all competition between groups of neurons can be learned.
Given a collection of potentially overlapping groups, the inhibitory connectivity is set by a simple formula that can be interpreted as arising from an online learning rule. To show that the resulting network functions as advertised, we perform a stability analysis. If the strength of inhibition is sufficiently great, and the group organization satisfies certain conditions, we show that the only sets of neurons that can be coactivated at a stable steady state are the given groups and their subsets. Because of the competition between groups, only one group can be activated at a time. In general, the identity of the winning group depends on the initial conditions of the network dynamics. If the groups are ordered by the aggregate input that each receives, the possible winners are those above a cutoff that is set by inequalities to be specified.

1 Basic definitions

Let m groups of neurons be given, where group membership is specified by the matrix

\xi_i^a = 1 if the ith neuron is in the ath group, 0 otherwise   (1)

We will assume that every neuron belongs to at least one group¹, and every group contains at least one neuron. A neuron is allowed to belong to more than one group, so that the groups are potentially overlapping. The inhibitory synaptic connectivity of the network is defined in terms of the group membership,

J_{ij} = \prod_{a=1}^{m} (1 - \xi_i^a \xi_j^a) = 0 if i and j both belong to a group, 1 otherwise   (2)

One can imagine this pattern of connectivity arising by a simple learning mechanism. Suppose that all elements of J are initialized to be unity, and the groups are presented sequentially as binary vectors \xi^1, ..., \xi^m. The ath group is learned through the update

J_{ij} \leftarrow J_{ij} (1 - \xi_i^a \xi_j^a)   (3)

In other words, if neurons i and j both belong to group a, then the connection between them is removed. After presentation of all m groups, this leads to Eq. (2).
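The learning rule and the closed form of Eq. (2) are easy to verify directly. A minimal NumPy sketch with a hypothetical membership matrix (5 neurons, 2 overlapping groups; chosen here for illustration only):

```python
import numpy as np

# Hypothetical groups: rows of xi are the binary membership vectors
xi = np.array([[1, 1, 1, 0, 0],
               [0, 0, 1, 1, 1]])
m, N = xi.shape

# Learning rule (3): start from uniform inhibition J = 1 and, for each
# presented group, remove the connection between every pair of its members
J = np.ones((N, N))
for a in range(m):
    J *= 1 - np.outer(xi[a], xi[a])

# Eq. (2): J_ij = 0 iff neurons i and j share at least one group
shares_group = (xi.T @ xi) > 0
assert np.array_equal(J, (~shares_group).astype(float))
```

Neuron 2 belongs to both groups, so after learning it inhibits no one and no one inhibits it, while the cross-group pairs (e.g., neurons 0 and 4) retain their inhibitory connection.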
At the start of the learning process, the initial state of J corresponds to uniform inhibition, which is known to implement winner-take-all competition between individual neurons. It will be seen that, as inhibitory connections are removed during learning, the competition evolves to mediate competition between groups of neurons rather than individual neurons. The dynamics of the network is given by

dx_i/dt + x_i = [b_i + \alpha x_i - \beta \sum_j J_{ij} x_j]^+   (4)

where [z]^+ = max{z, 0} denotes rectification, \alpha > 0 the strength of self-excitation, and \beta > 0 the strength of lateral inhibition. Equivalently, the dynamics can be written in matrix-vector form as \dot{x} + x = [b + Wx]^+, where W = \alpha I - \beta J includes both self-excitation and lateral inhibition. The state of the network is specified by the vector x, and the external input by the vector b. A vector v is said to be nonnegative, v >= 0, if all of its components are nonnegative. The nonnegative orthant is the set of all nonnegative vectors. It can be shown that any trajectory of Eq. (4) starting in the nonnegative orthant remains there. Therefore, for simplicity we will consider trajectories that are confined to the nonnegative orthant x >= 0. However, we will consider input vectors b whose components are of arbitrary sign.

¹This condition can be relaxed, but is kept for simplicity.

2 Global stability

The goal of this paper is to characterize the steady state response of the dynamics Eq. (4) to an input b that is constant in time. For this to be a sensible goal, we need some guarantee that the dynamics converges to a steady state, and does not diverge to infinity. This is provided by the following theorem.

Theorem 1 Consider the network Eq. (4). The following statements are equivalent:
1. For any input b, there is a nonempty set of steady states that is globally asymptotically stable, except for initial conditions in a set of measure zero.
2. The strength \alpha of self-excitation is less than one.
Proof sketch:
(2) => (1): If \alpha < 1, the function (1/2)(1 - \alpha) x^T x + (\beta/2) x^T J x - b^T x is bounded below and radially unbounded in the nonnegative orthant. Furthermore it is nonincreasing under the dynamics Eq. (4), and constant only at steady states. Therefore it is a Lyapunov function, and its local minima are globally asymptotically stable.
(1) => (2): Suppose that (2) is false. If \alpha >= 1, it is possible to choose b and an initial condition for x so that only one neuron is active, and the activity of this neuron diverges, so that (1) is contradicted.

3 Relationship between groups and permitted sets

In this section we characterize the conditions under which the lateral inhibition of Eq. (4) enforces winner-take-all competition between the groups of neurons. That is, the only sets of neurons that can be coactivated at a stable steady state are the groups and their subsets. This is done by performing a linear stability analysis, which allows us to classify active sets using the following definition.

Definition 1 If a set of neurons can be coactivated by some input at an asymptotically stable steady state, it is called permitted. Otherwise, it is forbidden.

Elsewhere we have shown that whether a set is permitted or forbidden depends on the submatrix of synaptic connections between neurons in that set [1]. If the largest eigenvalue of the submatrix is less than unity, then the set is permitted. Otherwise, it is forbidden. We have also proved that any superset of a forbidden set is forbidden, while any subset of a permitted set is also permitted. Our goal in constructing the network (4) is to make the groups and their subsets the only permitted sets of the network. To determine whether this is the case, we must answer two questions. First, are all groups and their subsets permitted? Second, are all permitted sets contained in groups? The first question is answered by the following lemma.

Lemma 1 All groups and their subsets are permitted.
Proof: If a set is contained in a group, then there is no lateral inhibition between the neurons in the set. Provided that \alpha < 1, all eigenvalues of the submatrix are less than unity, and the set is permitted.

The answer to the second question, whether all permitted sets are contained in groups, is not necessarily affirmative. For example, consider the network defined by the group membership matrix \xi = {(1, 1, 0), (0, 1, 1), (1, 0, 1)}. Since every pair of neurons belongs to some group, there is no lateral inhibition (J = 0), which means that there are no forbidden sets. As a result, (1, 1, 1) is a permitted set, but obviously it is not contained in any group. Let's define a spurious permitted set to be one that is not contained in any group. For example, (1, 1, 1) is a spurious permitted set in the above example. To eliminate all the spurious permitted sets in the network, certain conditions on the group membership matrix \xi have to be satisfied.

Definition 2 The membership \xi is degenerate if there exists a set of n >= 3 neurons that is not contained in any group, but all of its subsets with n - 1 neurons belong to some group. Otherwise, \xi is called nondegenerate.

For example, \xi = {(1, 1, 0), (0, 1, 1), (1, 0, 1)} is degenerate. Using this definition, we can formulate the following theorem.

Theorem 2 The neural dynamics Eq. (4) with \alpha < 1 and \beta > 1 - \alpha has a spurious permitted set if and only if \xi is degenerate.

Before we prove this theorem, we will need the following lemma.

Lemma 2 If \beta > 1 - \alpha, any set containing two neurons not in the same group is forbidden under the neural dynamics Eq. (4).

Proof sketch: We will start by analyzing a very simple case, where there are two neurons belonging to two different groups. Let the group membership be {(1, 0), (0, 1)}. In this case, W = {(\alpha, -\beta), (-\beta, \alpha)}. This matrix has eigenvectors (1, 1) and (1, -1) and eigenvalues \alpha - \beta and \alpha + \beta.
Since \alpha < 1 for global stability and \beta > 0 by definition, the (1, 1) mode is always stable. But if \beta > 1 - \alpha, the (1, -1) mode is unstable. This means that it is impossible for the two neurons to be coactivated at a stable steady state. Since any superset of a forbidden set is also forbidden, the general result of the lemma follows.

Proof of Theorem 2 (sketch):
<=: If \xi is degenerate, there must exist a set of n >= 3 neurons that is not contained in any group, but all of its subsets with n - 1 neurons belong to some group. There is no lateral inhibition between these n neurons, since every pair of neurons belongs to some group. Thus the set containing all n neurons is permitted and spurious.
=>: If there exists a spurious permitted set P, we need to prove that \xi must be degenerate. We will prove this by contradiction and induction. Let's assume \xi is nondegenerate. P must contain at least 2 neurons, since any one-neuron subset is permitted and not spurious. By Lemma 2, these 2 neurons must be contained in some group, or else the set is forbidden. Thus P must contain at least 3 neurons to be spurious, and any pair of neurons in P belongs to some group by Lemma 2. If P contains at least n neurons and all of its subsets with n - 1 neurons belong to some group, then the set of these n neurons must belong to some group, otherwise \xi is degenerate. Thus P must contain at least n + 1 neurons to be spurious, and all its subsets of n neurons belong to some group. By induction, this implies that P must contain all neurons in the network, in which case P is either forbidden or nonspurious. This contradicts the assumption that P is a spurious permitted set.

From Theorem 2, we easily obtain the following result.

Corollary 1 If every group contains some neuron that does not belong to any other group, then there is no spurious permitted set.
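The eigenvalue criterion for permitted sets is simple to apply in code. A sketch, using a hypothetical membership matrix with two overlapping groups (the parameter values and groups are chosen for illustration, not taken from the paper's experiments):

```python
import numpy as np

alpha, beta = 0.4, 1.0   # alpha < 1 and beta > 1 - alpha (strong inhibition)

def permitted(members, xi):
    """Eigenvalue test: a set is permitted iff the largest eigenvalue of the
    submatrix of W = alpha*I - beta*J restricted to the set is below one."""
    N = xi.shape[1]
    J = np.prod(1 - xi[:, :, None] * xi[:, None, :], axis=0)
    W = alpha * np.eye(N) - beta * J
    sub = W[np.ix_(members, members)]
    return np.linalg.eigvalsh(sub).max() < 1

# Hypothetical membership: groups {0, 1} and {1, 2}
xi = np.array([[1, 1, 0],
               [0, 1, 1]])
assert permitted([0, 1], xi)         # subset of a group
assert not permitted([0, 2], xi)     # spans two groups, hence forbidden
assert not permitted([0, 1, 2], xi)  # superset of a forbidden set
```

The last two assertions illustrate Lemma 2 and the superset property: neurons 0 and 2 share no group, so every set containing both has an eigenvalue alpha + beta = 1.4 >= 1.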
4 The potential winners

We have seen that if \xi is nondegenerate, the active set must be contained in a group, provided that lateral inhibition is strong (\beta > 1 - \alpha). The group that contains the active set will be called the "winner" of the competition between groups. The identity of the winner depends on the input b, and also on the initial conditions of the dynamics. For a given input, we need to characterize which pattern could potentially be the winner. Suppose that the group inputs B^a = \sum_i [b_i]^+ \xi_i^a are distinct. Without loss of generality, we order the group inputs as B^1 > ... > B^m. Let's denote the largest input as b_max = max_i{b_i} and assume b_max > 0.

Theorem 3 For nonoverlapping groups, the top c groups with the largest group input could end up the winner depending on the initial conditions of the dynamics, where c is determined by the condition B^c >= (1 - \alpha) \beta^{-1} b_max > B^{c+1}.

Proof sketch: Suppose the ath group is the winner. For all neurons not in this group to be inactive, the self-consistency condition should read

\sum_i [b_i]^+ \xi_i^a >= ((1 - \alpha)/\beta) max_{j \notin a} {b_j}   (5)

If a group contains the neuron with the largest input, this condition can always be satisfied. Moreover, this group is always in the top c groups. For groups not containing the neuron with the largest input, this condition can be satisfied if and only if they are in the top c groups.

The winner-take-all competition described above holds only for the case of strong inhibition \beta > 1 - \alpha. On the other hand, if \beta is small, the competition will be weak and may not result in group-winner-take-all. In particular, if \beta < (1 - \alpha)/\lambda_max, where \lambda_max is the largest eigenvalue of -J, then the set of all neurons is permitted. Since every subset of a permitted set is permitted, that means there are no forbidden sets and the network is monostable. Hence, group-winner-take-all does not hold.
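Theorem 3 can be sanity-checked by integrating Eq. (4) directly. A minimal forward-Euler sketch with a hypothetical toy network of three nonoverlapping groups (the specific inputs and parameters are illustrative assumptions, not the authors' simulations):

```python
import numpy as np

alpha, beta = 0.4, 1.0   # alpha < 1, beta > 1 - alpha: strong inhibition
xi = np.array([[1, 1, 0, 0, 0, 0],   # three hypothetical nonoverlapping groups
               [0, 0, 1, 1, 0, 0],
               [0, 0, 0, 0, 1, 1]])
J = np.prod(1 - xi[:, :, None] * xi[:, None, :], axis=0)
b = np.array([1.0, 0.2, 0.5, 0.5, 0.1, 0.1])

# Theorem 3: potential winners are groups with B^a >= (1 - alpha) b_max / beta
B = xi @ np.maximum(b, 0)                       # group inputs B^a
potential = set(np.flatnonzero(B >= (1 - alpha) * b.max() / beta))
assert potential == {0, 1}                      # group 3 can never win

# Forward-Euler integration of Eq. (4) from small uniform activity
x = np.full(6, 0.01)
dt = 0.01
for _ in range(50000):
    x += dt * (-x + np.maximum(b + alpha * x - beta * (J @ x), 0.0))

winner = int(np.argmax(xi @ x))
assert winner in potential                      # realized winner is a potential one
assert np.all(x[xi[winner] == 0] < 1e-6)        # all other groups are silenced
```

At the steady state, each active neuron sits at the amplified value b_i/(1 - alpha), while all neurons outside the winning group are driven below threshold and decay to zero.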
If (1 - \alpha)/\lambda_max < \beta < 1 - \alpha, the network has forbidden sets, but the possibility of spurious permitted sets cannot be excluded.

5 Examples

Traditional winner-take-all network. This is a special case of our network with N groups, each containing one of the N neurons. Therefore, the group membership matrix \xi is the identity matrix, and J = 11^T - I, where 1 denotes the vector of all ones. According to Corollary 1, only one neuron is permitted to be active at a stable steady state, provided that \beta > 1 - \alpha. We refer to the active neuron as the "winner" of the competition mediated by the lateral inhibition. If we assume that the inputs b_i have distinct values, they can be ordered as b_1 > b_2 > ... > b_N, without loss of generality. According to Theorem 3, any of the neurons 1 to k can be the winner, where k is defined by b_k >= (1 - \alpha) \beta^{-1} b_1 > b_{k+1}. The winner depends on the initial condition of the network dynamics. In other words, any neuron whose input is greater than (1 - \alpha)/\beta times the largest input can end up the winner.

Topographic organization. Let the N neurons be organized into a ring, and let every set of d contiguous neurons be a group. d will be called the width. For example, in a network with N = 4 neurons and group width d = 2, the membership matrix is \xi = {(1, 1, 0, 0), (0, 1, 1, 0), (0, 0, 1, 1), (1, 0, 0, 1)}. This ring network is similar to the one proposed by Ben-Yishai et al. in the modeling of orientation tuning in visual cortex [8]. Unlike the WTA network, where all groups are non-overlapping and \xi is therefore always nondegenerate, in the ring network neurons are shared among different groups, and \xi will become degenerate when the width of the groups is large. To guarantee that all permitted sets are subsets of some group, we have the following corollary, which can be derived from Theorem 2.

Figure 1: Permitted sets of the ring network.
The ring network is comprised of 15 neurons with \alpha = 0.4 and \beta = 1. In panels A and D, the 15 groups are represented by columns. Black refers to active neurons and white refers to inactive neurons. (A) 15 groups of width d = 5. (B) All permitted sets corresponding to the groups in A. (C) The 15 permitted sets in B that have no permitted supersets. They are the same as the groups in A. (D) 15 groups with width d = 6. (E) All permitted sets corresponding to the groups in D. (F) There are 20 permitted sets in E that have no permitted supersets. Note that there are 5 spurious permitted sets.

Corollary 2 In the ring network with N neurons, if the width d < N/3 + 1, then there is no spurious permitted set.

Fig. (1) shows the permitted sets of a ring network with 15 neurons. From Corollary 2, we know that if the group width is no larger than 5 neurons, there will not exist any spurious permitted set. In the left three panels of Fig. (1), the group width is 5 and all permitted sets are subsets of these groups. However, when the group width is 6 (right three panels), there exist 5 spurious permitted sets, as shown in panel F. As we have mentioned earlier, the lateral inhibition strength \beta plays a critical role in determining the dynamics of the network. Fig. (2) shows four types of steady states of a ring network corresponding to different values of \beta.

6 Discussion

We have shown that it is possible to organize lateral inhibition to mediate a winner-take-all competition between potentially overlapping groups of neurons. Our construction utilizes the distinction between permitted and forbidden sets of neurons. If there is strong lateral inhibition between two neurons, then any set that contains them is forbidden (Lemma 2). Neurons that belong to the same group do not have any mutual inhibition, and so they form a permitted set.
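Returning to the ring example, the width bound of Corollary 2 can be spot-checked computationally. The sketch below searches only for the simplest (n = 3) degeneracy witnesses of Definition 2; it is an illustration under that restriction, not an exhaustive test for all n:

```python
import numpy as np
from itertools import combinations

def ring_groups(N, d):
    """Every set of d contiguous neurons on a ring of N neurons is a group."""
    xi = np.zeros((N, N), dtype=int)
    for a in range(N):
        xi[a, [(a + k) % N for k in range(d)]] = 1
    return xi

def degenerate_triple(xi):
    """Simplest degeneracy witness of Definition 2 (n = 3): three neurons
    that pairwise share a group, with no group containing all three."""
    share = (xi.T @ xi) > 0
    for i, j, k in combinations(range(xi.shape[1]), 3):
        if share[i, j] and share[j, k] and share[i, k]:
            if not np.any(xi[:, i] * xi[:, j] * xi[:, k]):
                return (i, j, k)
    return None

# Corollary 2: no spurious permitted sets for width d < N/3 + 1
assert degenerate_triple(ring_groups(15, 5)) is None      # d = 5: nondegenerate
assert degenerate_triple(ring_groups(15, 6)) is not None  # d = 6: degenerate
```

For N = 15 and d = 6, neurons such as 0, 5, and 10 pairwise fit inside some window of 6 contiguous neurons, but no single window of 6 contains all three, matching the spurious permitted sets in panel F of Figure 1.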
Because the synaptic connections between neurons in the same group are only composed of self-excitation, their outputs equal their rectified inputs, amplified by the gain factor of 1/(1 - \alpha). Hence the neurons in the winning group operate in a purely analog regime. The coexistence of analog filtering with logical constraints on neural activation represents a form of hybrid analog-digital computation that may be especially appropriate for perceptual tasks. It might be possible to apply a similar method to the problem of data reconstruction using a constrained combination of basis vectors. The constraints on the linear combination could for example implement sparsity or nonnegativity constraints. As we have shown in Theorem 2, there are some degenerate cases of overlapping groups to which our method does not apply. It is an interesting open question whether there exists a general way to translate arbitrary groups of coactive neurons into permitted sets without involving spurious permitted sets. In the past, a great deal of research has been inspired by the idea of storing memories as dynamical attractors in neural networks [9]. Our theory suggests an alternative viewpoint, which is to regard permitted sets as memories latent in the synaptic connections. From this viewpoint, the contribution of the present paper is a method of storing and retrieving memories as permitted sets in neural networks.

Figure 2: Lateral inhibition strength \beta determines the behavior of the network. The network is a ring network of 15 neurons with width d = 5, where \alpha = 0.4 and input b_i = 1 for all i. These panels show the steady state activities of the 15 neurons. (A) There are no forbidden sets. (B) The marginal state \beta = (1 - \alpha)/\lambda_max = 0.874, in which the network forms a continuous attractor. (C) Forbidden sets exist, and so do spurious permitted sets. (D) Group-winner-take-all case, no spurious permitted sets.

References

[1] R. Hahnloser, R.
Sarpeshkar, M. Mahowald, R. Douglas, and H. S. Seung. Digital selection and analog amplification coexist in an electronic circuit inspired by neocortex. Nature, 405:947-951, 2000.
[2] Shun-Ichi Amari and Michael A. Arbib. Competition and Cooperation in Neural Nets, pages 119-165. Systems Neuroscience. Academic Press, 1977. J. Metzler (ed).
[3] J. Feng and K. P. Hadeler. Qualitative behaviour of some simple networks. J. Phys. A, 29:5019-5033, 1996.
[4] Richard H. R. Hahnloser. About the piecewise analysis of networks of linear threshold neurons. Neural Networks, 11:691-697, 1998.
[5] T. Kohonen. Self-Organization and Associative Memory. Springer-Verlag, Berlin, 3rd edition, 1989.
[6] D. D. Lee and H. S. Seung. Learning the parts of objects by nonnegative matrix factorization. Nature, 401:788-791, 1999.
[7] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996.
[8] R. Ben-Yishai, R. Lev Bar-Or, and H. Sompolinsky. Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. USA, 92:3844-3848, 1995.
[9] J. J. Hopfield. Neurons with graded response have collective properties like those of two-state neurons. Proc. Natl. Acad. Sci. USA, 81:3088-3092, 1984.
2000
Foundations for a Circuit Complexity Theory of Sensory Processing* Robert A. Legenstein & Wolfgang Maass Institute for Theoretical Computer Science Technische Universitat Graz, Austria {legi, maass}@igi.tu-graz.ac.at Abstract We introduce total wire length as a salient complexity measure for an analysis of the circuit complexity of sensory processing in biological neural systems and neuromorphic engineering. This new complexity measure is applied to a set of basic computational problems that apparently need to be solved by circuits for translation- and scale-invariant sensory processing. We exhibit new circuit design strategies for these new benchmark functions that can be implemented within realistic complexity bounds, in particular with linear or almost linear total wire length. 1 Introduction Circuit complexity theory is a classical area of theoretical computer science that provides estimates for the complexity of circuits for computing specific benchmark functions, such as binary addition, multiplication and sorting (see, e.g., (Savage, 1998)). In recent years interest has grown in understanding the complexity of circuits for early sensory processing, both from the biological point of view and from the point of view of neuromorphic engineering (see (Mead, 1989)). However, classical circuit complexity theory has provided little insight into these questions, both because its focus lies on a different set of computational problems, and because its traditional complexity measures are not tailored to those resources that are of primary interest in the analysis of neural circuits in biological organisms and neuromorphic engineering. This deficit is quite unfortunate, since there is growing demand for energy-efficient hardware for sensory processing, and complexity issues become very important since the number n of parallel inputs which such circuits have to handle is typically quite large (for example n >= 10^6 in the case of many visual processing tasks).
We will follow traditional circuit complexity theory in assuming that the underlying graph of each circuit is a directed graph without cycles.¹ The most frequently considered complexity measures in traditional circuit complexity theory are the number (and types) of gates, as well as the depth of a circuit. The latter is defined as the length of the longest directed path in the underlying graph, and is also interpreted as the computation time of the circuit. The focus lies in general on the classification of functions that can be computed by circuits whose number of gates can be bounded by a polynomial in the number n of input variables. This implicitly also provides a polynomial (typically quite large) bound on the number of "wires" (defined as the edges in the underlying graph of the circuit). We proceed on the assumption that the area (or volume in the case of neural circuits) occupied by wires is a severe bottleneck for physical implementations of circuits for sensory processing. Therefore we will not just count wires, but consider a complexity measure that provides an estimate for the total area or volume occupied by wires. In the cortex, neurons occupy an about 2 mm thick 3-dimensional sheet of "grey matter".

*Research for this article was partially supported by the Fonds zur Forderung der wissenschaftlichen Forschung (FWF), Austria, project P12153, and the NeuroCOLT project of the EC.
¹Neural circuits in "wetware" as well as most circuits in analog VLSI contain, in addition to feedforward connections, also lateral and recurrent connections. This fact presents a serious obstacle for a direct mathematical analysis of such circuits. The standard mathematical approach is to model such circuits by larger feedforward circuits, where new "virtual gates" are introduced to represent the state of existing gates at later points in time.
There exists a strikingly general upper bound on the order of 10^5 for the number of neurons under any mm^2 of cortical surface, and the total length of wires (axons and dendrites, including those running in the sheet of "white matter" that lies below the grey matter) under any mm^2 of cortical surface is estimated to be <= 8 km = 8 * 10^6 mm (Koch, 1999). Together this yields an upper bound of (8 * 10^6 / 10^5) * n = 80 * n mm for the wire length of the "average" cortical circuit involving n neurons. In order to arrive at a concise mathematical model we project each 3D cortical circuit into 2D, and assume for simplicity that its n gates (neurons) occupy the nodes of a grid. Then for a circuit with n gates, the total length of the horizontal components of all wires is on average <= 80 * n mm = 80 * 10^{5/2} * n, which is approximately 25300 * n grid units. Here, one grid unit is the distance between adjacent nodes on the grid, which amounts to 10^{-5/2} mm for an assumed density of 10^5 neurons per mm^2 of cortical surface. Thus we arrive at a simple test for checking whether the total wire length of a proposed circuit design has a chance to be biologically realistic: check whether you can arrange its n gates on the nodes of a grid in such a way that the total length of the horizontal components of all wires is <= 25300 * n grid units. More abstractly, we define the following model: gates, input- and output-ports of a circuit are placed on different nodes of a 2-dimensional grid (with unit distance 1 between adjacent nodes). These nodes can be connected by (unidirectional) wires that run through the plane in any way that the designer wants; in particular, wires may cross and need not run rectilinearly (wires are thought of as running in the 3-dimensional space above the plane, without charge for vertical wire segments).² We refer to the minimal value of the sum of all wire lengths that can be achieved by any such arrangement as the total wire length of the circuit. The attractiveness of this model lies in its mathematical simplicity, and in its generality. It provides a rough estimate for the cost of connectivity both in artificial (basically 2-dimensional) circuits and in neural circuits, where 2-dimensional wire crossing problems are apparently avoided (at least on a small scale) since dendritic and axonal branches are routed through 3-dimensional cortical tissue. There exist quite reliable estimates for the order of magnitude of the number n of inputs, the number of neurons, and the total wire length of biological neural circuits for sensory processing; see (Abeles, 1998; Koch, 1999; Shepherd, 1998; Braitenberg and Schüz, 1998).³

²We will allow that a wire from a gate may branch and provide input to several other gates. For reasonable bounds on the maximal fan-out (10^4 in the case of neural circuits) this is realistic both for neural circuits and for VLSI.
³The number of neurons that transmit information from the retina (via the thalamus) to the cortex
The attractiveness of this model lies in its mathematical simplicity, and in its generality. It provides a rough estimate for the cost of connectivity both in artificial (basically 2-dimensional) circuits and in neural circuits, where 2-dimensional wire crossing problems are apparently avoided (at least on a small scale) since dendritic and axonal branches are routed through 3-dimensional cortical tissue. There exist quite reliable estimates for the order of magnitudes for the number n of inputs, the number of neurons and the total wire length of biological neural circuits for sensory processing, see (Abeles, 1998; Koch, 1999; Shepherd, 1998; Braitenberg and Schüz, 1998).³ Collectively they suggest that only those circuit architectures for sensory processing are biologically realistic that employ a number of gates that is almost linear in the number n of inputs, and a total wire length that is quadratic or subquadratic, with the additional requirement that the constant factor in front of the asymptotic complexity bound has a value close to 1. Since most asymptotic bounds in circuit complexity theory have constant factors in front that are much larger than 1, one really has to focus on circuit architectures with clearly subquadratic bounds for the total wire length. The complexity bounds for circuits that can realistically be implemented in VLSI are typically even more severe than for "wetware", and linear or almost linear bounds for the total wire length are desirable for that purpose. In this article we begin the investigation of algorithms for basic pattern recognition tasks that can be implemented within this low-level complexity regime.

The architecture of such circuits has to differ strongly from most previously proposed circuits for sensory processing, which usually involve at least 2 completely connected layers, since already complete connectivity between just two linear-size 2-dimensional layers of a feedforward neural net requires a total wire length on the order of n^(5/2). Furthermore a circuit which first selects a salient input segment consisting of a block of up to m adjacent inputs in some 2-dimensional map, and then sends this block of ≤ m inputs in parallel to some central "pattern template matcher", typically requires a total wire length of O(n^(3/2)·m), even without taking the circuitry for the "selection" or the template matching into account.

2 Global Pattern Detection in 2-Dimensional Maps

For many important sensory processing tasks, such as for visual or somatosensory input, the input variables are arranged in a 2-dimensional map whose structure reflects spatial relationships in the outside world. We assume that local feature detectors are able to detect the presence of salient local features in their specific "receptive field", such as for example a center which emits higher (or lower) intensity than its immediate surrounding, or a high-intensity line segment in a certain direction, the end of a line, a junction of line segments, or even more complex local visual patterns like an eye or a nose. The ultimate computational goal is to detect specific global spatial arrangements of such local patterns, such as the letter "T", or in the end also a human face, in a translation- and scale-invariant manner.

²We will allow that a wire from a gate may branch and provide input to several other gates. For reasonable bounds on the maximal fan-out (10^4 in the case of neural circuits) this is realistic both for neural circuits and for VLSI.

³The number of neurons that transmit information from the retina (via the thalamus) to the cortex is estimated to be around 10^6 (all estimates given are for primates, and they only reflect the order of magnitude). The total number of neurons that transmit sensory (mostly somatosensory) information to the cortex is estimated to be around 10^8. In the subsequent sections we assume that these inputs represent the outputs of various local feature detectors for n locations in some 2-dimensional map. Thus, if one assumes for example that on average there are 10 different feature detectors for each location on this map, one arrives at biologically realistic estimates for n that lie between 10^5 and 10^7. The total number of neurons in the primary visual cortex of primates is estimated to be around 10^9, occupying an area of roughly 10^4 mm² of cortical surface. There are up to 10^5 neurons under one mm² of cortical surface, which yields a value of 10^(-5/2) mm for the distance between adjacent grid points in our model. The total length of axonal and dendritic branches below one mm² of cortical surface is estimated to be between 1 and 10 km, yielding up to 10^11 mm total wire length for primary visual cortex. Thus if one assumes that 100 separate circuits are implemented in primary visual cortex, each of them can use 10^7 neurons and a total wire length of 10^9 mm. Hence realistic bounds for the complexity of a single one of these circuits for visual pattern recognition are 10^7 = n^(7/5) neurons (for n = 10^5), and a total wire length of at most 10^(11.5) = n^(2.3) grid units in the framework of our model. The whole cortex receives sensory input from about 10^8 neurons. It processes this input with about 10^10 neurons and less than 10^12 mm total wire length. If one assumes that 10^3 separate circuits process this sensory information in parallel, each of them processing about 1/10th of the input (where again 10 different local feature detectors report about every location in a map), one arrives at n = 10^6 for each circuit, and each circuit can use on average n^(7/6) neurons and a total wire length of 10^(11.5) < n^2 grid units in the sense of our model. The actual resources available for sensory processing are likely to be substantially smaller, since most cortical neurons and circuits are believed to have many other functions besides online sensory processing.
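The order-of-magnitude bookkeeping for primary visual cortex (roughly 10^9 neurons and up to 10^11 mm of wire, shared among an assumed 100 separate circuits with n = 10^5 inputs each) can be checked directly; the exponents 7/5 and 2.3 quoted above fall out of the arithmetic:

```python
import math

# Primary visual cortex estimates (orders of magnitude only, as in the text).
neurons_v1, wire_mm_v1, circuits = 1e9, 1e11, 100

n = 1e5                                        # inputs per circuit
neurons_per_circuit = neurons_v1 / circuits    # 1e7 neurons
wire_mm_per_circuit = wire_mm_v1 / circuits    # 1e9 mm
wire_units = wire_mm_per_circuit * 10 ** 2.5   # grid units (1 mm = 10^(5/2) units)

# Express both budgets as powers of n:
print(math.log(neurons_per_circuit, n))        # 7/5 = 1.4, i.e. n^(7/5) neurons
print(math.log(wire_units, n))                 # 2.3, i.e. n^2.3 grid units
```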
We formalize 2-dimensional global pattern detection problems by assuming that the input consists of arrays a = (a_1, ..., a_n), b = (b_1, ..., b_n), etc. of binary variables that are arranged on a 2-dimensional square grid.⁴ Each index i can be thought of as representing a location within some √n × √n square in the outside world. We assume that a_i = 1 if and only if feature a is detected at location i, and that b_i = 1 if and only if feature b is detected at location i. In our formal model we can reserve a subsquare within the 2-dimensional grid for each index i, where the input variables a_i, b_i, etc. are given on adjacent nodes of this grid.⁵ Since we assume that this spatial arrangement of input variables reflects spatial relations in the outside world, many salient examples for global pattern detection problems require the computation of functions such as

P_D^n(a, b) = 1, if there exist i and j so that a_i = b_j = 1 and input location j is above and to the right of input location i; P_D^n(a, b) = 0, else.

Theorem 2.1 The function P_D^n can be computed - and witnesses i and j with a_i = b_j = 1 can be exhibited if they exist - by a circuit with total wire length O(n), consisting of O(n) Boolean gates of fan-in 2 (and fan-out 2) in depth O(log n · log log n). The depth of the circuit can be reduced to O(log n) if one employs threshold gates⁶ with fan-in log n. This can also be done with total wire length O(n).

Proof (sketch). At first sight it seems that P_D^n needs complete connectivity on the plane because of its global character. However, we show that there exists a divide and conquer approach with rather small communication cost. Divide the input plane into four sub-squares C_1, ..., C_4 (see Figure 1a). We write a^1, ..., a^4 and b^1, ..., b^4 for the restrictions of the input to these four sub-areas and assume that the following values have already been computed for each sub-square C_i:

• The x-coordinate of the leftmost occurrence of feature a in C_i
• The x-coordinate of the rightmost occurrence of feature b in C_i
• The y-coordinate of the lowest occurrence of feature a in C_i
• The y-coordinate of the highest occurrence of feature b in C_i
• The value of P_D^{n/4}(a^i, b^i)

We employ a merging algorithm that uses this information to compute corresponding values for the whole input plane. The first four values can be computed by comparison-like operations. The computation of P_D^n(a, b) can be sketched as follows: First, check whether P_D^{n/4}(a^i, b^i) = 1 for some i ∈ {1, ..., 4}. Then, check the spatial relationships between feature occurrences in adjacent sub-squares. When checking spatial relationships between features from two horizontally adjacent sub-squares, only the lowest and the highest feature occurrence is crucial for the value of P_D^n (see Figure 1b). This is true, since the x-coordinates are already separated. When checking spatial relationships of features from two vertically adjacent sub-squares, only the leftmost and the rightmost feature occurrence is crucial for the value of P_D^n (see Figure 1c). This is true, since the y-coordinates are already separated. When checking spatial relationships of features from the lower left and the upper right sub-squares, it suffices to check whether there is an a-feature occurrence in the lower left and a b-feature occurrence in the upper right sub-square. Hence, one can reduce the amount of information needed from each sub-square to O(log(n/4)) bits.

Figure 1: The 2-dimensional input plane. Occurrences of features in a are indicated by light squares, and occurrences of features in b are indicated by dark squares. Divide the input area into four sub-squares (a). Merging horizontally adjacent sub-squares (b). Merging vertically adjacent sub-squares (c).

In the remaining part of the proof sketch, we present an efficient layout for a circuit that implements this recursive algorithm. We need a layout strategy that is compatible with the recursive two-dimensional division of the input plane. We adopt for this purpose a well known design strategy: the H-tree (see (Mead and Rem, 1979)). An H-tree is a recursive tree-layout on the 2-dimensional plane. Let H_k denote such a tree with 4^k leaves. The layout of H_1 is illustrated in Figure 2a.

Figure 2: The H-tree construction. Black squares represent sub-circuits for the merging algorithm. The shaded areas contain the leaves of the tree. The lightly striped areas represent busses of wires that run along the edges of the H-tree. The H-tree H_1 divides the input area into four sub-squares (a). To construct H_2, replace the leaves of H_1 by H-trees H_1 (b). To construct H_k, replace the leaves of H_1 by H-trees H_{k-1} (c).

⁴Whenever needed we assume for simplicity that n is such that √n, log n etc. are natural numbers. The arrangement of the input variables on the grid will in general leave many nodes empty, which can be occupied by gates of the circuit.

⁵To make this more formal one can assume that indices i and j represent pairs (i_1, i_2), (j_1, j_2) of coordinates. Then "input location j is above and to the right of input location i" means: i_1 < j_1 and i_2 < j_2. The circuit complexity of variations of the function P_D where one or both of the "<" are replaced by "≤" is the same.

⁶A threshold gate computes a Boolean function T : {0,1}^k → {0,1} of the form T(x_1, ..., x_k) = 1 ⟺ Σ_{i=1}^k w_i x_i ≥ w_0.
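The divide-and-conquer scheme in the proof sketch is easy to check in software. The sketch below (ordinary sequential Python, not a circuit layout) evaluates the pattern detection function - is some b-feature strictly above and to the right of some a-feature? - both by brute force over all pairs and by the recursive merge that keeps only the leftmost/lowest a and rightmost/highest b summaries per sub-square, and verifies that the two agree on random inputs:

```python
import math, random

def p_d_brute(a, b):
    """a, b: dicts {(x, y): 0/1} on an s x s grid; returns 1 iff some b-feature
    lies strictly above and to the right of some a-feature."""
    return int(any(xa < xb and ya < yb
                   for (xa, ya), va in a.items() if va
                   for (xb, yb), vb in b.items() if vb))

def p_d_rec(a, b, x0, y0, s):
    """Summary (min_x_a, max_x_b, min_y_a, max_y_b, P_D) of the s x s sub-square
    with lower-left corner (x0, y0); +/- infinity marks an absent feature."""
    if s == 1:
        ax = x0 if a[(x0, y0)] else math.inf
        ay = y0 if a[(x0, y0)] else math.inf
        bx = x0 if b[(x0, y0)] else -math.inf
        by = y0 if b[(x0, y0)] else -math.inf
        return ax, bx, ay, by, 0
    h = s // 2
    ll = p_d_rec(a, b, x0, y0, h)          # lower left
    lr = p_d_rec(a, b, x0 + h, y0, h)      # lower right
    ul = p_d_rec(a, b, x0, y0 + h, h)      # upper left
    ur = p_d_rec(a, b, x0 + h, y0 + h, h)  # upper right
    quads = [ll, lr, ul, ur]
    pd = any(q[4] for q in quads)
    # horizontally adjacent pairs: x already separated, compare lowest a / highest b
    pd = pd or ll[2] < lr[3] or ul[2] < ur[3]
    # vertically adjacent pairs: y already separated, compare leftmost a / rightmost b
    pd = pd or ll[0] < ul[1] or lr[0] < ur[1]
    # lower-left vs upper-right: any a below and any b above suffices
    pd = pd or (ll[0] < math.inf and ur[1] > -math.inf)
    return (min(q[0] for q in quads), max(q[1] for q in quads),
            min(q[2] for q in quads), max(q[3] for q in quads), int(pd))

random.seed(0)
s = 8  # grid side, a power of 2
for _ in range(200):
    a = {(x, y): random.random() < 0.05 for x in range(s) for y in range(s)}
    b = {(x, y): random.random() < 0.05 for x in range(s) for y in range(s)}
    assert p_d_rec(a, b, 0, 0, s)[4] == p_d_brute(a, b)
print("divide-and-conquer merge agrees with brute force")
```

Note that each merge step consumes only the O(log n)-bit summaries of its four sub-squares, which is exactly what keeps the communication (and hence the wiring) of the H-tree layout cheap.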
To construct an H-tree H_k, build an H-tree H_1 and replace its four leaves by H-trees H_{k-1} (see Figure 2b,c). We need to modify the H-tree construction of Mead and Rem to make it applicable to our problem. The inner nodes of the tree are replaced by sub-circuits that implement the merging algorithm. Furthermore, each edge of the H-tree is replaced by a "bus" consisting of O(log m) wires if it originates in an area with m inputs. It is not difficult to show that this layout uses only linear total wire length. •

The linear total wire length of this circuit is up to a constant factor optimal for any circuit whose output depends on all of its n inputs. Note that most connections in this circuit are local, just like in a biological neural circuit. Thus, we see that minimizing total wire length tends to generate biology-like circuit structures. The next theorem shows that one can compute P_D^n faster (i.e., by a circuit with smaller depth) if one can afford a somewhat larger total wire length. This circuit construction, which is based on AND/OR gates of limited fan-in ≤ Δ, has the additional advantage that it can not just exhibit some pair (i, j) as witness for P_D^n(a, b) = 1 (provided such a witness exists), but it can exhibit in addition all j that can be used as witness together with some i. This property allows us to "chain" the global pattern detection problem formalized through the function P_D^n, and to decide within the same complexity bound whether for any fixed number k of input vectors a^(1), ..., a^(k) from {0,1}^n there exist locations i(1), ..., i(k) so that a^(m)_{i(m)} = 1 for m = 1, ..., k, and location i(m+1) lies to the right and above location i(m) for m = 1, ..., k-1. In fact, one can also compute a k-tuple of witnesses i(1), ..., i(k) within the same complexity bounds, provided it exists. This circuit design is based on an efficient layout for prefix computations.

Theorem 2.2 For any given n and Δ ∈ {2, ...
, √n} one can compute the function P_D^n in depth O(log n / log Δ) by a feed-forward circuit consisting of O(n) AND/OR gates of fan-in ≤ Δ, with total wire length O(n · Δ · log n / log Δ). •

Another essential ingredient of translation- and scale-invariant global pattern recognition is the capability to detect whether a local feature c occurs in the middle between locations i and j where the local features a and b occur. This global pattern detection problem is formalized through the following function P_F : {0,1}^{3n} → {0,1}: if a and b each contain exactly one 1, then P_F(a, b, c) = 1 if and only if there exist i, j, k so that input location k lies on the middle of the line between locations i and j, and a_i = b_j = c_k = 1. This function P_F can be computed very fast by circuits with the least possible total wire length (up to a constant factor), using threshold gates of fan-in up to √n:

Theorem 2.3 The function P_F can be computed - and witnesses can be exhibited - by a circuit with total wire length and area O(n), consisting of O(n) Boolean gates of fan-in 2 and O(√n) threshold gates of fan-in √n in depth 7. The design of the circuit exploits that the computation of P_F can be reduced to the solution of two closely related 1-dimensional problems. •

3 Discussion

There exists a very large literature on neural circuits for translation-invariant pattern recognition; see http://www.cnl.salk.edu/~wiskott/Bibliographies/Invariances.html. Unfortunately there exists substantial disagreement regarding the interpretation of existing approaches; see http://www.ph.tn.tudelft.nl/PRInfo/shift/maillist.html. Virtually all positive results are based on computer simulations of small circuits, or on learning algorithms for concrete neural networks with a fixed input size n on the order of 20 or 30, without an analysis of how the required number of gates and the area or volume occupied by wires scale up with the input size. The computational performance of these networks is often reported in an anecdotal manner.
The goal of this article is to show that circuit complexity theory may become a useful ingredient for understanding the computational strategies of biological neural circuits, and for extracting from them portable principles that can be applied to novel artificial circuits.⁷ For that purpose we have introduced the total wire length as an abstract complexity measure that appears to be among the most salient ones in this context, and which can in principle be applied both to neural circuits in the cortex and to artificial circuitry. We would like to argue that only those computational strategies that can be implemented with subquadratic total wire length have a chance to reflect aspects of cortical information processing, and only those with almost linear total wire length are implementable in special purpose VLSI chips for real-world sensory processing tasks.⁸ The relevance of the total wire length of cortical circuits has been emphasized by numerous neuroscientists, from Cajal (see for example p. 14 in (Cajal, 1995)) to (Chklovskii and Stevens, 2000). On the other hand, the total wire length of a circuit layout is also closely related to the area required by a VLSI implementation of such a circuit (see (Savage, 1998)). We have formalized some basic computational problems, which appear to underlie various translation- and scale-invariant sensory processing tasks, as a first set of benchmark functions for a circuit complexity theory of sensory processing. We have presented designs for circuits that compute these benchmark functions with small (in most cases linear or almost linear) total wire length, and with constant factors of moderate size. The computational strategies of these circuits differ strongly from those that have been considered in previous approaches, which failed to take into account the limitations imposed by the realistically available amount of total wire length.

References

Abeles, M. (1998).
Corticonics: Neural Circuits of the Cerebral Cortex, Cambridge Univ. Press.
Braitenberg, V., Schüz, A. (1998). Cortex: Statistics and Geometry of Neuronal Connectivity, 2nd ed., Springer Verlag.
Cajal, S.R. (1995). Histology of the Nervous System, volumes 1 and 2, Oxford University Press (New York).
Chklovskii, D.B. and Stevens, C.F. (2000). Wiring optimization in the brain. Advances in Neural Information Processing Systems, vol. 12, MIT Press, 103-107.
Koch, C. (1999). Biophysics of Computation, Oxford Univ. Press.
Lazzaro, J., Ryckebusch, S., Mahowald, M.A., Mead, C.A. (1989). Winner-take-all networks of O(n) complexity. Advances in Neural Information Processing Systems, vol. 1, Morgan Kaufmann (San Mateo), 703-711.
Mead, C. and Rem, M. (1979). Cost and performance of VLSI computing structures. IEEE J. Solid State Circuits, SC-14, 455-462.
Mead, C. (1989). Analog VLSI and Neural Systems. Addison-Wesley (Reading, MA, USA).
Savage, J.E. (1998). Models of Computation: Exploring the Power of Computing. Addison-Wesley (Reading, MA, USA).
Shepherd, G.M. (1998). The Synaptic Organization of the Brain, 2nd ed., Oxford Univ. Press.

⁷We do not want to argue that learning plays no role in the design and optimization of circuits for specific sensory processing tasks; on the contrary. But one of the few points on which the discussion from http://www.ph.tn.tudelft.nl/PRInfo/shift/maillist.html agreed is that translation- and scale-invariant pattern recognition is a task which is so demanding that learning algorithms have to be supported by pre-existing circuit structures.

⁸Of course there are other important complexity measures for circuits, such as energy consumption, besides those that have been addressed in this article.
Bayes Networks on Ice: Robotic Search for Antarctic Meteorites

Liam Pedersen*, Dimi Apostolopoulos, Red Whittaker
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213
{pedersen+, dalv, red}@ri.cmu.edu

Abstract

A Bayes network based classifier for distinguishing terrestrial rocks from meteorites is implemented onboard the Nomad robot. Equipped with a camera, spectrometer and eddy current sensor, this robot searched the ice sheets of Antarctica and autonomously made the first robotic identification of a meteorite, in January 2000 at the Elephant Moraine. This paper discusses rock classification from a robotic platform, and describes the system onboard Nomad.

1 Introduction

Figure 1: Human meteorite search with snowmobiles on the Antarctic ice sheets, and on foot in the moraines.

Antarctica contains the most fertile meteorite hunting grounds on Earth. The pristine, dry and cold environment ensures that meteorites deposited there are preserved for long periods. Subsequent glacial flow of the ice sheets where they land concentrates them in particular areas. Most of the meteorites recovered throughout history have been found in Antarctica, in the last 20 years. Furthermore, they are less likely to be contaminated by terrestrial compounds.

Meteorites are of interest to space scientists because, with the exception of the Apollo lunar samples, they are the sole source of extra-terrestrial material and a window on the early evolution of the solar system. The identification of Martian and lunar meteorite samples, and the (controversial) evidence of fossil bacteria in the former, underscores the importance of systematically retrieving as many samples as possible. Currently, Antarctic meteorite samples are collected by human searchers, either on foot or on snowmobiles, who systematically search an area and retrieve samples according to strict protocols. In certain blue ice fields the only rocks visible are meteorites.

*http://www.cs.cmu.edu/~pedersen
At other places (moraines, areas where the ice flow brings rocks to the surface) searchers have to contend with many terrestrial rocks (Figure 1).

1.1 Robotic search for Antarctic meteorites

Figure 2: Nomad robot, equipped with scientific instruments (color camera, reflectance spectrometer), investigates a rock in Antarctica.

With the goal of autonomously searching for meteorites in Antarctica, Carnegie Mellon University has built and demonstrated [1] a robot, Nomad (Figure 2), capable of long duration missions in harsh environments. Nomad is equipped with a color camera on a pan-tilt platform to survey the ice for rocks and acquire close-up images of any candidate objects, and a manipulator arm to place the fiber optic probe of a specially designed visible light reflectance spectrometer over a sample. The manipulator arm can also place other sensors, such as a metal detector. The eventual goal, beyond Antarctic meteorite search, is to develop technologies for extended robotic exploration of remote areas, including planetary surfaces. One particular technology is the capacity to carry out autonomous science, including autonomous geology and the ability to recognize a broad range of rock types and note exceptions. Identifying meteorites amongst terrestrial rocks is the fundamental engineering problem of robotic meteorite search and is the topic addressed by the rest of this paper.

2 Bayes network rock and meteorite classifier

Classifying rocks from a mobile robotic vehicle entails several unique issues:

• The classifier must learn from examples. Human experts often have trouble explaining how they can identify many rocks, and will refer to an example. In the words of a veteran Antarctic meteorite searcher [2], "First you find a few meteorites, then you know what to look for". A complication is the difficulty of acquiring large sets of training data under realistic field conditions.
To date this has required two earlier expeditions to Antarctica, as well as visits to the Arctic and the Atacama desert in Chile. Therefore, it is necessary to constrain a classifier as much as possible with available prior knowledge, so that training can be accomplished with minimum data.

• The classifier must be able to accept incomplete data, and compound evidence for different hypotheses as more information becomes available. The robot has multiple sensors, and there is a cost associated with using each one. Sensors such as the spectrometer are particularly expensive to use because the robot must be maneuvered to bring the rock sample into the sensor manipulator workspace. Therefore, it is desirable that initial classifications be made using data from cheap long range sensors, such as a color camera, before final verification using expensive sensors on promising rock samples. A corollary of this is that the classifier should accept prior evidence from other sources, such as an expert's knowledge of what to expect in a particular location.

• Rock classes are often ambiguous, and the distinctions between certain types fuzzy at best [3]. The classifier must handle this ambiguity, and indicate several likely hypotheses if a definite classification cannot be achieved.

These requirements for a robotic rock classifier argue strongly in favor of a Bayes network based approach, which can satisfy them all. The intuitive graphical structure of a Bayes network makes it easier to encode physical constraints into the network topology, thus reducing the intrinsic dimensionality. Bayesian update is a principled way to compound evidence, and prior information is naturally represented by prior probabilities. Additionally, with a Bayes network it is simple to compute the likelihood of any new data, and thus conceivably recognize bad sensor readings.
Furthermore, the network can be queried to estimate the information gain of further sensor readings, enabling active sensor selection.

2.1 Network architecture

The (simplified) network architecture for distinguishing rocks from meteorites, using features from sensor data, is shown in Figure 3. It is a compromise between a fully connected network (no constraints whatsoever, and computationally intractable) and a naive Bayes classifier (can be efficiently evaluated, but lacks sufficient representational power). Sensor features are only weakly (conditionally) dependent on each other because of a careful choice of suitable features, and the intermediate node Rock-type, whose states include all possible rock and meteorite types likely to be encountered by the classifier. A complication is that the sensor features are continuous quantities, yet the Bayes network implementation can only handle discrete variables. Therefore the continuous variables need to be suitably quantized.

Figure 3: Bayes network for discriminating meteorites and rocks based on features computed from sensor data. (The Rock-type node ranges over types such as iron meteorite or sandstone; the Meteorite? node is true/false.)

2.2 Sensors and feature vectors

Figure 4: Example spectrum, with extracted features (the strength of a peak (+) or trough (-) at a given wavelength, over roughly 400-1000 nm), and color images of rocks on ice. One of the rocks in the image is a meteorite.

In Antarctica Nomad acquired reflectance spectra and color images (Figure 4) of sample rocks. Spectra are obtained by shining white light on the sample and analyzing the reflected light to determine the fraction of light reflected at a series of wavelengths. The relevant features in a spectrum, for the purpose of identifying rocks, are the presence, location and size of peaks and troughs in the spectrum (Figure 4), and the average magnitude (albedo) of the spectrum over certain wavelengths.
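To make the two-layer architecture concrete, here is a toy sketch of the inference it supports; the rock types, feature names and all probability values below are hypothetical stand-ins, not the CPMs learned for Nomad. The Meteorite? node is deterministic given Rock-type, so P(meteorite) is simply the posterior mass on meteorite types, and incomplete data is handled by omitting unobserved feature nodes:

```python
# Toy sketch of the Rock-type -> features architecture (all numbers hypothetical).

ROCK_TYPES = ["iron meteorite", "chondrite", "sandstone", "dolerite"]
METEORITE = {"iron meteorite", "chondrite"}   # Meteorite? is deterministic given type

prior = {"iron meteorite": 0.02, "chondrite": 0.03, "sandstone": 0.55, "dolerite": 0.40}

# P(feature value | rock type) for two quantized features (made-up CPMs)
cpm = {
    "albedo": {"low":  {"iron meteorite": 0.7, "chondrite": 0.6, "sandstone": 0.1, "dolerite": 0.3},
               "high": {"iron meteorite": 0.3, "chondrite": 0.4, "sandstone": 0.9, "dolerite": 0.7}},
    "peak_900nm": {"yes": {"iron meteorite": 0.8, "chondrite": 0.7, "sandstone": 0.1, "dolerite": 0.6},
                   "no":  {"iron meteorite": 0.2, "chondrite": 0.3, "sandstone": 0.9, "dolerite": 0.4}},
}

def posterior(evidence):
    """P(rock type | observed features); unobserved features are simply omitted,
    which is how the network accepts incomplete data."""
    score = {t: prior[t] for t in ROCK_TYPES}
    for feat, val in evidence.items():
        for t in ROCK_TYPES:
            score[t] *= cpm[feat][val][t]
    z = sum(score.values())
    return {t: s / z for t, s in score.items()}

def p_meteorite(evidence):
    return sum(p for t, p in posterior(evidence).items() if t in METEORITE)

print(p_meteorite({"albedo": "low"}))                       # camera evidence only
print(p_meteorite({"albedo": "low", "peak_900nm": "yes"}))  # after adding a spectrum
```

The intermediate Rock-type node is what keeps this tractable: features are conditionally independent given the type, so evidence from each sensor multiplies in one factor at a time.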
Spectral troughs and peaks are detected by computing the correlation of the spectrum with a set of 10 templates over a finite region of support (50 nm). Restricting the degree of overlap between templates minimizes statistical dependencies between the resulting spectral features (Figure 3). Normalizing the correlation coefficients makes them (conditionally) independent of the average spectral intensity and robust to changes of scale (important because, in practice, when making a field measurement of a spectrum it is difficult to accurately determine the scale). A 13 element real valued feature vector (each component corresponding to a sensor feature node in Figure 3) is thus obtained from the original 1000+ element spectrum. Color images are harder to interpret (one of the rocks in Figure 4 is a meteorite). First the rock needs to be segmented from the background of snow and ice in the image, using a partially observable Markov model [4]. Features of interest are the rock cross sectional area (used as a proxy for size, and requiring that the scaling of the images be known), average color, and simple texture and shape metrics [4]. Meteorites tend to be small and dark compared to terrestrial rocks. An 8 element real valued feature vector is computed from each image. All real valued features are quantized prior to being entered into the Bayes network, which cannot handle continuous quantities.

2.3 Network training

The conditional probability matrices (CPMs) describing the probability distributions of network sensor feature nodes given Rock-type (and other parent nodes) are learned from examples (of rock types along with the associated feature vectors derived from sensor readings on rock samples of the given type) using the algorithm in [5]. If X is a node (with N states) with parent Y, and with CPM p_ij = P(X=i|Y=j), then each column is represented by a Dirichlet distribution (initially uniform) and assumed independent of the others.
If α_1, ..., α_N are the Dirichlet parameters for P(X|Y=j), then p_ij = α_i / Σ_k α_k [6]. Given a new example {X=i, Y=j} with weight w, the Dirichlet parameters are updated: α_i ← α_i + w. This is a true Bayesian learning algorithm, and is stable. Furthermore, it is possible to weight each training sample to reflect its frequency of occurrence for the rock type that generated it. This is especially important if multiple sensor readings are taken from a single sample.

Figure 5: Classifier rate of classification curves (correct classifications vs. false positives) using laboratory data for training and testing (25% cross validation), for different sensors.

The training data (gathered from previous Antarctic expeditions, and from US laboratory collections* of meteorites and Antarctic rocks) is insufficient to fully populate the (quantized) space on which the CPMs are defined, unless the real valued feature nodes are very coarsely quantized.
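The weighted Dirichlet update described above amounts to only a few lines of code. The class below is an illustrative sketch (the names are ours), with one Dirichlet distribution per CPM column as in [5]:

```python
# Sketch of the weighted Dirichlet update for one CPM (notation as in the text:
# node X with N states, parent state Y=j, point estimate p_ij = P(X=i | Y=j)).

class DirichletCPM:
    def __init__(self, n_states, n_parent_states):
        # one Dirichlet per column, initially uniform (all parameters 1)
        self.alpha = [[1.0] * n_states for _ in range(n_parent_states)]

    def update(self, i, j, w=1.0):
        """Observe example {X=i, Y=j} with weight w: alpha_i <- alpha_i + w."""
        self.alpha[j][i] += w

    def p(self, i, j):
        """Point estimate p_ij = alpha_i / sum_k alpha_k."""
        return self.alpha[j][i] / sum(self.alpha[j])

cpm = DirichletCPM(n_states=3, n_parent_states=2)
for _ in range(8):
    cpm.update(i=0, j=1)        # eight unit-weight observations of X=0 given Y=1
cpm.update(i=2, j=1, w=0.5)     # a fractional-weight observation
print(round(cpm.p(0, 1), 3))    # (1+8)/(3+8+0.5) -> 0.783
```

The fractional weight w is what lets repeated readings from one rock sample count as less than independent examples.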
To avoid this, more spectral data was generated from each sample spectrum by adding random noise (generated by a non-linear spectrometer noise model) to it. (This is analogous to the approach used by [7] for training neural networks.) Using meteorite and terrestrial rock data acquired in the lab, partitioned into 75% training and 25% testing cross validation sets, the Rate of Classification (ROC) curves in Figure 5 are generated. Note the superior classification with spectra versus classification with color images only. In fact, given a spectrum, a color image does not improve classification. However, because it is easier to acquire color images than spectra, they are still useful as a sensor for preliminary screening.

*Johnson Space Center, Houston, and Ohio State University, Columbus.

3 Antarctica 2000 field results

In January 2000 the Nomad robot was deployed to the Elephant moraine in Antarctica for robotic meteorite searching trials. Nomad searched areas known to contain meteorites, autonomously acquiring color images and reflection spectra of both native terrestrial rocks and meteorites, and classifying them. On January 22, 2000 Nomad successfully identified a meteorite amongst terrestrial rocks on the ice sheet (http://www.frc.ri.cmu.edu/projects/meteorobot2000/).

Figure 6: Rate of classification curves for the Nomad robot searching for meteorites in Antarctica, 2000 A.D.

Overall performance (using spectra only, due to a problem that developed with camera zoom control) is indicated by the ROC performance curves in Figure 6. These were generated from a test set of rocks and meteorites (40 and 4 samples respectively, with multiple readings of each) in a particular area of the moraine.
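For reference, ROC curves of the kind shown in Figures 5 and 6 are traced by sweeping a decision threshold over the classifier's meteorite probability; a minimal sketch with made-up scores:

```python
# Sketch of how an ROC curve is traced by sweeping a threshold on the
# classifier's P(meteorite | evidence). The scores below are made up.

def roc_points(scores, labels):
    """Return (false_positive_rate, true_positive_rate) pairs, one per threshold."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for thr in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
        pts.append((fp / neg, tp / pos))
    return pts

# labels: 1 = meteorite, 0 = terrestrial rock (hypothetical test set)
scores = [0.9, 0.8, 0.35, 0.3, 0.2, 0.1]
labels = [1,   1,   0,    1,   0,   0]
print(roc_points(scores, labels))
```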
Figure 6(i) uses the a priori classifier built from the lab data (used to generate Figure 5), acquired prior to arrival in Antarctica. Performance clearly does not match that in Figure 5. There is a notable improvement in (ii), the ROC curve for the same classifier further trained with field data acquired by the robot in the area (from 8 rocks and 2 meteorites not in this test set). Even with retraining, classification is systematically poor for a particular class of rocks (hydro-thermally altered dolerites and basalts) that occurred in the Elephant moraine. These rocks are stained red with iron oxide (rust), whose spectrum has a very prominent peak at 900 nm, precisely where many meteorite spectra also have a peak. This is not surprising, given that most meteorites contain metallic iron, and therefore can have rust on the surface. However, these rocks were absent from the initial training set and not initially expected in this area. Performance is much better if these rocks are removed from the test set (iii) and the retrained classifier is used. 4 Conclusions With the caveat that training be continued using data acquired by the robot in the field, the Bayes network approach to robotic rock classification is a viable approach to this task. Nomad did autonomously identify several meteorites. However, in areas with hydro-thermally altered rocks (iron-oxide stained) the reflection spectrometer must be supplemented by other sensors, such as metal detectors, magnetometers or more exotic spectrometers (thermal emission or Raman), obviously at greater cost. Sensor noise and systematic effects due to autonomous robot placement of sensors on samples in the unstructured and uncontrolled polar environment are significant. They are hard to know a priori and need to be learned from data acquired by the robot, in field conditions, as demonstrated by the significant improvement in classification achieved after field retraining.
Further work needs to be done in sensor selection, active modeling of the local geographical distribution of rocks, and recognizing bad sensor readings, but indications are that this can be done in a principled way with the Bayes network classifier and will be addressed in future papers. Acknowledgments The authors gratefully acknowledge the invaluable assistance of Professor William Cassidy of the University of Pittsburgh, Professor Gunter Faure of Ohio State University, Marilyn Lindstrom and the staff at the Antarctic meteorite curation facility of NASA's Johnson Space Center, and Drs. Martial Hebert and Andrew Moore of Carnegie Mellon University. This work was funded by NASA, and supported in Antarctica by the National Science Foundation's Office of Polar Programs. References [1] D. Apostolopoulos, M. Wagner, W. Whittaker, "Technology and Field Demonstration Results in the Robotic Search for Antarctic Meteorites", Field and Service Robotics Conference, Pittsburgh, USA, 1999. [2] Cassidy, William, University of Pittsburgh Department of Geology, personal communication, 1997. [3] R. Dietrich and B. Skinner, Rocks and Minerals, Wiley, 1979. [4] L. Pedersen, D. Apostolopoulos, W. Whittaker, T. Roush, G. Benedix, "Sensing and Data Classification for Robotic Meteorite Search", Proceedings of SPIE Photonics East Conference, Boston, 1998. [5] Spiegelhalter, David J., A. Philip Dawid, Steffen L. Lauritzen and Robert G. Cowell, "Bayesian analysis in expert systems", Statistical Science, 8(3), pp. 219-283, 1993. [6] A. Gelman, J. Carlin, H. Stern, D. Rubin, Bayesian Data Analysis, Chapman & Hall, 1995. [7] D. Pomerleau, "Efficient Training of Artificial Neural Networks for Autonomous Navigation", Neural Computation, vol. 3, no. 1, pp. 88-97, 1991.
2000
93
1,898
Learning and Tracking Cyclic Human Motion D. Ormoneit, Dept. of Computer Science, Stanford University, Stanford, CA 94305, ormoneit@cs.stanford.edu; M. J. Black, Dept. of Computer Science, Brown University, Box 1910, Providence, RI 02912, black@cs.brown.edu; H. Sidenbladh, Royal Institute of Technology (KTH), CVAP/NADA, S-100 44 Stockholm, Sweden, hedvig@nada.kth.se; T. Hastie, Dept. of Statistics, Stanford University, Stanford, CA 94305, hastie@stat.stanford.edu. Abstract We present methods for learning and tracking human motion in video. We estimate a statistical model of typical activities from a large set of 3D periodic human motion data by segmenting these data automatically into "cycles". Then the mean and the principal components of the cycles are computed using a new algorithm that accounts for missing information and enforces smooth transitions between cycles. The learned temporal model provides a prior probability distribution over human motions that can be used in a Bayesian framework for tracking human subjects in complex monocular video sequences and recovering their 3D motion. 1 Introduction The modeling and tracking of human motion in video is important for problems as varied as animation, video database search, sports medicine, and human-computer interaction. Technically, the human body can be approximated by a collection of articulated limbs and its motion can be thought of as a collection of time-series describing the joint angles as they evolve over time. A key challenge in modeling these joint angles involves decomposing the time-series into suitable temporal primitives. For example, in the case of repetitive human motion such as walking, motion sequences decompose naturally into a sequence of "motion cycles". In this work, we present a new set of tools that carry out this segmentation automatically using the signal-to-noise ratio of the data in an aligned reference domain.
This procedure allows us to use the mean and the principal components of the individual cycles in the reference domain as a statistical model. Technical difficulties include missing information in the motion time-series (resulting from occlusions) and the necessity of enforcing smooth transitions between different cycles. To deal with these problems, we develop a new iterative method for functional Principal Component Analysis (PCA). The learned temporal model provides a prior probability distribution over human motions that can be used in a Bayesian framework for tracking. The details of this tracking framework are described in [7] and are briefly summarized here. Specifically, the posterior distribution of the unknown motion parameters is represented using a discrete set of samples and is propagated over time using particle filtering [3, 7]. Here the prior distribution based on the PCA representation improves the efficiency of the particle filter by constraining the samples to the most likely regions of the parameter space. The resulting algorithm is able to track human subjects in monocular video sequences and to recover their 3D motion under changes in their pose and against complex unknown backgrounds. Previous work on modeling human motion has focused on the recognition of activities using Hidden Markov Models (HMMs), linear dynamical models, or vector quantization (see [7, 5] for a summary of related work). These approaches typically provide a coarse approximation to the underlying motion. Alternatively, explicit temporal curves corresponding to joint motion may be derived from biometric studies or learned from 3D motion-capture data. In previous work on principal component analysis of motion data, the 3D motion curves corresponding to particular activities typically had to be hand-segmented and aligned [1, 7, 8].
By contrast, this paper details an automated method for segmenting the data into individual activities, aligning activities from different examples, modeling the statistical variation in the data, dealing with missing data, enforcing smooth transitions between cycles, and deriving a probabilistic model suitable for a Bayesian interpretation. We focus here on cyclic motions, which are a particularly simple but important class of human activities [6]. While Bayesian methods for tracking 3D human motion have been suggested previously [2, 4], the prior information obtained from the functional PCA proves particularly effective for determining a low-dimensional representation of the possible human body positions [8, 7]. 2 Learning Training data, provided by a commercial motion-capture system, describes the evolution of m = 19 relative joint angles over a period of about 500 to 5000 frames. We refer to the resulting multivariate time-series as a "motion sequence" and we use the notation z_i(t) ≡ {z_{a,i}(t) | a = 1, ..., m} for t = 1, ..., T_i to denote the angle measurements. Here T_i denotes the length of sequence i and a = 1, ..., m is the index for the individual angles. Altogether, there are n = 20 motion sequences in our training set. Note that missing observations occur frequently as body markers are often occluded during motion capture. An associated set I_{a,i} ≡ {t ∈ {1, ..., T_i} | z_{a,i}(t) is not missing} indicates the positions of valid data. 2.1 Sequence Alignment Periodic motion is composed of repetitive "cycles" which constitute a natural unit of statistical modeling and which must be identified in the training data prior to building a model. To avoid error-prone manual segmentation we present alignment procedures that segment the data automatically by separately estimating the cycle length and a relative offset parameter for each sequence. The cycle length is computed by searching for the value p that maximizes the "signal-to-noise ratio":
_ " signali,a (p) stn_ratzo;(p) = ~ . () , a nozse;,a p (1) IMteoOl Slgnal-tCH'IOl &O ~:E;¥3~~~=-~ m IIII~ 1III11~lllil 1.llilillllli~ IIII 1 i ~~t : ..... : : : 3' i}= ~~~t : .1 : : : 3' ~ J #?,8 ; .. ~e ~O; 'ft; ~ ~t : ~ : : : r i;~ ~~~t : ... : : : r !~t : : .... : : : J' !; r ~ ! ~~t : : .1 : : : r E:= 1] :'" , ....&.. , , ::: J' -2 l ~~t : : .1 : : J' ~;~ ~ ~~ i OOOt : : ..l : : r ~ :~ ; 50::0 ~ '00 , ~ ~" Figure 1: Left: Signal-to-noise ratio of a representative set of angles as a function of the candidate period length. Right: Aligned representation of eight walking sequences. where noisei,a (p) is the variation in the data that is not explained by the mean cycle, z, and signal;,a (P) measures the signal intensity. 1 In Figure 1 we show the individual signal-to-noise ratios for a subset of the angles as well as the accumulated signal-to-noise ratio as functions of p in the range {50, 51, ... , 250}. Note the peak of these values around the optimal cycle length p = 126. Note also that the signalto-noise ratio of the white noise series in the first row is approximately constant, warranting the unbiasedness of our approach. Next, we estimate the offset parameters, 0, to align multiple motion sequences in a common domain. Specifically, we choose 0(1) ,0(2) , ... , o(n) so that the shifted motion sequences minimize the deviation from a common prototype model by analogy to the signal-to-noise-criterion (1). An exhaustive search for the optimal offset combination is computationally infeasible. Instead, we suggest the following iterative procedure: We initialize the offset values to zero in Step 1, and we define a reference signal ra in Step 2 so as to minimize the deviation with respect to the aligned data. This reference signal is a periodically constrained regression spline that ensures smooth transitions at the boundaries between cycles. 
Next, we choose the offsets of all sequences so that they minimize the prediction error with respect to the reference signal (Step 3). By contrast to the exhaustive search, this operation requires O(Σ_{i=1}^n p^(i)) comparisons. Because the solution of the first iteration may be suboptimal, we construct an improved reference signal using the current offset estimates, and use this signal in turn to improve the offset estimates. Repeating these steps, we obtain an iterative optimization algorithm that is terminated if the improvement falls below a given threshold. Because Steps 2 and 3 both decrease the prediction error, the algorithm converges monotonically. Figure 1 (right) shows eight joint angles of a walking motion, aligned using this procedure. 2.2 Functional PCA The above alignment procedures segment the training data into a collection of cycle data called "slices". Next, we compute the principal components of these slices, which can be interpreted as the major sources of variation in the data. The algorithm is as follows. (¹The mean cycle is obtained by "folding" the original sequence into the domain {1, ..., p}. For brevity, we don't provide formal definitions here; see [5].)
1. For a = 1, ..., m and i = 1, ..., n: (a) Dissect z_{i,a} into K_i cycles of length p^(i), marking missing values at both ends. This gives a new set of time series z^(k)_{i,a} for k = 1, ..., K_i, where K_i = ⌈T_i / p^(i)⌉ + 1. Let I_{k,a} be the new index set for this series. (b) Compute functional estimates f̂_{k,a} in the domain [0, 1]. (c) Resample the data in the reference domain, imputing missing observations. This gives yet another time series z̃^(k)_{i,a}(j) := f̂_{k,a}(j/T) for j = 0, 1, ..., T.
2. Stack the "slices" z̃^(k)_{i,a} obtained from all sequences row-wise into a (Σ_i K_i) × mT design matrix X.
3. Compute the row-mean μ of X, and let X^(1) := X − 1μ'. (1 is a vector of ones.)
4. Slice by slice, compute the Fourier coefficients of X^(1), and store them in a new matrix, X^(2).
Use the first 20 coefficients only.
5. Compute the Singular Value Decomposition of X^(2): X^(2) = USV'.
6. Reconstruct X^(2) using the rank-q approximation to S: X^(3) = US_qV'.
7. Apply the Inverse Fourier Transform and add 1μ' to obtain X^(4).
8. Impute the missing values in X using the corresponding values in X^(4).
9. Evaluate ||X − X^(4)||. Stop if the performance improvement is below 10^{-6}; otherwise, go to Step 3.
Our algorithm addresses several difficulties. First, even though the individual motion sequences are aligned in Figure 1, they are still sampled at different frequencies in the reference domain due to the different alignment parameters. This problem is accommodated in Step 1c by resampling after computing a functional estimate in continuous time in Step 1b. Second, missing data in the design matrix X means we cannot simply use the Singular Value Decomposition (SVD) of X^(1) to obtain the principal components. Instead we use an iterative approximation scheme [9] in which we alternate between an SVD step (Steps 4 through 7) and a data imputation step (Step 8), where each update is designed so as to decrease the matrix distance between X and its reconstruction, X^(4). Finally, we need to ensure that the mean estimates and the principal components produce a smooth motion when recombined into a new sequence. Specifically, the approximation of an individual cycle must be periodic in the sense that its first two derivatives match at the left and the right endpoints. This is achieved by translating the cycles into the Fourier domain and by truncating high-frequency coefficients (Step 4). Then we compute the SVD in the Fourier domain in Step 5, and we reconstruct the design matrix using a rank-q approximation in Steps 6 and 7, respectively. In Step 8 we use the reconstructed values as improved estimates for the missing data in X, and then we repeat Steps 4 through 7 using these improved estimates.
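The alternation between an SVD reconstruction and re-imputation of the missing entries (the core of Steps 5 through 9) can be illustrated in stripped-down form, omitting the Fourier smoothing step; names and details here are ours, following the general scheme of [9]:

```python
import numpy as np

def svd_impute(X, mask, q=2, tol=1e-6, max_iter=200):
    """Impute missing entries of X (mask=True where observed) by
    alternating a rank-q SVD reconstruction with re-imputation."""
    Xf = np.where(mask, X, 0.0)               # initial guess: zeros
    prev = np.inf
    for _ in range(max_iter):
        U, s, Vt = np.linalg.svd(Xf, full_matrices=False)
        Xr = (U[:, :q] * s[:q]) @ Vt[:q]      # rank-q reconstruction
        err = np.linalg.norm((Xf - Xr)[mask])  # ||X - X^(4)|| on observed data
        Xf = np.where(mask, X, Xr)            # keep observed, impute missing
        if abs(prev - err) < tol:
            break
        prev = err
    return Xf

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 8))   # true rank-2 matrix
mask = rng.random(A.shape) > 0.1                          # ~10% entries missing
X_hat = svd_impute(A, mask, q=2)
print(np.round(np.abs(X_hat - A).max(), 3))
```

On an exactly low-rank matrix with modest missingness, the iteration recovers the missing entries essentially exactly, which is why each update decreases the distance between X and its reconstruction.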
This iterative process is continued until the performance improvement falls below a given threshold. As its output, the algorithm generates the imputed design matrix, X, as well as its principal components. 3 Bayesian Tracking In tracking, our goal is to calculate the posterior probability distribution over 3D human poses given a sequence of image measurements, I_t. The high dimensionality of the body model makes this calculation computationally demanding. Hence, we use the learned model above to constrain the body motions to valid walking motions. Towards that end, we use the SVD of X^(2) to formulate a prior distribution for Bayesian tracking. Formally, let θ(t) ≡ (θ_a(t) | a = 1, ..., m) be a random vector of the relative joint angles at time t; i.e., the value of a motion sequence, z_i(t), at time t is interpreted as the i-th realization of θ(t). Then θ(t) can be written in the form

θ(t) = μ̃(ψ_t) + Σ_{k=1}^q c_{t,k} ṽ_k(ψ_t),    (2)

where ṽ_k is the Fourier inverse of the k-th column of V, rearranged as a T × m matrix; similarly, μ̃ denotes the rearranged mean vector μ. ṽ_k(ψ) is the ψ-th column of ṽ_k, and the c_{t,k} are time-varying coefficients. ψ_t ∈ {0, ..., T − 1} maps absolute time onto relative cycle positions or phases, and ρ_t denotes the speed of the motion, such that ψ_{t+1} = (ψ_t + ρ_t) mod T. Given representation (2), body positions are characterized entirely by the low-dimensional state vector φ_t = (c_t, ψ_t, ρ_t, τ_t^g, θ_t^g)', where c_t = (c_{t,1}, ..., c_{t,q}) and where τ_t^g and θ_t^g represent the global 3D translation and rotation of the torso, respectively. Hence the problem is to calculate the posterior distribution of φ_t given images up to time t. Due to the Markovian structure underlying φ_t, this posterior distribution is given recursively by:

p(φ_t | I_t) ∝ p(I_t | φ_t) ∫ p(φ_t | φ_{t-1}) p(φ_{t-1} | I_{t-1}) dφ_{t-1}.    (3)

Here p(I_t | φ_t) is the likelihood of observing the image I_t given the parameters and p(φ_{t-1} | I_{t-1}) is the posterior probability from the previous instant.
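Equation (2) and the phase update can be sketched as follows. The mean curve and eigencurves here are random toy arrays standing in for the learned quantities, and all names are ours:

```python
import numpy as np

T, m, q = 100, 19, 3                       # cycle length, joint angles, eigencurves
rng = np.random.default_rng(0)
mu = rng.normal(size=(T, m))               # toy mean cycle (T x m)
V = rng.normal(size=(q, T, m))             # q toy eigencurves, each T x m

def pose(c, psi):
    """Eq. (2): theta(t) = mu(psi_t) + sum_k c_{t,k} * v_k(psi_t)."""
    return mu[psi] + np.tensordot(c, V[:, psi, :], axes=1)

def advance_phase(psi, rho):
    """Phase update: psi_{t+1} = (psi_t + rho_t) mod T."""
    return int(psi + rho) % T

c = np.array([0.5, -0.2, 0.1])             # low-dimensional coefficients c_t
theta = pose(c, 0)                         # full 19-angle pose at phase 0
print(theta.shape)                         # -> (19,)
psi = advance_phase(98, 5)                 # wraps around the cycle
```

This makes concrete why the state vector φ_t is low-dimensional: q coefficients plus phase, speed, and the global torso transform determine all m joint angles.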
p(φ_t | φ_{t-1}) is a temporal prior probability distribution that encodes how the parameters φ_t change over time. The elements of the Bayesian approach are summarized below; for details the reader is referred to [7]. Generative Image Model. Let M(I_t, φ_t) be a function that takes image texture at time t and, given the model parameters, maps it onto the surfaces of the 3D model using the camera model. Similarly, let M^{-1}(·) take a 3D model and project its texture back into the image. Given these functions, the generative model of images at time t + 1 can be viewed as a mapping from the image at time t to images at time t + 1:

I_{t+1} = M^{-1}(M(I_t, φ_t), φ_{t+1}) + η,   η ~ G(0, σ),

where G(0, σ) denotes a Gaussian distribution with zero mean and standard deviation σ; σ depends on the viewing angle of the limb with respect to the camera and increases as the limb is viewed more obliquely (see [7] for details). Temporal Prior. The temporal prior, p(φ_t | φ_{t-1}), models how the parameters describing the body configuration are expected to vary over time. The individual components of φ_t, (c_t, ψ_t, ρ_t, τ_t^g, θ_t^g)', are assumed to follow a random walk with Gaussian increments. Likelihood Model. Given the generative model above we can compare the image at time t − 1 to the image I_t at time t. Specifically, we compute this likelihood term separately for each limb. To avoid numerical integration over image regions, we generate n_s pixel locations stochastically. Denoting the i-th sample for limb j as x_{j,i}, we obtain the following measure of discrepancy:

E = Σ_{i=1}^{n_s} (I_t(x_{j,i}) − M^{-1}(M(I_{t-1}, φ_{t-1}), φ_t)(x_{j,i}))².    (4)

As an approximate likelihood term we use

p(I_t | φ_t) = Π_j [ q(α_j) / (√(2π) σ(α_j)) exp(−E / (2σ(α_j)² n_s)) + (1 − q(α_j)) p_occluded ],    (5)

Figure 2: Tracking of person walking, 10,000 samples. Upper rows: frames 0, 10, 20, 30, 40, 50 with the projection of the expected model configuration overlaid. Lower row: expected 3D configuration in the same frames.
where p_occluded is a constant probability that a limb is occluded, α_j is the angle between the principal axis of limb j and the image plane of the camera, σ(α_j) is a function that increases with narrow viewing angles, and q(α_j) = cos(α_j) if limb j is non-occluded, or 0 if limb j is occluded. Particle Filter. As is typical for tracking problems, the posterior distribution may well be multi-modal due to the nonlinearity of the likelihood function. Hence, we use a particle filter for inference where the posterior is represented as a weighted set of state samples, φ_t^i, which are propagated in time. In detail, we use N_s ≈ 10^4 particles in our experiments. Details of this algorithm can be found in [3, 7]. 4 Experiment To illustrate the method we show an example of tracking a walking person in a cluttered scene in Figure 2. The 3D motion is recovered from a monocular sequence using only the motion between frames. To visualize the posterior distribution we display the projection of the 3D model corresponding to the expected value of the model parameters, Σ_{i=1}^{N_s} p_i φ_t^i, where p_i is the likelihood of sample φ_t^i. All parameters were initialized manually with a Gaussian prior at time t = 0. The learned model is able to generalize to the subject in the sequence, who was not part of the training set. 5 Conclusions We described an automated method for learning periodic human motions from training data using statistical methods for detecting the length of the periods in the data, segmenting it into cycles, and optimally aligning the cycles. We also presented a PCA method for building a statistical eigen-model of the motion curves that copes with missing data and enforces smoothness between the beginning and ending of a motion cycle. The learned eigen-curves are used as a prior probability distribution in a Bayesian tracking framework. Tracking in monocular image sequences was performed using a particle filtering technique and results were shown for a cluttered image sequence.
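The particle-filter recursion used for tracking in Section 3 can be sketched generically. This is the standard sampling-importance-resampling update on a toy one-dimensional state, not the authors' tracker; the dynamics and likelihood are placeholder functions standing in for the random-walk prior and the image likelihood (5):

```python
import numpy as np

def particle_filter_step(particles, weights, dynamics, likelihood, rng):
    """One SIR update: resample by weight, propagate with the temporal
    prior, reweight by the observation likelihood."""
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)        # resample proportional to weight
    particles = dynamics(particles[idx], rng)      # random-walk temporal prior
    weights = np.array([likelihood(p) for p in particles])
    weights /= weights.sum()                       # normalize
    return particles, weights

# Toy example: posterior concentrates near the true state 2.0
rng = np.random.default_rng(0)
particles = rng.normal(0.0, 2.0, size=500)
weights = np.full(500, 1.0 / 500)
dynamics = lambda p, rng: p + rng.normal(0.0, 0.1, size=p.shape)
likelihood = lambda p: np.exp(-0.5 * (p - 2.0) ** 2 / 0.25)
for _ in range(10):
    particles, weights = particle_filter_step(
        particles, weights, dynamics, likelihood, rng)
est = np.sum(weights * particles)                  # weighted posterior mean
print(round(est, 2))
```

The weighted mean at the end mirrors the paper's visualization of the expected model configuration, Σ_i p_i φ_t^i.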
Acknowledgements. We thank M. Gleicher for generously providing the 3D motion-capture data and M. Kamvysselis and D. Fleet for many discussions on human motion and Bayesian estimation. Portions of this work were supported by the Xerox Corporation and we gratefully acknowledge their support. References [1] A. Bobick and J. Davis. An appearance-based representation of action. ICPR, 1996. [2] T-J. Cham and J. Rehg. A multiple hypothesis approach to figure tracking. CVPR, pp. 239- 245, 1999. [3] M. Isard and A. Blake. Contour tracking by stochastic propagation of conditional density. ECCV, pp. 343-356, 1996. [4] M. E. Leventon and W. T. Freeman. Bayesian estimation of 3-d human motion from an image sequence. Tech. Report TR-98-06, Mitsubishi Electric Research Lab, 1998. [5] D. Ormoneit, H. Sidenbladh, M. Black, T. Hastie, Learning and tracking human motion using functional analysis, submitted: IEEE Workshop on Human Modeling, Analysis and Synthesis, 2000. [6] S.M. Seitz and C.R. Dyer. Affine invariant detection of periodic motion. CVPR, pp. 970-975, 1994. [7] H. Sidenbladh, M. J. Black, and D. J. Fleet. Stochastic tracking of 3D human figures using 2D image motion. to appear, ECCV-2000, Dublin Ireland. [8] Y. Yacoob and M. Black. Parameterized modeling and recognition of activities in temporal surfaces. CVIU, 73(2):232-247, 1999. [9] G. Sherlock, M. Eisen, O. Alter, D. Botstein, P. Brown, T. Hastie, and R. Tibshirani. "Imputing missing data for gene expression arrays," 2000, Working Paper, Department of Statistics, Stanford University.
2000
94
1,899
The Use of MDL to Select among Computational Models of Cognition In J. Myung, Mark A. Pitt & Shaobo Zhang, Department of Psychology, Ohio State University, Columbus, OH 43210, {myung.1, pitt.2}@osu.edu; Vijay Balasubramanian, David Rittenhouse Laboratories, University of Pennsylvania, Philadelphia, PA 19103, vijay@endiv.hep.upenn.edu. Abstract How should we decide among competing explanations of a cognitive process given limited observations? The problem of model selection is at the heart of progress in cognitive science. In this paper, Minimum Description Length (MDL) is introduced as a method for selecting among computational models of cognition. We also show that differential geometry provides an intuitive understanding of what drives model selection in MDL. Finally, adequacy of MDL is demonstrated in two areas of cognitive modeling. 1 Model Selection and Model Complexity The development and testing of computational models of cognitive processing are a central focus in cognitive science. A model embodies a solution to a problem whose adequacy is evaluated by its ability to mimic behavior by capturing the regularities underlying observed data. This enterprise of model selection is challenging because of the competing goals that must be satisfied. Traditionally, computational models of cognition have been compared using one of many goodness-of-fit measures. However, use of such a measure can result in the choice of a model that over-fits the data, one that captures idiosyncrasies in the particular data set (i.e., noise) over and above the underlying regularities of interest. Such models are considered complex, in that the inherent flexibility in the model enables it to fit diverse patterns of data. As a group, they can be characterized as having many parameters that are combined in a highly nonlinear fashion in the model equation. They do not assume a single structure in the data.
Rather, the model contains multiple structures, each obtained by finely tuning the parameter values of the model, and thus can fit a wide range of data patterns. In contrast, simple models, frequently with few parameters, assume a specific structure in the data, which will manifest itself as a narrow range of similar data patterns. Only when one of these patterns occurs will the model fit the data well. The problem of over-fitting data due to model complexity suggests that the goal of model selection should instead be to select the model that generalizes best to all data samples that arise from the same underlying regularity, thus capturing only the regularity, not the noise. To achieve this goal, the selection method must be sensitive to the complexity of a model. There are at least two independent dimensions of model complexity. They are the number of free parameters of a model and its functional form, which refers to the way the parameters are combined in the model equation. For instance, it seems unlikely that two one-parameter models, y = θx and y = x^θ, are equally complex in their ability to fit data. The two dimensions of model complexity (number of parameters and functional form) and their interplay can improve a model's fit to the data, without necessarily improving generalizability. The trademark of a good model selection procedure, then, is its ability to satisfy two opposing goals. A model must be sufficiently complex to describe the data sample accurately, but without over-fitting the data and thus losing generalizability. To achieve this end, we need a theoretically well-justified measure of model complexity that takes into account the number of parameters and the functional form of a model. In this paper, we introduce Minimum Description Length (MDL) as an appropriate method of selecting among mathematical models of cognition.
We also show that MDL has an elegant geometric interpretation that provides a clear, intuitive understanding of the meaning of complexity in MDL. Finally, application examples of MDL are presented in two areas of cognitive modeling. 1.1 Minimum Description Length The central thesis of model selection is the estimation of a model's generalizability. One approach to assessing generalizability is the Minimum Description Length (MDL) principle [1]. It provides a theoretically well-grounded measure of complexity that is sensitive to both dimensions of complexity and also lends itself to intuitive, geometric interpretations. MDL was developed within algorithmic coding theory to choose the model that permits the greatest compression of data. A model family f with parameters θ assigns the likelihood f(y|θ) to a given set of observed data y. The full form of the MDL measure for such a model family is given below:

MDL = −ln f(y|θ̂) + (k/2) ln(N/2π) + ln ∫ dθ √(det I(θ)),

where θ̂ is the parameter that maximizes the likelihood, k is the number of parameters in the model, N is the sample size, and I(θ) is the Fisher information matrix. MDL is the length in bits of the shortest possible code that describes the data with the help of a model. In the context of cognitive modeling, the model that minimizes MDL uncovers the greatest amount of regularity (i.e., knowledge) underlying the data and therefore should be selected. The first, maximized log-likelihood term is the lack-of-fit measure, and the second and third terms constitute the intrinsic complexity of the model. In particular, the third term captures the effects of complexity due to functional form, reflected through I(θ). We will call the latter two terms together the geometric complexity of the model, for reasons that will become clear in the remainder of this paper.
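As a concrete illustration of the criterion (our example, not from the paper): for a one-parameter Bernoulli model, the Fisher information is I(θ) = 1/(θ(1 − θ)) and the complexity integral evaluates in closed form to ∫₀¹ √(I(θ)) dθ = π, so the full MDL score is easy to compute:

```python
import numpy as np

def mdl_binomial(n_success, N):
    """MDL score for a one-parameter Bernoulli model:
    -ln f(y|theta_hat) + (k/2) ln(N/2pi) + ln(integral of sqrt(det I))."""
    theta_hat = n_success / N          # maximum-likelihood estimate
    k = 1                              # one free parameter
    log_lik = (n_success * np.log(theta_hat)
               + (N - n_success) * np.log(1 - theta_hat))
    # integral of 1/sqrt(theta(1-theta)) over [0,1] equals pi
    complexity = (k / 2) * np.log(N / (2 * np.pi)) + np.log(np.pi)
    return -log_lik + complexity

print(round(mdl_binomial(60, 100), 2))  # -> 69.83
```

The lack-of-fit term dominates here; the geometric complexity contributes only about 2.5 nats for N = 100, and grows logarithmically with sample size.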
MDL arises as a finite series of terms in an asymptotic expansion of the Bayesian posterior probability of a model given the data, for a special form of the parameter prior density [2]. Hence, in essence, minimization of MDL is equivalent to maximization of the Bayesian posterior probability. In this paper we present a geometric interpretation of MDL, as well as Bayesian model selection [3], that provides an elegant and intuitive framework for understanding model complexity, a central concept in model selection. 2 Differential Geometric Interpretation of MDL From a geometric perspective, a parametric model family of probability distributions forms a Riemannian manifold embedded in the space of all probability distributions [4]. Every distribution is a point in this space, and the collection of points created by varying the parameters of the model gives rise to a hyper-surface in which "similar" distributions are mapped to "nearby" points. The infinitesimal distance between points separated by the infinitesimal parameter differences dθ_i is given by

ds² = Σ_{i,j=1}^k g_ij(θ) dθ_i dθ_j,

where g_ij(θ) is the Riemannian metric tensor. The Fisher information, I_ij(θ), is the natural metric on a manifold of distributions in the context of statistical inference [4]. We argue that the MDL measure of model fitness has an attractive interpretation in such a geometric context. The first term in MDL estimates the accuracy of the model, since the likelihood f(y|θ̂) measures the ability of the model to fit the observed data. The second and third terms are supposed to penalize model complexity; we will show that they have interesting geometric interpretations. Given the metric g_ij = I_ij on the space of parameters, the infinitesimal volume element on the parameter manifold is

dV = dθ √(det I(θ)) ≡ ∏_{i=1}^k dθ_i √(det I(θ)).
The Riemannian volume of the parameter manifold is obtained by integrating dV over the space of parameters:

V_M = ∫ dV = ∫ dθ √(det I(θ)).

In other words, the third term in MDL penalizes models that occupy a large volume in the space of distributions. In fact, the volume measure V_M is related to the number of "distinguishable" probability distributions indexed by the model M.¹ Because of the way the model family is embedded in the space of distributions, two different parameter values can index very similar distributions. If complexity is related to volumes occupied by model manifolds, the measure of volume should count only different, or distinguishable, distributions, and not the artificial coordinate volume. It is shown in [2,5] that the volume V_M achieves this goal.² While the third term in MDL measures the total volume of distributions a model can describe, the second term relates to the number of model distributions that lie close to the truth. To see this, taking a Bayesian perspective on model selection is helpful. Using Bayes' rule, the probability that the truth lies in the family f given the observed data y can be written as:

Pr(f|y) = A(f, y) ∫ dθ w(θ) Pr(y|θ).

Here w(θ) is the prior probability of the parameter θ, and A(f, y) = Pr(f)/Pr(y) is the ratio of the prior probabilities of the family f and data y. Bayesian methods assume that the latter are the same for all models under consideration and analyze the so-called Bayesian posterior P_f = ∫ dθ w(θ) Pr(y|θ). Lacking prior knowledge, w should be chosen to weight all distinguishable distributions in the family equally. Hence, w(θ) = 1/V_M. For large sample sizes, the likelihood function f(y|θ) localizes under general conditions to a multivariate (¹Roughly speaking, two probability distributions are considered indistinguishable if one is mistaken for the other even in the presence of an infinite amount of data.
Gaussian centered at the maximum likelihood parameter θ̂ (see [3,4] and citations therein). In this limit, the integral for P_f can be carried out explicitly. Performing the integral and taking the logarithm gives the result

-ln P_f = -ln f(y|θ̂) + ln(V_M / C_M) + O(1/N), where C_M = (2π/N)^{k/2} h(θ̂)

and h(θ̂) is a data-dependent factor that goes to 1 for large N when the truth lies within f (see [3,4] for details). C_M is essentially the volume of an ellipsoidal region around the Gaussian peak at f(y|θ̂) where the integrand of the Bayesian posterior makes a substantial contribution. In effect, C_M measures the number of distinguishable distributions within f that lie close to the truth. Using the expressions for C_M and V_M, the MDL selection criterion can be written as

MDL = -ln f(y|θ̂) + ln(V_M / C_M) + terms subleading in N.

(The subleading terms include the contribution of h(θ̂); see [3,4] regarding its role in Bayesian inference.) The geometric meaning of the complexity penalty in MDL now becomes clear: models that occupy a relatively large volume distant from the truth are penalized, while models that contain a relatively large fraction of distributions lying close to the truth are preferred. We therefore refer to the last two terms in MDL as the geometric complexity. It is also illuminating to collect terms in MDL as

MDL = -ln [ f(y|θ̂) / (V_M / C_M) ] = -ln("normalized maximized likelihood").

Written this way, MDL selects the model that gives the highest value of the maximized likelihood, normalized by the relative ratio of distinguishable distributions V_M/C_M. From this perspective, a better model is simply one with many distinguishable distributions close to the truth, but few distinguishable distributions overall.

¹ A careful definition of distinguishability involves the use of the Kullback-Leibler distance between two probability distributions; for further details, see [3,4].
² Note that the parameters of the model are always assumed to be cut off in a manner that ensures V_M is finite.
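The asymptotic relation -ln P_f ≈ -ln f(y|θ̂) + ln(V_M/C_M) can be checked numerically. The following is a hedged sketch of ours (not the authors' code): for m successes in N Bernoulli trials, with the prior uniform over distinguishable distributions (density √I(θ)/V_M, V_M = π), the posterior integral has an exact Beta-function form, and we take h(θ̂) = 1 in the asymptotic expression.

```python
import math

# Hedged sketch (not from the paper): for data y with m successes in N
# Bernoulli trials, compare the exact -ln P_f (prior density
# sqrt(I(theta))/V_M with I(theta) = 1/(theta*(1-theta)), V_M = pi)
# against the asymptotic form -ln f(y|theta_hat) + ln(V_M/C_M),
# taking h(theta_hat) = 1.

def neg_log_posterior_exact(m, N):
    # P_f = (1/pi) * integral of theta^(m-1/2) * (1-theta)^(N-m-1/2) d(theta)
    #     = B(m + 1/2, N - m + 1/2) / pi, evaluated via log-gamma functions
    log_beta = (math.lgamma(m + 0.5) + math.lgamma(N - m + 0.5)
                - math.lgamma(N + 1.0))
    return -(log_beta - math.log(math.pi))

def mdl_approx(m, N):
    # -ln f(y|theta_hat) + (k/2) ln(N/(2*pi)) + ln V_M, with k = 1, V_M = pi
    theta_hat = m / N
    neg_log_lik = -(m * math.log(theta_hat) + (N - m) * math.log(1 - theta_hat))
    return neg_log_lik + 0.5 * math.log(N / (2 * math.pi)) + math.log(math.pi)

# The two quantities converge as N grows, as the asymptotic expansion predicts
for N in (20, 200, 2000):
    m = int(0.7 * N)
    print(N, round(neg_log_posterior_exact(m, N), 4), round(mdl_approx(m, N), 4))
```

Already at N = 20 the two quantities differ only in the second decimal place, and the gap shrinks roughly as 1/N.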
3 Application Examples

Geometric complexity and MDL constitute a powerful pair of model evaluation tools. When used together in model testing, a deeper understanding of the relationship between models can be gained. The first measure enables one to assess the relative complexities of the set of models under consideration. The second builds on the first by suggesting which model is preferable given the data in hand. The following simulations demonstrate the application of these methods in two areas of cognitive modeling: information integration and categorization. In each example, two competing models were fitted to artificial data sets generated by each model. Of interest is the ability of a selection method to recover the model that generated the data. MDL is compared with two other selection methods, both of which consider only the number of parameters: the Akaike Information Criterion (AIC; [6]) and the Bayesian Information Criterion (BIC; [7]), defined as

AIC = -2 ln f(y|θ̂) + 2k;  BIC = -2 ln f(y|θ̂) + k ln N.

3.1 Information Integration

In a typical information integration experiment, a range of stimuli are generated from a factorial manipulation of two or more stimulus dimensions (e.g., visual and auditory) and then presented to participants for categorization as one of two or more possible response alternatives. Data are scored as the proportion of responses in one category across the various combinations of stimulus dimensions. For this comparison, we consider two models of information integration, the Fuzzy Logical Model of Perception (FLMP; [8]) and the Linear Integration Model (LIM; [9]). Each assumes that the response probability p_ij of one category, say A, upon presentation of a stimulus with the specific i and j feature dimensions in a two-factor information integration experiment takes the following form:

FLMP: p_ij = θ_i λ_j / (θ_i λ_j + (1-θ_i)(1-λ_j));  LIM: p_ij = (θ_i + λ_j)/2,

where θ_i and λ_j (i=1,..,q1; j=1,..
,q2; 0 < θ_i, λ_j < 1) are parameters representing the corresponding feature dimensions.

The simulation results are shown in Table 1. When the data were generated by FLMP, FLMP was recovered 100% of the time across all selection methods and both sample sizes, except for MDL when the sample size was 20; in this case, MDL did not perform quite as well as the other selection methods. When the data were generated by LIM, AIC and BIC fared much more poorly, whereas MDL recovered the correct model (LIM) across both sample sizes. Specifically, under AIC or BIC, FLMP was selected over LIM about half of the time for N = 20 (51% vs. 49%), though such errors were reduced for N = 150 (17% vs. 83%).

Table 1: Model Recovery Rates for Two Information Integration Models

Sample Size  Selection Method  Model Fitted   Data from FLMP  Data from LIM
N = 20       AIC/BIC           FLMP           100%            51%
                               LIM            0%              49%
             MDL               FLMP           89%             0%
                               LIM            11%             100%
N = 150      AIC/BIC           FLMP           100%            17%
                               LIM            0%              83%
             MDL               FLMP           100%            0%
                               LIM            0%              100%

That FLMP is selected over LIM when a method such as AIC is used, even when the data were generated by LIM, suggests that FLMP is more complex than LIM. This observation was confirmed when the geometric complexity of each model was calculated. The difference in geometric complexity between FLMP and LIM was 8.74, meaning that for every distinguishable distribution for which LIM can account, FLMP can describe about e^8.74 ≈ 6248 distinguishable distributions. Obviously, this difference in complexity between the two models must be due to functional form, because they have the same number of parameters.

3.2 Categorization

Two models of categorization were considered in the present demonstration: the generalized context model (GCM; [10]) and the prototype model (PRT; [11]).
Each model assumes that categorization responses follow a multinomial probability distribution with p_ij (the probability of a category C_j response given stimulus X_i) given by

GCM: p_ij = Σ_{k∈C_j} s_ik / Σ_K Σ_{k∈C_K} s_ik;  PRT: p_ij = s_iJ / Σ_K s_iK.

In the equation, s_ik is a similarity measure between the multidimensional stimuli X_i and X_k, and s_iJ is a similarity measure between stimulus X_i and the prototypic stimulus X_J of category C_J. Similarity is measured using the Minkowski distance metric with metric parameter r. The two models were fitted to data sets generated by each model using the six-dimensional scaling solution from Experiment 1 of [12] under the Euclidean distance metric (r = 2).

As shown in Table 2, under AIC or BIC, a relatively small bias toward choosing GCM was found using data generated from PRT when N = 20. When MDL was used to choose between the two models, there was an improvement over AIC in correcting this bias. In the larger sample size condition, there was no difference in model recovery rate between AIC and MDL. This outcome contrasts with that of the preceding example, in which MDL was clearly superior to the other selection methods when the sample size was small.

Table 2: Model Recovery Rates for Two Categorization Models

Sample Size  Selection Method  Model Fitted   Data from GCM  Data from PRT
N = 20       AIC/BIC           GCM            98%            15%
                               PRT            2%             85%
             MDL               GCM            96%            7%
                               PRT            4%             93%
N = 150      AIC/BIC           GCM            99%            1%
                               PRT            1%             99%
             MDL               GCM            99%            1%
                               PRT            1%             99%

On the face of it, these findings would suggest that MDL is not much better than the other selection methods. However, the only circumstances under which such an outcome is predicted by MDL are when the functional forms of the two models are similar (recall that the models have the same number of parameters), thus minimizing the differential contribution of functional form to the complexity term.
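The two response rules can be sketched in code (a hedged illustration of ours: the exponential-decay similarity function s = exp(-c·d), the sensitivity parameter c, and the toy two-dimensional stimuli are assumptions for demonstration, not the paper's fitted values or stimulus coordinates).

```python
import math

# Hedged sketch of the GCM and PRT response rules above. The exponential
# similarity s = exp(-c * d), the sensitivity c, and the toy 2-D stimuli
# are illustrative assumptions, not values from the paper.

def similarity(x, y, r=2.0, c=1.0):
    # s_ik = exp(-c * d_ik), with d_ik a Minkowski distance of order r
    d = sum(abs(a - b) ** r for a, b in zip(x, y)) ** (1.0 / r)
    return math.exp(-c * d)

def gcm_prob(stim, exemplars, j, r=2.0, c=1.0):
    # GCM: summed similarity to every exemplar of category j, normalized
    # over the summed similarities to all categories
    sums = {K: sum(similarity(stim, e, r, c) for e in ex)
            for K, ex in exemplars.items()}
    return sums[j] / sum(sums.values())

def prt_prob(stim, prototypes, j, r=2.0, c=1.0):
    # PRT: similarity to category j's prototype, normalized over prototypes
    sims = {K: similarity(stim, p, r, c) for K, p in prototypes.items()}
    return sims[j] / sum(sims.values())

# Toy example: two categories of 2-D stimuli; prototypes as exemplar means
exemplars = {"A": [(0.0, 0.0), (0.2, 0.1)], "B": [(1.0, 1.0), (0.9, 0.8)]}
prototypes = {K: tuple(sum(v) / len(ex) for v in zip(*ex))
              for K, ex in exemplars.items()}
x = (0.1, 0.1)
print(gcm_prob(x, exemplars, "A"), prt_prob(x, prototypes, "A"))
```

Both rules assign the test stimulus to category A with probability above one half here; they differ in whether similarity is pooled over all stored exemplars (GCM) or computed against a single central tendency (PRT), which is exactly the functional-form difference the geometric complexity measure quantifies.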
Calculation of the geometric complexity of each model confirmed this suspicion. GCM is indeed only slightly more complex than PRT, the difference being 0.60: GCM can describe about two distinguishable distributions (e^0.60 ≈ 1.8) for every distribution PRT can describe. Together, these simulation results demonstrate the usefulness of MDL and the geometric complexity measure in testing models of cognition. MDL's sensitivity to functional form was clearly demonstrated in its superior model recovery rate, especially when the complexities of the models differed by a nontrivial amount.

4 Conclusion

Model selection in cognitive science can proceed far more confidently with a clear understanding of why one model should be preferred over another. A geometric interpretation of MDL helps to achieve this goal. The work carried out thus far indicates that MDL, along with the geometric complexity measure, holds considerable promise for evaluating computational models of cognition. MDL chooses the correct model most of the time, and geometric complexity provides a measure of how different two models are in their capacity or power. Future work is directed toward extending this approach to other classes of models, such as connectionist networks.

Acknowledgment and Authors' Note

M.A.P. and I.J.M. were supported by NIMH Grant MH57472. V.B. was supported by the Society of Fellows and the Milton Fund of Harvard University, by NSF grant NSF-PHY-9802709, and by DOE grant DOE-FG02-95ER40893. The present work is based in part on [5] and [13].

References

[1] Rissanen, J. (1996) Fisher information and stochastic complexity. IEEE Transactions on Information Theory, 42, 40-47.
[2] Balasubramanian, V. (1997) Statistical inference, Occam's razor and statistical mechanics on the space of probability distributions. Neural Computation, 9, 349-368.
[3] MacKay, D. J. C. (1992) Bayesian interpolation. Neural Computation, 4, 415-447.
[4] Amari, S. I. (1985) Differential Geometrical Methods in Statistics.
Springer-Verlag.
[5] Myung, I. J., Balasubramanian, V., & Pitt, M. A. (2000) Counting probability distributions: Differential geometry and model selection. Proceedings of the National Academy of Sciences USA, 97, 11170-11175.
[6] Akaike, H. (1973) Information theory and an extension of the maximum likelihood principle. In B. N. Petrov and F. Csaki (eds.), Second International Symposium on Information Theory, pp. 267-281. Akademiai Kiado, Budapest.
[7] Schwarz, G. (1978) Estimating the dimension of a model. The Annals of Statistics, 6, 461-464.
[8] Oden, G. C., & Massaro, D. W. (1978) Integration of featural information in speech perception. Psychological Review, 85, 172-191.
[9] Anderson, N. H. (1981) Foundations of Information Integration Theory. Academic Press.
[10] Nosofsky, R. M. (1986) Attention, similarity and the identification-categorization relationship. Journal of Experimental Psychology: General, 115, 39-57.
[11] Reed, S. K. (1972) Pattern recognition and categorization. Cognitive Psychology, 3, 382-407.
[12] Shin, H. J., & Nosofsky, R. M. (1992) Similarity-scaling studies of dot-pattern classification and recognition. Journal of Experimental Psychology: General, 121, 278-304.
[13] Pitt, M. A., Myung, I. J., & Zhang, S. (2000) Toward a method of selecting among computational models of cognition. Submitted for publication.