Bayesian Map Learning in Dynamic Environments

Kevin P. Murphy
Computer Science Division
University of California
Berkeley, CA 94720-1776
murphyk@cs.berkeley.edu

Abstract

We consider the problem of learning a grid-based map using a robot with noisy sensors and actuators. We compare two approaches: online EM, where the map is treated as a fixed parameter, and Bayesian inference, where the map is a (matrix-valued) random variable. We show that even on a very simple example, online EM can get stuck in local minima, which causes the robot to get "lost" and the resulting map to be useless. By contrast, the Bayesian approach, by maintaining multiple hypotheses, is much more robust. We then introduce a method for approximating the Bayesian solution, called Rao-Blackwellised particle filtering. We show that this approximation, when coupled with an active learning strategy, is fast but accurate.

1 Introduction

The problem of getting mobile robots to autonomously learn maps of their environment has been widely studied (see e.g., [9] for a collection of recent papers). The basic difficulty is that the robot must know exactly where it is (a problem called localization), so that it can update the right part of the map. However, to know where it is, the robot must already have a map: relying on dead-reckoning alone (i.e., integrating the motor commands) is unreliable because of noise in the actuators (slippage and drift). One obvious solution is to use EM, where we alternate between estimating the location given the map (the E step), and estimating the map given the location (the M step). Indeed, this approach has been successfully used by several groups [8, 11, 12]. However, in all of these works, the trajectory of the robot was specified by hand, and the map was learned off-line. For fully autonomous operation, and to cope with dynamic environments, the map must be learned online. We consider two approaches to online learning: online EM, and Bayesian inference.
Figure 1: (a) The POMDP represented as a graphical model. Lt is the location, Mt(i) is the label of the i'th grid cell, At is the action, and Zt is the observation. Dotted circles denote variables that EM treats as parameters. (b) A one-dimensional grid with binary labels (white = 0, black = 1). (c) A two-dimensional grid, with four labels (closed doors, open doors, walls, and free space).

In the Bayesian approach, we treat the map as a random variable. In Section 3, we show that the Bayesian approach can lead to much better results than online EM; unfortunately, it is computationally intractable, so in Section 4, we discuss an approximation based on Rao-Blackwellised particle filtering.

2 The model

We now precisely define the model that we will use in this paper; it is similar to, but much simpler than, the occupancy grid model in [12]. The map is defined to be a grid, where each cell has a label which represents what the robot would see at that point. More formally, the map at time t is a vector of discrete random variables, Mt(i) ∈ {1, ..., No}, where 1 ≤ i ≤ NL. Of course, the map is not observed directly, and nor is the robot's location, Lt ∈ {1, ..., NL}. What is observed is Zt ∈ {1, ..., No}, the label of the cell at the robot's current location, and At ∈ {1, ..., NA}, the action chosen by the robot just before time t. The conditional independence assumptions we are making are illustrated in Figure 1(a).

We start by considering the very simple one-dimensional grid shown in Figure 1(b), where there are just two actions, move right (→) and move left (←), and just two labels, off (0) and on (1). This is sufficiently small that we can perform exact Bayesian inference. Later, we will generalize to two dimensions. The prior for the location is a delta function with all its mass on the first (left-most) cell, independent of A1. The transition model for the location is as follows:

    Pr(Lt = j | Lt-1 = i, At = →) =  pa       if j = i + 1, i < NL
                                     1 - pa   if j = i, i < NL
                                     1        if j = i = NL
                                     0        otherwise

where pa is the probability of a successful action, i.e., 1 - pa is the probability that the robot's wheels slip. There is an analogous equation for the case when At = ←. Note that it is not possible to pass through the rightmost cell; the robot can use this information to help localize itself.

The prior for the map is a product of the priors for each cell, which are uniform. (We could model correlation between neighboring cells using a Markov Random Field, although this is computationally expensive.) The transition model for the map is a product of the transition models for each cell, which are defined as follows: the probability that a 0 becomes a 1 or vice versa is pc (probability of change), and hence the probability that the cell label remains the same is 1 - pc. Finally, the observation model is

    Pr(Zt = k | Mt = (m1, ..., mNL), Lt = i) =  po       if mi = k
                                                1 - po   otherwise

where po is the probability of a successful observation, i.e., 1 - po is the probability of a classification error. Another way of writing this, that will be useful later, is to introduce the dummy deterministic variable Zt*, which has the following distribution: Pr(Zt* = k | Mt = (m1, ..., mNL), Lt = i) = δ(k, mi), where δ(a, b) = 1 if a = b and 0 otherwise. Thus Zt* acts just like a multiplexer, selecting out a component of Mt as determined by the "gate" Lt. The output of the multiplexer is then passed through a noisy channel, which flips bits with probability 1 - po, to produce Zt.

3 Bayesian learning compared to EM

For simplicity, we assume that the parameters po, pa and pc are all known. (In this section, we use po = 0.9, pa = 0.8 and pc = 0, so the world is somewhat "slippery", but static in appearance.)
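The 1D grid model above is simple enough to sketch in code. The following illustration (my own, not the paper's code; the function names are invented) builds the move-right transition matrix and the per-cell observation likelihoods from the parameters pa and po defined above:

```python
import numpy as np

def right_transition_matrix(n_cells, p_a):
    """Pr(L_t = j | L_{t-1} = i, A_t = right) for the 1D grid.
    The robot advances with probability p_a, slips (stays) with 1 - p_a,
    and cannot pass through the rightmost cell."""
    T = np.zeros((n_cells, n_cells))
    for i in range(n_cells - 1):
        T[i, i + 1] = p_a
        T[i, i] = 1 - p_a
    T[n_cells - 1, n_cells - 1] = 1.0   # boundary: stays put with certainty
    return T

def obs_likelihood(z, m, p_o):
    """Pr(Z_t = z | map labels m, L_t = i) for every cell i:
    p_o if the cell's label matches the reading, 1 - p_o otherwise."""
    m = np.asarray(m)
    return np.where(m == z, p_o, 1 - p_o)
```

With po = 0.9 and pa = 0.8 as above, `right_transition_matrix(8, 0.8)` gives the motion model for the eight-cell corridor of Figure 1(b), and `obs_likelihood` gives the vector of observation likelihoods used by any of the inference schemes below.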
The state estimation problem is to compute the belief state Pr(Lt, Mt | y1:t), where yt = (zt, at) is the evidence at time t; this is equivalent to performing online inference in the graphical model shown in Figure 1(a). Unfortunately, even though we have assumed that the components of Mt are a priori independent, they become correlated by virtue of sharing a common child, Zt. That is, since the true location of the robot is unknown, all of the cells are possible causes of the observation, and they "compete" to "explain" the data. Hence all of the hidden variables become coupled, and the belief state has size O(NL · 2^NL).

If the world is static (i.e., pc = 0), we can treat M as a fixed, but unknown, parameter; this can then be combined with the noisy sensor model to define an HMM with the following observation matrix:

    B(i, k) ≜ Pr(Zt = k | Lt = i; M) = Σj Pr(Zt = k | Zt* = j) δ(M(i), j)

We can then learn B using EM, as in [8, 11, 12]. (We assume for now that the HMM transition matrix is independent of the map, and encodes the known topology of the grid, i.e., the robot can move to any neighboring cell, no matter what its label is. We will lift this restriction in the 2D example.)

We can formulate an online version of EM as follows. We use fixed-lag smoothing with a sliding window of length W, and compute the expected sufficient statistics (ESS) for the observation matrix within this window as follows:

    Ot(i, k) = Σ_{τ = t-W : zτ = k} Lτ|t(i),  where Lτ|t(i) = Pr(Lτ = i | y1:t).

We can compute L using the forwards-backwards algorithm, using L_{t-W-1|t-1} as the prior. (The initial condition is L = π, where π is the (known) prior for L0.) Thus the cost per time step is O(2W NL²). In the M step, we normalize each row of Ot + d × Ot-1, where 0 < d < 1 is a decay constant, to get the new estimate of B. We need to downweight the previous ESS since they were computed using out-of-date parameters; in addition, exponential forgetting allows us to handle dynamic environments.
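The E step just described can be sketched as follows. This is an illustrative implementation under the stated setup (scaled forwards-backwards inside the window; the function name and interface are my own), returning the ESS O(i, k) for the observation matrix:

```python
import numpy as np

def window_ess(prior, T, B, z_window):
    """Fixed-lag smoothing sketch: run forwards-backwards over the last W
    observations, starting from the smoothed prior L_{t-W-1|t-1}, and return
    the expected sufficient statistics O(i, k) for the observation matrix B."""
    W = len(z_window)
    NL, NO = B.shape
    alpha = np.zeros((W, NL))
    beta = np.ones((W, NL))
    a = prior * B[:, z_window[0]]
    alpha[0] = a / a.sum()
    for tau in range(1, W):                      # forwards pass (scaled)
        a = (alpha[tau - 1] @ T) * B[:, z_window[tau]]
        alpha[tau] = a / a.sum()
    for tau in range(W - 2, -1, -1):             # backwards pass
        beta[tau] = T @ (B[:, z_window[tau + 1]] * beta[tau + 1])
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)    # Pr(L_tau = i | window evidence)
    O = np.zeros((NL, NO))
    for tau, z in enumerate(z_window):           # O(i, k) sums gamma over tau with z_tau = k
        O[:, z] += gamma[tau]
    return O
```

The M step would then normalize each row of `O + d * O_prev` to obtain the new estimate of B, with d the decay constant from the text.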
[1] discusses some variations on this algorithm.

Figure 2: (a) The full joint posterior P(Mt | y1:t). 0 and 255, on the axis into the page, represent the maps where every cell is off and every cell is on, respectively; the mode at t = 16 is for map 171, which corresponds to the correct pattern 01010101. (b-d) Estimated map. Light cells are more likely to contain 0s, so the correct pattern should have light bars in the odd rows. (b) The marginals of the exact joint. (c) Online EM. (d) Offline EM.

As the window length increases, past locations are allowed to look at more and more future data, and hence their estimates become more accurate; however, the space and time requirements increase. Nevertheless, there are occasions when even the maximum window size (i.e., looking all the way back to τ = 0) will perform poorly, because of the greedy hill-climbing nature of EM. For a simple example of this, consider the environment shown in Figure 1(b). Suppose the robot starts in cell 1, keeps going right until it comes to the end of the "corridor", and then heads back "home". Suppose further that there is a single slippage error at t = 4, so the actual path and observation sequence of the robot is as follows:

    t:   1  2  3  4  5  6  7  8  9  10  11  12  13  14  15  16
    Lt:  1  2  3  4  4  5  6  7  8   7   6   5   4   3   2   1
    Zt:  0  1  0  1  1  0  1  0  1   0   1   0   1   0   1   0
    At:  →  →  →  →  →  →  →  →  →   ←   ←   ←   ←   ←   ←   ←

To study the effect of this sequence, we computed Pr(Mt, Lt | y1:t) by applying the junction tree algorithm to the graphical model in Figure 1(a). We then marginalized out Lt to compute the posterior P(Mt): see Figure 2(a). At t = 1, there are 2^7 modes, corresponding to all possible bit patterns on the unobserved cells. At each time step, the robot thinks it is moving one step to the right. Hence at t = 8, the robot thinks it is in cell 8, and observes 0.
When it tries to move right, it knows it will remain in cell 8 (since the robot knows where the boundaries are). Hence at t = 9, it is almost 70% confident that it is in cell 8. At t = 9, it observes a 1, which contradicts its previous observation of 0. There are two possible explanations: this is a sensor error, or there was a motor error. Which of these is more likely depends on the relative values of the sensor noise, po, and the system noise, pa. In our experiments, we found that the motor error hypothesis is much more likely; hence the mode of the posterior jumps from the wrong map (in which M(5) = 1) to the right map (in which M(5) = 0). Furthermore, as the robot returns to "familiar territory", it is able to better localize itself (see Figure 3(a)), and continues to learn the map even for far-away cells, because they are all correlated (in Figure 2(b), the entry for cell 8 becomes sharper even as the robot returns to cell 1).

We now compare the Bayesian solution with EM. Online EM with no smoothing was not able to learn the correct map. Adding smoothing with the maximum window size of W = t did not improve matters: it is still unable to escape the local minimum in which M(5) = 1, as shown in Figure 2(c). (We tried various values of the decay rate d, from 0.1 to 0.9, and found that it made little difference.) With the wrong map, the robot "gets lost" on the return journey: see Figure 3(c).

Figure 3: Estimated location. Light cells are more likely to contain the robot. (a) Optimal Bayes solution which marginalizes out the map. (b) Dead-reckoning solution which ignores the map. Notice how "blurry" it is. (c) Online EM solution using fixed-lag smoothing with a maximal window length.
Offline EM, on the other hand, does very well, as shown in Figure 2(d); although the initial estimate of location (see Figure 3(b)) is rather diffuse, as it updates the map it can use the benefit of hindsight to figure out where it must have been.

4 Rao-Blackwellised particle filtering

Although the Bayesian solution exhibits some desirable properties, its running time is exponential in the size of the environment. In this section, we discuss a sequential Monte Carlo algorithm called particle filtering (also known as SIS filtering, the bootstrap filter, the condensation algorithm, survival of the fittest, etc.; see [10, 4] for recent reviews). Particle filtering (PF) has already been successfully applied to the problem of (global) robot localization [5]. However, in that case, the state space was only of dimension 3: the unknowns were the position of the robot, (x, y) ∈ ℝ², and its orientation, θ ∈ [0, 2π]. In our case, the state space is discrete and of dimension O(1 + NL), since we need to keep track of the map as well as the robot's location (we ignore orientation in this paper). Particle filtering can be very inefficient in high-dimensional spaces. The key observation which makes it tractable in this context is that, if L1:t were known, then the posterior on Mt would be factored; hence Mt can be marginalized out analytically, and we only need to sample Lt. This idea is known in the statistics literature as Rao-Blackwellisation [10, 4]. In more detail, we will approximate the posterior at time t using a set of weighted particles, where each particle specifies a trajectory L1:t and the corresponding conditionally factored representation of P(Mt) = Πi P(Mt(i)); we will denote the j'th particle at time t as bt(j). Note that we do not need to actually store the complete trajectories L1:t: we only need the most recent value of L.
The approach we take is essentially the same as the one used in the conditional linear Gaussian models of [4, 3], except we replace the Kalman filter update with one which exploits the conditionally factored representation of P(Mt). In particular, the algorithm is as follows. For each particle j = 1, ..., Ns, we do the following:

1. Sample L(j)t+1 from a proposal distribution, which we discuss below.

2. Update each component of the map separately using L(j)t+1 and zt+1:

    Pr(Mt+1 | L(j)t+1 = i, bt(j), zt+1) ∝ Pr(zt+1 | Mt+1(i)) Πk Pr(Mt+1(k) | Mt(j)(k))

3. Update the weights: w(j)t+1 = u(j)t+1 w(j)t, where u(j)t+1 is defined below.

We then resample Ns particles from the normalised weights, using Liu's residual resampling algorithm [10], and set w(j)t+1 = 1/Ns for all j.

We consider two proposal distributions. The first is a simple one which just uses the transition model to predict the new location: Pr(Lt+1 | bt(j), at+1). In this case, the incremental weight is u(j)t+1 ∝ P(zt+1 | L(j)t+1, bt(j)). The optimal proposal distribution (the one which minimizes the variance of the importance weights) takes the most recent evidence into account, and can be shown to have the form Pr(Lt+1 | bt(j), at+1, zt+1), with incremental weight ut+1 ∝ P(zt+1 | bt(j)). Computing this requires marginalizing out Mt+1 and Lt+1, which can be done in O(NL) time (details omitted).

Figure 4: (a-b) Results using 50 particles. (c-d) Results using BK.

In Figure 4, we show the results of applying the above algorithm to the same problem as in Section 3; it can be seen that it approximates the exact solution very closely, using only 50 particles. The results shown are for a particular random number seed; other seeds produce qualitatively very similar results, indicating that 50 particles is in fact sufficient in this case. Obviously, as we increase the number of particles, the error and variance decrease, but the running time increases (linearly).
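Steps 1-3 above can be sketched for the binary-label grid as follows. This is an illustrative sketch (names and interface my own), using the simple transition-model proposal; the resampling step is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbpf_step(particles, weights, T, z, p_o, p_c):
    """One Rao-Blackwellised PF step for the binary-label grid.
    Each particle is (loc, mp), where mp[i] = Pr(M_t(i) = 1) is the
    analytically maintained marginal of cell i."""
    new_particles, new_weights = [], []
    for (loc, mp), w in zip(particles, weights):
        # 1. sample the new location from the motion-model row T[loc]
        new_loc = rng.choice(len(mp), p=T[loc])
        # map dynamics: each cell flips with probability p_c
        mp = mp * (1 - p_c) + (1 - mp) * p_c
        # 3. incremental weight u = Pr(z | new_loc, map marginals)
        p1 = mp[new_loc]
        lik1 = p_o if z == 1 else 1 - p_o   # Pr(z | cell label 1)
        lik0 = p_o if z == 0 else 1 - p_o   # Pr(z | cell label 0)
        u = lik1 * p1 + lik0 * (1 - p1)
        # 2. exact Bayesian update of the observed cell's marginal
        mp[new_loc] = lik1 * p1 / u
        new_particles.append((new_loc, mp))
        new_weights.append(w * u)
    new_weights = np.array(new_weights)
    return new_particles, new_weights / new_weights.sum()
```

A full filter would follow each such step with residual resampling of the Ns particles, as in the text.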
The question of how many particles to use is a difficult one: it depends both on the noise parameters and the structure of the environment (if every cell has a unique label, localization is easy). Since we are sampling trajectories, the number of hypotheses, and hence the number of particles needed, grows exponentially with time. In the above example, the robot was able to localize itself quite accurately when it reached the end of the corridor, where most hypotheses "died off". In general, the number of particles will depend on the length of the longest cycle in the environment, so we will need to use active learning to ensure tractability. In the dynamic two-dimensional grid world of Figure 1(c), we chose actions so as to maximize expected discounted reward (using policy iteration), where the reward for visiting cell i is a function of the entropies H(Mt(i)) and H(Lt), where H(·) is the normalized entropy. Hence, if the robot is "lost", so H(Lt) ≈ 1, the robot will try to visit a cell which it is certain about (see [6] for a better approach); otherwise, it will try to explore uncertain cells. After learning the map, the robot spends its time visiting each of the doors, to keep its knowledge of their state (open or closed) up-to-date.

We now briefly consider some alternative approximate inference algorithms. Examining the graphical structure of our model (see Figure 1(a)), we see that it is identical to a factorial HMM [7] (ignoring the inputs). Unfortunately, we cannot use their variational approximation, because they assume a conditional Gaussian observation model, whereas ours is almost deterministic. Another popular approximate inference algorithm for dynamic Bayes nets (DBNs) is the "BK algorithm" [2, 1]. This entails projecting the joint posterior at time t onto a product-of-marginals representation

    P(Lt, Mt(1), ..., Mt(NL) | y1:t) = P(Lt | y1:t) Πi P(Mt(i) | y1:t)

and using this as a factored prior for Bayesian updating at time t + 1.
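One step of this conditioning-and-averaging update can be sketched as follows, for the static binary-label grid (an illustrative sketch under the model of Section 2; the function name is my own):

```python
import numpy as np

def bk_update(loc_prior, map_marginals, T, z, p_o):
    """One BK-style update for the (static) grid model: predict the location,
    condition on each possible location, update that cell's marginal exactly,
    then average back onto a product-of-marginals representation.
    map_marginals[i] = Pr(M(i) = 1)."""
    loc_pred = loc_prior @ T                     # predict Pr(L_{t+1})
    lik1 = p_o if z == 1 else 1 - p_o            # Pr(z | cell label 1)
    lik0 = p_o if z == 0 else 1 - p_o            # Pr(z | cell label 0)
    # likelihood of z under each hypothesized location i
    lik = lik1 * map_marginals + lik0 * (1 - map_marginals)
    loc_post = loc_pred * lik
    loc_post /= loc_post.sum()
    new_map = map_marginals.copy()
    for i, m in enumerate(map_marginals):
        cond = lik1 * m / (lik1 * m + lik0 * (1 - m))   # posterior if L = i
        new_map[i] = loc_post[i] * cond + (1 - loc_post[i]) * m
    return loc_post, new_map
```

Each sweep costs O(NL), which is the cost quoted in the text; the weakness, as the experiments show, is precisely the projection back onto independent cell marginals.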
Given a factored prior, we can compute a factored posterior in O(NL) time by conditioning on each Lt+1, and then averaging. We found that the BK method does very poorly on this problem (see Figure 4), because it ignores correlation between the cells. Of course, it is possible to use pairwise or higher order marginals for tightly coupled sets of variables. Unfortunately, the running time is exponential in the size of the largest marginal, and in our case, all the Mt(i) variables are coupled.

Acknowledgments

I would like to thank Nando de Freitas for helping me get particle filtering to work, Sebastian Thrun for an interesting discussion at the conference, and Stuart Russell for encouraging me to compare to EM. This work was supported by grant number ONR N00014-97-1-0941.

References

[1] X. Boyen and D. Koller. Approximate learning of dynamic models. In NIPS, 1998.
[2] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In UAI, 1998.
[3] R. Chen and J. S. Liu. Mixture Kalman filters. Submitted, 1999.
[4] A. Doucet, S. Godsill, and C. Andrieu. On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 1999.
[5] D. Fox, W. Burgard, F. Dellaert, and S. Thrun. Monte Carlo localization: Efficient position estimation for mobile robots. In AAAI, 1999.
[6] D. Fox, W. Burgard, and S. Thrun. Active Markov localization for mobile robots. Robotics and Autonomous Systems, 1998.
[7] Z. Ghahramani and M. Jordan. Factorial Hidden Markov Models. Machine Learning, 29:245-273, 1997.
[8] S. Koenig and R. Simmons. Unsupervised learning of probabilistic models for robot navigation. In ICRA, 1996.
[9] D. Kortenkamp, R. Bonasso, and R. Murphy, editors. Artificial Intelligence and Mobile Robots: case studies of successful robot systems. MIT Press, 1998.
[10] J. Liu and R. Chen. Sequential Monte Carlo methods for dynamic systems. JASA, 93:1032-1044, 1998.
[11] H. Shatkay and L. P. Kaelbling.
Learning topological maps with weak local odometric information. In IJCAI, 1997.
[12] S. Thrun, W. Burgard, and D. Fox. A probabilistic approach to concurrent mapping and localization for mobile robots. Machine Learning, 31:29-53, 1998.
Better Generative Models for Sequential Data Problems: Bidirectional Recurrent Mixture Density Networks

Mike Schuster
ATR Interpreting Telecommunications Research Laboratories
2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, JAPAN
gustl@itl.atr.co.jp

Abstract

This paper describes bidirectional recurrent mixture density networks, which can model multi-modal distributions of the type P(x_t | y_1^T) and P(x_t | x_1, x_2, ..., x_{t-1}, y_1^T) without any explicit assumptions about the use of context. These expressions occur frequently in pattern recognition problems with sequential data, for example in speech recognition. Experiments show that the proposed generative models give a higher likelihood on test data compared to a traditional modeling approach, indicating that they can summarize the statistical properties of the data better.

1 Introduction

Many problems of engineering interest can be formulated as sequential data problems, in an abstract sense as supervised learning from sequential data, where an input vector (dimensionality D) sequence X = x_1^T = {x_1, x_2, ..., x_{T-1}, x_T} living in space X has to be mapped to an output vector (dimensionality K) target sequence T = t_1^T = {t_1, t_2, ..., t_{T-1}, t_T} in space¹ Y, that often embodies correlations between neighboring vectors x_t, x_{t+1} and t_t, t_{t+1}. In general there are a number of training data sequence pairs (input and target), which are used to estimate the parameters of a given model structure, whose performance can then be evaluated on another set of test data pairs. For many applications the problem becomes to predict the best sequence Y* given an arbitrary input sequence X, with 'best' meaning the sequence that minimizes an error using a suitable metric that is yet to be defined. Making use of the theory of pattern recognition [2], this problem is often simplified by treating any sequence as one pattern.
This makes it possible to express the objective of sequence prediction with the well known expression Y* = argmax_Y P(Y|X), with X being the input sequence, Y being any valid output sequence and Y* being the predicted sequence with the highest probability² among all possible sequences.

¹ A sample sequence of the training target data is denoted as T, while an output sequence in general is denoted as Y; both live in the output space Y.
² To simplify notation, random variables and their values are not denoted as different symbols. This means P(x) = P(X = x).

Training of a sequence prediction system corresponds to estimating the distribution³ P(Y|X) from a number of samples, which includes (a) defining an appropriate model representing this distribution and (b) estimating its parameters such that P(Y|X) for the training data is maximized. In practice the model consists of several modules with each of them being responsible for a different part of P(Y|X). Testing (usage) of the trained system, or recognition for a given input sequence X, corresponds principally to the evaluation of P(Y|X) for all possible output sequences to find the best one Y*. This procedure is called the search, and its efficient implementation is important for many applications.

In order to build a model to predict sequences it is necessary to decompose the sequences such that modules responsible for smaller parts can be built. An often used approach is the decomposition into a generative and a prior model part, using P(B|A) = P(A|B)P(B)/P(A) and P(A, B) = P(A)P(B|A), as:

    Y* = argmax_Y P(Y|X) = argmax_Y P(X|Y) P(Y)
       = argmax_Y [Π_{t=1}^T P(x_t | x_1, x_2, ..., x_{t-1}, y_1^T)] [Π_{t=1}^T P(y_t | y_1, y_2, ..., y_{t-1})]    (1)

with the first bracket being the generative part and the second the prior part. For many applications (1) is approximated by simpler expressions, for example as a first-order Markov model

    Y* ≈ argmax_Y [Π_{t=1}^T P(x_t | y_t)] [Π_{t=1}^T P(y_t | y_{t-1})]    (2)

making some simplifying approximations.
These are for this example:

• Every output y_t depends only on the previous output y_{t-1} and not on all previous outputs:

    P(y_t | y_1, y_2, ..., y_{t-1}) ⇒ P(y_t | y_{t-1})    (3)

• The inputs are assumed to be statistically independent in time:

    P(x_t | x_1, x_2, ..., x_{t-1}, y_1^T) ⇒ P(x_t | y_1^T)    (4)

• The likelihood of an input vector x_t given the complete output sequence y_1^T is assumed to depend only on the output found at t and not on any other ones:

    P(x_t | y_1^T) ⇒ P(x_t | y_t)    (5)

Assuming that the output sequences are categorical sequences (consisting of symbols), approximation (2) and derived expressions are the basis for many applications. For example, using Gaussian mixture distributions to model P(x_t | y_t = k) = p_k(x) for all K occurring symbols, approach (2) is used in a more sophisticated form in most state-of-the-art speech recognition systems.

The focus of this paper is to present some models for the generative part of (1) which need fewer assumptions. Ideally this means being able to model directly expressions of the form P(x_t | x_1, x_2, ..., x_{t-1}, y_1^T), the possibly multi-modal distribution of a vector conditioned on previous x vectors x_{t-1}, ..., x_1 and a complete sequence y_1^T, as shown in the next section.

³ There is no distinction made between probability mass and density, usually denoted as P and p, respectively. If the quantity to model is categorical, a probability mass is assumed; if it is continuous, a probability density is assumed.

2 Mixture density recurrent neural networks

Assume we want to model a continuous vector sequence, conditioned on a sequence of categorical variables as shown in Figure 1. One approach is to assume that the vector sequence can be modeled by a uni-modal Gaussian distribution with a constant variance, making it a uni-modal regression problem. There are many practical examples where this assumption doesn't hold, requiring a more complex output distribution to model multi-modal data.
One example is the attempt to model the sounds of phonemes based on data from multiple speakers. A certain phoneme will sound completely different depending on its phonetic environment or on the speaker, and using a single Gaussian with a constant variance would lead to a crude averaging of all examples. The traditional approach is to build generative models for each symbol separately, as suggested by (2). If conventional Gaussian mixtures are used to model the observed input vectors, then the parameters of the distribution (means, covariances, mixture weights) in general do not change with the temporal position of the vector to model within a given state segment of that symbol. This can be a bad representation for the data in some areas (shown are here the means of a very bi-modal looking distribution), as indicated by the two shown variances for the state 'E'. When used to model speech, a procedure often used to cope with this problem is to increase the number of symbols by grouping often appearing symbol sub-strings into a new symbol and by subdividing each original symbol into a number of states.

Figure 1: Conventional Gaussian mixtures (left) and mixture density BRNNs (right) for multi-modal regression

Another alternative is explored here, where all parameters of a Gaussian mixture distribution modeling the continuous targets are predicted by one bidirectional recurrent neural network, extended to model mixture densities conditioned on a complete vector sequence, as shown on the right side of Figure 1. Another extension (section 2.1) to the architecture allows the estimation of time-varying mixture densities conditioned on a hypothesized output sequence and a continuous vector sequence, to model exactly the generative term in (1) without any explicit approximations about the use of context.
Basics of non-recurrent mixture density networks (MLP type) can be found in [1][2]. The extension from uni-modal to multi-modal regression is somewhat involved but straightforward for the two interesting cases of having a radial covariance matrix or a diagonal covariance matrix per mixture component. They are trained with gradient-descent procedures as regular uni-modal regression NNs. Suitable equations to calculate the error that is back-propagated can be found in [6] for the two cases mentioned, and a derivation for the simple case in [1][2].

Conventional recurrent neural networks (RNNs) can model expressions of the form P(x_t | y_1, y_2, ..., y_t), the distribution of a vector given an input vector plus its past input vectors. Bidirectional recurrent neural networks (BRNNs) [5][6] are a simple extension of conventional RNNs. The extension allows one to model expressions of the form P(x_t | y_1^T), the distribution of a vector given an input vector plus its past and following input vectors.

2.1 Mixture density extension for BRNNs

Here two types of extensions of BRNNs to mixture density networks are considered:

I) An extension to model expressions of the type P(x_t | y_1^T), a multi-modal distribution of a continuous vector conditioned on a vector sequence y_1^T, here labeled as mixture density BRNN of type I.

II) An extension to model expressions of the type P(x_t | x_1, x_2, ..., x_{t-1}, y_1^T), a probability distribution of a continuous vector conditioned on a vector sequence y_1^T and on its previous context in time x_1, x_2, ..., x_{t-1}. This architecture is labeled as mixture density BRNN of type II.

The first extension of conventional uni-modal regression BRNNs to mixture density networks is not particularly difficult compared to the non-recurrent implementation, because the changes to model multi-modal distributions are completely independent of the structural changes that have to be made to form a BRNN.
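Indeed, the mixture-density output layer is the same whatever network produces its parameters. As an illustration (my own sketch, not the paper's code), the per-time-step negative log-likelihood for the radial-covariance case, given the parameter set a network would emit at one position t, can be written as:

```python
import numpy as np

def mdn_nll(x, means, sigmas, pis):
    """Negative log-likelihood of target vector x (shape (D,)) under an
    M-component Gaussian mixture with radial (single scalar) covariance per
    component: means (M, D), sigmas (M,), mixture weights pis (M,).
    This is the error a mixture density network would back-propagate."""
    D = x.shape[0]
    sq = np.sum((x - means) ** 2, axis=1)              # (M,) squared distances
    log_comp = (np.log(pis) - D * np.log(sigmas)
                - 0.5 * D * np.log(2 * np.pi) - sq / (2 * sigmas ** 2))
    m = log_comp.max()                                  # log-sum-exp for stability
    return -(m + np.log(np.exp(log_comp - m).sum()))
```

Summing this quantity over all time steps of all training sequences gives the training criterion that the gradient-descent procedures mentioned above would minimize.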
The second extension involves a structural change to the basic BRNN structure to incorporate x_1, x_2, ..., x_{t-1} as additional inputs, as shown in Figure 2. For any t the neighboring x_{t-1}, x_{t-2}, ... are incorporated by adding an additional set of weights to feed the hidden forward states with the extended inputs (the targets for the outputs) from the time step before. This includes x_{t-1} directly and x_{t-2}, x_{t-3}, ..., x_1 indirectly through the hidden forward neurons. This architecture allows one to estimate the generative term in (1) without making the explicit assumptions (4) and (5), since all the information x_t is conditioned on is theoretically available.

Figure 2: BRNN mixture density extension (type II) (inputs: striped, outputs: black, hidden neurons: grey, additional inputs: dark grey). Note that without the backward states and the additional inputs this structure is a conventional RNN, unfolded in time.

Different from non-recurrent mixture density networks, the extended BRNNs can predict the parameters of a Gaussian mixture distribution conditioned on a vector sequence rather than a single vector; that is, at each (time) position t one parameter set (means, variances (actually standard deviations), mixture weights) conditioned on y_1^T for the BRNN of type I and on x_1, x_2, ..., x_{t-1}, y_1^T for the BRNN of type II.

3 Experiments and Results

The goal of the experiments is to show that the proposed models are more suitable to model speech data than traditional approaches, because they rely on fewer assumptions. The speech data used here has observation vector sequences representing the original waveform in a compressed form, where each vector is mapped to exactly one out of K phonemes.
Here three approaches are compared, which allow the estimation of the likelihood P(X|Y) with various degrees of approximation:

Conventional Gaussian mixture model, P(X|Y) ≈ Π_{t=1}^T P(x_t | y_t): According to (2) the likelihood of a phoneme class vector is approximated by a conventional Gaussian mixture distribution, that is, a separate mixture model is built to estimate P(x|y) = p_k(x) for each of the K possible categorical states in y. In this case the two assumptions (4) and (5) are necessary. For the variance a radial covariance matrix (diagonal single variance for all vector components) is chosen to match it to the conditions for the BRNN cases below. The number of parameters for the complete model is KM(D + 2) for M > 1. Several models of different complexity were trained (Table 1).

Mixture density BRNN I, P(X|Y) ≈ Π_{t=1}^T P(x_t | y_1^T): One mixture density BRNN of type I, with the same number of mixture components and a radial covariance matrix for its output distribution as in the approach above, is trained by presenting complete sample sequences to it. Note that for type I all possible context dependencies (assumption (5)) are automatically taken care of, because the probability is conditioned on complete sequences y_1^T. The sequence y_1^T contains for any t not only the information about neighboring phonemes, but also the position of a frame within a phoneme. In conventional systems this can only be modeled crudely by introducing a certain number of states per phoneme. The number of outputs for the network depends on the number of mixture components and is M(D + 2). The total number of parameters can be adjusted by changing the number of hidden forward and backward state neurons, and was set here to 64 each.

Mixture density BRNN II, P(X|Y) = Π_{t=1}^T P(x_t | x_1, x_2, ..., x_{t-1}, y_1^T): One mixture density BRNN of type II, again with the same number of mixture components and a radial covariance matrix, is trained under the same conditions as above.
Note that in this case both assumptions (4) and (5) are taken care of, because exactly expressions of the required form can be modeled by a mixture density BRNN of type II. 3.1 Experiments The recommended training and test data of the TIMIT speech database [3] was used for the experiments. The TIMIT database comes with hand-aligned phonetic transcriptions for all utterances, which were transformed to sequences of categorical class numbers (training = 702438, test = 256617 vectors). The number of possible categorical classes is the number of phonemes, K = 61. The categorical data (input data for the BRNNs) is represented as K-dimensional vectors with the kth component being one and all others zero. The feature extraction for the waveforms, which resulted in the vector sequences x_1^T to model, was done as in most speech recognition systems [7]. The variances were normalized with respect to all training data, such that a radial variance for each mixture component in the model is a reasonable choice. All three model types were trained with M = 1, 2, 3, 4, the conventional Gaussian mixture model also with M = 8, 16 mixture components. The number of resulting parameters, used as a rough complexity measure for the models, is shown in Table 1. The states of the triphone models were not clustered.

Table 1: Number of parameters for different types of models

  mixture components | mono61 1-state | mono61 3-state | tri571 3-state | BRNN I | BRNN II
  1                  | 1952           | 5856           | 54816          | 20256  | 22176
  2                  | 3904           | 11712          | 109632         | 24384  | 26304
  3                  | 5856           | 17568          | 164448         | 28512  | 30432
  4                  | 7808           | 23424          | 219264         | 32640  | 34560
  8                  | 15616          | 46848          | 438528         |        |
  16                 | 31232          | 93696          | 877056         |        |

Training for the conventional approach using M mixtures of Gaussians was done using the EM algorithm. For some classes with only a few samples M had to be reduced to reach a stationary point of the likelihood. Training of the BRNNs of both types must be done using a gradient descent algorithm.
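The parameter counts in Table 1 follow directly from the K M (D + 2) formula given above: each mixture component contributes D means, one shared radial standard deviation, and one mixture weight. A minimal sketch that reproduces the tabulated counts; the feature dimension D = 30 is inferred from the numbers in the table, not stated explicitly in the text:

```python
# Reproduce the Gaussian-mixture parameter counts of Table 1 from the
# formula K * states * M * (D + 2). D = 30 is an inferred assumption.
def mixture_params(K, M, D=30, states=1):
    """K phoneme classes, M mixture components, states HMM states each."""
    return K * states * M * (D + 2)

mono61_1state = {M: mixture_params(61, M) for M in (1, 2, 3, 4, 8, 16)}
mono61_3state = {M: mixture_params(61, M, states=3) for M in (1, 2, 3, 4, 8, 16)}
tri571_3state = {M: mixture_params(571, M, states=3) for M in (1, 2, 3, 4)}

print(mono61_1state[1], mono61_3state[1], tri571_3state[1])  # 1952 5856 54816
```

The same D + 2 term is the per-component output count of the mixture density BRNNs, which is why their output layer has M(D + 2) units.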
Here a modified version of RPROP [4] was used, which is described in more detail in [6]. The measure used in comparing the tested approaches is the log-likelihood of training and test data given the models built on the training data. In the absence of a search algorithm to perform recognition this is a valid measure to evaluate the models, since maximizing the log-likelihood on the training data is the objective for all model types. Note that the given alignment of vectors to phoneme classes for the test data is used in calculating the log-likelihood on the test data; there is no search for the best alignment. 3.2 Results Figure 3 shows the average log-likelihoods depending on the number of mixture components for all tested approaches on training (upper line) and test data (lower line). The baseline 1-state monophones give the lowest likelihood. The 3-state monophones are slightly better, but have a larger gap between training and test data likelihood. For comparison on the training data a system with 571 distinct triphones with 3 states each was trained as well. Note that this system has far more parameters than the BRNN systems it was compared to (see Table 1). The results for the traditional Gaussian mixture systems show how the models become better by building more detailed models for different (phonetic) contexts, i.e., by using more states and more context classes. The mixture density BRNN of type I gives a higher likelihood than the traditional Gaussian mixture models. This was expected because the BRNN type I models are, in contrast to the traditional Gaussian mixture models, able to include all possible phonetic context effects by removing assumption (5): a frame of a certain phoneme may be surrounded by frames of any other phonemes, with theoretically no restriction on the range of the contextual influence. The mixture density BRNN of type II, which in addition removes the independence assumption (4), gives a significantly higher likelihood than all other models.
Note that the difference in likelihood on training and test data for this model is very small, indicating a useful model for the underlying distribution of the data.
Statistical Dynamics of Batch Learning S. Li and K. Y. Michael Wong Department of Physics, Hong Kong University of Science and Technology Clear Water Bay, Kowloon, Hong Kong {phlisong, phkywong}@ust.hk Abstract An important issue in neural computing concerns the description of learning dynamics with macroscopic dynamical variables. Recent progress on on-line learning only addresses the often unrealistic case of an infinite training set. We introduce a new framework to model batch learning of restricted sets of examples, widely applicable to any learning cost function, and fully taking into account the temporal correlations introduced by the recycling of the examples. For illustration we analyze the effects of weight decay and early stopping during the learning of teacher-generated examples. 1 Introduction The dynamics of learning in neural computing is a complex multi-variate process. The interest on the macroscopic level is thus to describe the process with macroscopic dynamical variables. Recently, much progress has been made on modeling the dynamics of on-line learning, in which an independent example is generated for each learning step [1, 2]. Since statistical correlations among the examples can be ignored, the dynamics can be simply described by instantaneous dynamical variables. However, most studies on on-line learning focus on the ideal case in which the network has access to an almost infinite training set, whereas in many applications, the collection of training examples may be costly. A restricted set of examples introduces extra temporal correlations during learning, and the dynamics is much more complicated. Early studies briefly considered the dynamics of Adaline learning [3, 4, 5], and this has recently been extended to linear perceptrons learning nonlinear rules [6, 7].
Recent attempts, using the dynamical replica theory, have been made to study the learning of restricted sets of examples, but so far exact results are published only for simple learning rules such as Hebbian learning, beyond which appropriate approximations are needed [8]. In this paper, we introduce a new framework to model batch learning of restricted sets of examples, widely applicable to any learning rule which minimizes an arbitrary cost function by gradient descent. It fully takes into account the temporal correlations during learning, and is therefore exact for large networks. 2 Formulation Consider the single layer perceptron with N >> 1 input nodes {ξ_j} connecting to a single output node by the weights {J_j}. For convenience we assume that the inputs ξ_j are Gaussian variables with mean 0 and variance 1, and the output state S is a function f(x) of the activation x at the output node, i.e. S = f(x); x = J · ξ. (1) The network is assigned to "learn" p = αN examples which map inputs {ξ_j^μ} to the outputs {S_μ} (μ = 1, ..., p). S_μ are the outputs generated by a teacher perceptron {B_j}, namely S_μ = f(y_μ); y_μ = B · ξ^μ. (2) Batch learning by gradient descent is achieved by adjusting the weights {J_j} iteratively so that a certain cost function in terms of the student and teacher activations {x_μ} and {y_μ} is minimized. Hence we consider a general cost function E = Σ_μ g(x_μ, y_μ). (3) The precise functional form of g(x, y) depends on the adopted learning algorithm. For the case of binary outputs, f(x) = sgn x. Early studies on the learning dynamics considered Adaline learning [3, 4, 5], where g(x, y) = −(S − x)²/2 with S = sgn y. For recent studies on Hebbian learning [8], g(x, y) = xS. To ensure that the perceptron is regularized after learning, it is customary to introduce a weight decay term. Furthermore, to avoid the system being trapped in local minima, noise is often added in the dynamics.
Hence the gradient descent dynamics is given by dJ_j(t)/dt = (1/N) Σ_μ g'(x_μ(t), y_μ) ξ_j^μ − λ J_j(t) + η_j(t), (4) where, here and below, g'(x, y) and g''(x, y) respectively represent the first and second partial derivatives of g(x, y) with respect to x, λ is the weight decay strength, and η_j(t) is the noise term at temperature T with ⟨η_j(t) η_k(s)⟩ = 2T δ_jk δ(t − s). (5) 3 The Cavity Method Our theory is the dynamical version of the cavity method [9, 10, 11]. It uses a self-consistency argument to consider what happens when a new example is added to a training set. The central quantity in this method is the cavity activation, which is the activation of a new example for a perceptron trained without that example. Since the original network has no information about the new example, the cavity activation is stochastic. Specifically, denoting the new example by the label 0, its cavity activation at time t is h_0(t) = J(t) · ξ^0. (6) For large N and independently generated examples, h_0(t) is a Gaussian variable. Its covariance is given by the correlation function C(t, s) of the weights at times t and s, that is, ⟨h_0(t) h_0(s)⟩ = J(t) · J(s) ≡ C(t, s), (7) where ξ_j^0 and ξ_k^0 are assumed to be independent for j ≠ k. The distribution is further specified by the teacher-student correlation R(t), given by ⟨h_0(t) y_0⟩ = J(t) · B ≡ R(t). (8) Now suppose the perceptron incorporates the new example at the batch-mode learning step at time s. Then the activation of this new example at a subsequent time t > s will no longer be a random variable. Furthermore, the activations of the original p examples at time t will also be adjusted from {x_μ(t)} to {x_μ^0(t)} because of the newcomer, which will in turn affect the evolution of the activation of example 0, giving rise to the so-called Onsager reaction effects. This makes the dynamics complex, but fortunately for large p ~ N, we can assume that the adjustment from x_μ(t) to x_μ^0(t) is small, and perturbative analysis can be applied.
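The batch dynamics (4) can be simulated directly. The following is a minimal numerical sketch, under stated assumptions that are not from the paper's own simulations: discrete-time Euler steps, zero temperature (η = 0), the Adaline cost g(x, y) = −(sgn y − x)²/2 so that g'(x, y) = sgn y − x, and small illustrative sizes:

```python
import numpy as np

# Sketch of the batch gradient-descent dynamics (4) for a teacher-student
# perceptron pair. All sizes and step parameters are illustrative.
rng = np.random.default_rng(0)
N, alpha, lam, dt, steps = 200, 1.2, 0.5, 0.05, 400
p = int(alpha * N)
xi = rng.standard_normal((p, N))          # inputs xi_j^mu ~ N(0, 1)
B = rng.standard_normal(N)
B /= np.linalg.norm(B)                    # unit-norm teacher weights
y = xi @ B                                # teacher activations y_mu
S = np.sign(y)                            # binary teacher outputs sgn(y_mu)
J = np.zeros(N)                           # student weights

for _ in range(steps):
    x = xi @ J                            # student activations x_mu(t)
    grad = (xi.T @ (S - x)) / N           # (1/N) sum_mu g'(x_mu, y_mu) xi_j^mu
    J += dt * (grad - lam * J)            # Euler step of (4) with eta = 0

R = J @ B                                 # teacher-student correlation R(t)
C = J @ J                                 # weight correlation C(t, t)
eps_g = np.arccos(R / np.sqrt(C)) / np.pi # generalization error (see Sec. 4)
print(round(eps_g, 3))
```

With weight decay present the weights relax toward a fixed point, and the overlap R/sqrt(C) settles well above zero, i.e. the student generalizes.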
Suppose the weights of the original and new perceptron at time t are {J_j(t)} and {J_j^0(t)} respectively. Then a perturbation of (4) yields (d/dt + λ)(J_j^0(t) − J_j(t)) = (1/N) g'(x_0(t), y_0) ξ_j^0 + (1/N) Σ_{μk} ξ_j^μ g''(x_μ(t), y_μ) ξ_k^μ (J_k^0(t) − J_k(t)). (9) The first term on the right hand side describes the primary effects of adding example 0 to the training set, and is the driving term for the difference between the two perceptrons. The second term describes the secondary effects due to the changes to the original examples caused by the added example, and is referred to as the Onsager reaction term. One should note the difference between the cavity and generic activations of the added example. The former is denoted by h_0(t) and corresponds to the activation in the perceptron {J_j(t)}, whereas the latter, denoted by x_0(t) and corresponding to the activation in the perceptron {J_j^0(t)}, is the one used in calculating the gradient in the driving term of (9). Since their notations are sufficiently distinct, we have omitted the superscript 0 in x_0(t), which appears in the background examples x_μ^0(t). The equation can be solved by the Green's function technique, yielding J_j^0(t) − J_j(t) = Σ_k ∫ ds G_jk(t, s) (1/N) g_0'(s) ξ_k^0, (10) where g_0'(s) = g'(x_0(s), y_0) and G_jk(t, s) is the weight Green's function satisfying G_jk(t, s) = G^(0)(t − s) δ_jk + (1/N) Σ_{μi} ∫ dt' G^(0)(t − t') ξ_j^μ g_μ''(t') ξ_i^μ G_ik(t', s), (11) where G^(0)(t − s) = Θ(t − s) exp(−λ(t − s)) is the bare Green's function, and Θ is the step function. The weight Green's function describes how the effects of example 0 propagate from weight J_k at learning time s to weight J_j at a subsequent time t, including both primary and secondary effects. Hence all the temporal correlations have been taken into account. For large N, the equation can be solved by a diagrammatic approach similar to [5]. The weight Green's function is self-averaging over the distribution of examples and is diagonal, i.e. lim_{N→∞} G_jk(t, s) = G(t, s) δ_jk, where G(t, s) = G^(0)(t − s) + α ∫ dt₁ ∫ dt₂ G^(0)(t − t₁) ⟨g_μ''(t₁) D_μ(t₁, t₂)⟩ G(t₂, s). (12) D_μ(t, s) is the example Green's function, given by D_μ(t, s) = δ(t − s) + ∫ dt' G(t, t') g_μ''(t') D_μ(t', s). (13) This allows us to express the generic activations of the examples in terms of their cavity counterparts. Multiplying both sides of (10) by ξ_j^0 and summing over j, we get x_0(t) − h_0(t) = ∫ ds G(t, s) g_0'(s). (14) This equation is interpreted as follows. At time t, the generic activation x_0(t) deviates from its cavity counterpart because its gradient term g_0'(s) was present in the batch learning steps at previous times s. This gradient term propagates its influence from time s to t via the Green's function G(t, s). Statistically, this equation enables us to express the activation distribution in terms of the cavity activation distribution, thereby getting a macroscopic description of the dynamics. To solve for the Green's functions and the activation distributions, we further need the fluctuation-response relation derived by linear response theory, C(t, s) = α ∫ dt' G^(0)(t − t') ⟨g_μ'(t') x_μ(s)⟩ + 2T ∫ dt' G^(0)(t − t') G(s, t'). (15) Finally, the teacher-student correlation is given by R(t) = α ∫ dt' G^(0)(t − t') ⟨g_μ'(t') y_μ⟩. (16) 4 A Solvable Case The cavity method can be applied to the dynamics of learning with an arbitrary cost function. When it is applied to the Hebb rule, it yields results identical to the exact results in [8]. Here we present the results for the Adaline rule to illustrate features of learning dynamics derivable from the study. This is a common learning rule and bears resemblance with the more common back-propagation rule. Theoretically, its dynamics is particularly convenient for analysis since g''(x, y) = −1, rendering the weight Green's function time translation invariant, i.e. G(t, s) = G(t − s). In this case, the dynamics can be solved by Laplace transform.
To monitor the progress of learning, we are interested in three performance measures: (a) Training error ε_t, which is the probability of error for the training examples. It is given by ε_t = ⟨Θ(−x sgn y)⟩_{xy}, where the average is taken over the joint distribution p(x, y) of the training set. (b) Test error ε_test, which is the probability of error when the inputs ξ_j^μ of the training examples are corrupted by an additive Gaussian noise of variance Δ². This is a relevant performance measure when the perceptron is applied to process data which are corrupted versions of the training data. It is given by ε_test = ⟨H(x sgn y / (Δ √C(t, t)))⟩_{xy}. When Δ² = 0, the test error reduces to the training error. (c) Generalization error ε_g, which is the probability of error for an arbitrary input ξ_j when the teacher and student outputs are compared. It is given by ε_g = arccos[R(t)/√C(t, t)]/π. Figure 1(a) shows the evolution of the generalization error at T = 0. When the weight decay strength varies, the steady-state generalization error is minimized at the optimum λ_opt = π/2 − 1, (17) which is independent of α. It is interesting to note that in the case of the linear perceptron, the optimal weight decay strength is also independent of α and only determined by the output noise and unlearnability of the examples [5, 7]. Similarly, here the student is only provided the coarse-grained version of the teacher's activation in the form of binary bits. For λ < λ_opt, the generalization error is a non-monotonic function of learning time. Hence the dynamics is plagued by overtraining, and it is desirable to introduce early stopping to improve the perceptron performance. Similar behavior is observed in linear perceptrons [5, 6, 7]. To verify the theoretical predictions, simulations were done with N = 500 and using 50 samples for averaging. As shown in Fig. 1(a), the agreement is excellent.
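The generalization error formula and the optimal weight decay (17) are easy to evaluate numerically. A minimal sketch (the overlap values fed in below are illustrative, not results from the paper):

```python
import math

# Sketch: generalization error eps_g = arccos(R / sqrt(C)) / pi as a
# function of the teacher-student overlap, and the alpha-independent
# optimal weight decay lambda_opt = pi/2 - 1 from (17).
def eps_g(R, C):
    """Probability that teacher and student outputs disagree."""
    return math.acos(R / math.sqrt(C)) / math.pi

lam_opt = math.pi / 2 - 1
print(round(lam_opt, 4))       # 0.5708
print(eps_g(1.0, 1.0))         # perfectly aligned student: 0.0
print(eps_g(0.0, 1.0))         # orthogonal student: 0.5
```

The two limiting cases confirm the intuition: a student fully aligned with the teacher never errs, while an uncorrelated student is at chance.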
Figure 1(b) compares the generalization errors at the steady state and the early stopping point. It shows that early stopping improves the performance for λ < λ_opt, which becomes near-optimal when compared with the best result at λ = λ_opt. Hence early stopping can speed up the learning process without significant sacrifice in the generalization ability. However, it cannot outperform the optimal result at the steady state. This agrees with a recent empirical observation that a careful control of the weight decay may be better than early stopping in optimizing generalization [12]. Figure 1: (a) The evolution of the generalization error at T = 0 for α = 0.5, 1.2 and different weight decay strengths λ. Theory: solid line, simulation: symbols. (b) Comparing the generalization error at the steady state (∞) and at the early stopping point (t_es) for α = 0.5, 1.2 and T = 0. In the search for optimal learning algorithms, an important consideration is the environment in which the performance is tested. Besides the generalization performance, there are applications in which the test examples have inputs correlated with the training examples. Hence we are interested in the evolution of the test error for a given additive Gaussian input noise Δ. Figure 2(a) shows, again, that there is an optimal weight decay parameter λ_opt which minimizes the test error. Furthermore, when the weight decay is weak, early stopping is desirable. Figure 2(b) shows the value of the optimal weight decay as a function of the input noise variance Δ².
To the lowest order approximation, λ_opt ∝ Δ² for sufficiently large Δ². The dependence of λ_opt on input noise is rather general, since it also holds in the case of random examples [13]. In the limit of small Δ², λ_opt vanishes as Δ² for α < 1, whereas λ_opt approaches a nonzero constant for α > 1. Hence for α < 1, weight decay is not necessary when the training error is optimized, but when the perceptron is applied to process increasingly noisy data, weight decay becomes more and more important in performance enhancement. Figure 2(b) also shows the phase line λ_ot(Δ²) below which overtraining occurs. Again, to the lowest order approximation, λ_ot ∝ Δ² for sufficiently large Δ². However, unlike the case of generalization error, the line for the onset of overtraining does not coincide exactly with the line of optimal weight decay. In particular, for an intermediate range of input noise, the optimal line lies in the region of overtraining, so that the optimal performance can only be attained by tuning both the weight decay strength and learning time. However, at least in the present case, computational results show that the improvement is marginal. Figure 2: (a) The evolution of the test error for Δ² = 3, T = 0 and different weight decay strengths λ (λ_opt ≈ 1.5, 3.6 for α = 0.5, 1.2 respectively). (b) The lines of the optimal weight decay and the onset of overtraining for α = 5. Inset: the same data with λ_ot − λ_opt (magnified) versus Δ². 5 Conclusion Based on the cavity method, we have introduced a new framework for modeling the dynamics of learning, which is applicable to any learning cost function, making it a versatile theory.
It takes full account of the temporal correlations generated by the use of a restricted set of examples, which is more realistic in many situations than theories of on-line learning with an infinite training set. While the Adaline rule is solvable by the cavity method, it is still a relatively simple model approachable by more direct methods. Hence the justification of the method as a general framework for learning dynamics hinges on its applicability to less trivial cases. In general, g_μ''(t') in (13) is not a constant and D_μ(t, s) has to be expanded as a series. The dynamical equations can then be considered as the starting point of a perturbation theory, and results in various limits can be derived, e.g. the limits of small α, large α, large λ, or the asymptotic limit. Another area for the useful application of the cavity method is the case of batch learning with very large learning steps. Since it has been shown recently that such learning converges in a few steps [6], the dynamical equations remain simple enough for a meaningful study. Preliminary results along this direction are promising and will be reported elsewhere. An alternative general theory for learning dynamics, the dynamical replica theory, has recently been developed [8]. It yields exact results for Hebbian learning, and approximate results for more non-trivial cases. Based on certain self-averaging assumptions, the theory is able to approximate the dynamics by the evolution of single-time functions, at the expense of having to solve a set of saddle point equations in the replica formalism at every learning instant. On the other hand, our theory retains the functions G(t, s) and C(t, s) with double arguments, but develops naturally from the stochastic nature of the cavity activations. Contrary to a suggestion [14], the cavity method can also be applied to on-line learning with restricted sets of examples.
It is hoped that by adhering to an exact formalism, the cavity method can provide more fundamental insights when the studies are extended to more sophisticated multilayer networks of practical importance. The method enables us to study the effects of weight decay and early stopping. It shows that the optimal strength of weight decay is determined by the imprecision in the examples, or the level of input noise in anticipated applications. For weaker weight decay, the generalization performance can be made near-optimal by early stopping. Furthermore, depending on the performance measure, optimality may only be attained by a combination of weight decay and early stopping. Though the performance improvement is marginal in the present case, the question remains open in the more general context. We consider the present work as the beginning of an in-depth study of learning dynamics. Many interesting and challenging issues remain to be explored. Acknowledgments We thank A. C. C. Coolen and D. Saad for fruitful discussions during NIPS. This work was supported by the grant HKUST6130/97P from the Research Grant Council of Hong Kong. References [1] D. Saad and S. Solla, Phys. Rev. Lett. 74,4337 (1995). [2] D. Saad and M. Rattray, Phys. Rev. Lett. 79, 2578 (1997). [3] J. Hertz, A. Krogh and G. I. Thorbergssen, J. Phys. A 22, 2133 (1989). [4] M. Opper, Europhys. Lett. 8, 389 (1989). [5] A. Krogh and J. A. Hertz, J. Phys. A 25, 1135 (1992). [6] S. Bos and M. Opper, J. Phys. A 31, 4835 (1998). [7] S. Bos, Phys. Rev. E 58, 833 (1998). [8] A. C. C. Coolen and D. Saad, in On-line Learning in Neural Networks, ed. D. Saad (Cambridge University Press, Cambridge, 1998). [9] M. Mezard, G. Parisi and M. Virasoro, Spin Glass Theory and Beyond (World Scientific, Singapore) (1987). [10] K. Y. M. Wong, Europhys. Lett. 30, 245 (1995). [11] K. Y. M. Wong, Advances in Neural Information Processing Systems 9, 302 (1997). [12] L. K. Hansen, J. Larsen and T. Fog, IEEE Int. Conf. 
on Acoustics, Speech, and Signal Processing 4, 3205 (1997). [13] Y. W. Tong, K. Y. M. Wong and S. Li, to appear in Proc. of IJCNN'99 (1999). [14] A. C. C. Coolen and D. Saad, Preprint KCL-MTH-99-33 (1999).
Scale Mixtures of Gaussians and the Statistics of Natural Images Martin J. Wainwright Stochastic Systems Group Electrical Engineering & CS MIT, Building 35-425 Cambridge, MA 02139 mjwain@mit.edu Eero P. Simoncelli Ctr. for Neural Science, and Courant Inst. of Mathematical Sciences New York University New York, NY 10012 eero.simoncelli@nyu.edu Abstract The statistics of photographic images, when represented using multiscale (wavelet) bases, exhibit two striking types of non-Gaussian behavior. First, the marginal densities of the coefficients have extended heavy tails. Second, the joint densities exhibit variance dependencies not captured by second-order models. We examine properties of the class of Gaussian scale mixtures, and show that these densities can accurately characterize both the marginal and joint distributions of natural image wavelet coefficients. This class of model suggests a Markov structure, in which wavelet coefficients are linked by hidden scaling variables corresponding to local image structure. We derive an estimator for these hidden variables, and show that a nonlinear "normalization" procedure can be used to Gaussianize the coefficients. Recent years have witnessed a surge of interest in modeling the statistics of natural images. Such models are important for applications in image processing and computer vision, where many techniques rely (either implicitly or explicitly) on a prior density. A number of empirical studies have demonstrated that the power spectra of natural images follow a 1/f^γ law in radial frequency, where the exponent γ is typically close to two [e.g., 1]. Such second-order characterization is inadequate, however, because images usually exhibit highly non-Gaussian behavior. For instance, the marginals of wavelet coefficients typically have much heavier tails than a Gaussian [2].
Furthermore, despite being approximately decorrelated (as suggested by theoretical analysis of 1/f processes [3]), orthonormal wavelet coefficients exhibit striking forms of statistical dependency [4, 5]. In particular, the standard deviation of a wavelet coefficient typically scales with the absolute values of its neighbors [5]. A number of researchers have modeled the marginal distributions of wavelet coefficients with generalized Laplacians, p_y(y) ∝ exp(−|y/λ|^p) [e.g. 6, 7, 8]. Special cases include the Gaussian (p = 2) and the Laplacian (p = 1), but appropriate exponents for natural images are typically less than one. (Research supported by NSERC 1969 fellowship 160833 to MJW, and NSF CAREER grant MIP-9796040 to EPS.)

Table 1. Example densities from the class of Gaussian scale mixtures. Z(γ) denotes a positive gamma variable, with density p(z) = [1/Γ(γ)] z^{γ−1} exp(−z). The characteristic function of a random variable x is defined as φ_x(t) ≜ ∫ p(x) exp(jxt) dx.

  Mixing density                      | GSM density                                        | GSM characteristic function
  √Z(γ)                               | symmetrized Gamma                                  | [1/(1 + t²)]^γ, γ > 0
  1/√Z(β − 1/2)                       | Student: [1/(λ² + y²)]^β, β > 1/2                  | no explicit form
  positive √(α/2)-stable              | α-stable: no explicit form                         | exp(−|λ t|^α), α ∈ (0, 2]
  no explicit form                    | generalized Laplacian: exp(−|y/λ|^p), p ∈ (0, 2]   | no explicit form

Simoncelli [5, 9] has modeled the variance dependencies of pairs of wavelet coefficients. Romberg et al. [10] have modeled wavelet densities using two-component mixtures of Gaussians. Huang and Mumford [11] have modeled marginal densities and cross-sections of joint densities with multi-dimensional generalized Laplacians. In the following sections, we explore the semi-parametric class of Gaussian scale mixtures. We show that members of this class satisfy the dual requirements of being heavy-tailed, and exhibiting multiplicative scaling between coefficients.
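The heavy-tailed behavior of this class is easy to check numerically for one member. A minimal sketch, sampling a scalar mixture y = z·u with z the square root of a gamma variable; the gamma index and sample size here are illustrative choices:

```python
import numpy as np

# Sketch: sample a scalar Gaussian scale mixture y = z * u and measure
# its excess kurtosis, which is 0 for a Gaussian and strictly positive
# for any nondegenerate scale mixture (heavy tails).
rng = np.random.default_rng(1)
n = 200_000
z = np.sqrt(rng.gamma(shape=0.5, scale=1.0, size=n))  # multiplier z >= 0
u = rng.standard_normal(n)                            # u ~ N(0, 1)
y = z * u                                             # GSM samples

kurt = np.mean(y**4) / np.mean(y**2) ** 2 - 3.0       # excess kurtosis
print(kurt > 1.0)  # much heavier tails than a Gaussian
```

For this gamma index the excess kurtosis is large (around 6 in expectation), consistent with the sharply peaked, heavy-tailed marginals described in the text.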
We also show that a particular member of this class, in which the multiplier variables are distributed according to a gamma density, captures the range of joint statistical behaviors seen in wavelet coefficients of natural images. We derive an estimator for the multipliers, and show that a nonlinear "normalization" procedure can be used to Gaussianize the wavelet coefficients. Lastly, we form random cascades by linking the multipliers on a multiresolution tree. 1 Scale Mixtures of Gaussians A random vector Y is a Gaussian scale mixture (GSM) if Y =_d zU, where =_d denotes equality in distribution; z ≥ 0 is a scalar random variable; U ~ N(0, Q) is a Gaussian random vector; and z and U are independent. As a consequence, any GSM variable has a density given by an integral: p_Y(y) = ∫ [1/((2π)^{N/2} |z²Q|^{1/2})] exp(−y^T Q^{−1} y / (2z²)) φ_z(z) dz, where φ_z is the probability density of the mixing variable z (henceforth the multiplier). A special case of a GSM is a finite mixture of Gaussians, where z is a discrete random variable. More generally, it is straightforward to provide conditions on either the density [12] or characteristic function of X that ensure it is a GSM, but these conditions do not necessarily provide an explicit form of φ_z. Nevertheless, a number of well-known distributions may be written as Gaussian scale mixtures. For the scalar case, a few of these densities, along with their associated characteristic functions, are listed in Table 1. Each variable is characterized by a scale parameter λ, and a tail parameter. All of the GSM models listed in Table 1 produce heavy-tailed marginal and variance-scaling joint densities. Figure 1. GSMs (dashed lines) fitted to empirical histograms (solid lines) for a wavelet subband of four natural images (baboon, boats, flower, frog).
Below each plot are the parameter values, and the relative entropy between the histogram (with 256 bins) and the model, as a fraction of the histogram entropy. 2 Modeling Natural Images As mentioned in the introduction, natural images exhibit striking non-Gaussian behavior, both in their marginal and joint statistics. In this section, we show that this behavior is consistent with a GSM, using the first of the densities given in Table 1 for illustration. 2.1 Marginal distributions We begin by examining the symmetrized Gamma class as a model for marginal distributions of wavelet coefficients. Figure 1 shows empirical histograms of a particular wavelet subband¹ for four different natural images, along with the best fitting instance of the symmetrized Gamma distribution. Fitting was performed by minimizing the relative entropy (i.e., the Kullback-Leibler divergence, denoted ΔH) between empirical and theoretical histograms. In general, the fits are quite good: the fourth plot shows one of the worst fits in our data set. 2.2 Normalized components For a GSM random vector Y =_d zU, the normalized variable Y/z formed by component-wise division is Gaussian-distributed. In order to test this behavior empirically, we model a given wavelet coefficient y_0 and a collection of neighbors {y_1, ..., y_N} as a GSM vector. For our examples, we use a neighborhood of N = 11 coefficients corresponding to basis functions at 4 adjacent positions, 5 orientations, and 2 scales. Although the multiplier z is unknown, we can estimate it by maximizing the log likelihood of the observed coefficients: ẑ ≜ argmax_z { log p(Y | z) }. Under reasonable conditions, the normalized quantity Y/ẑ should converge in distribution to a Gaussian as the number of neighbors increases. The estimate ẑ is simple to derive: ẑ = argmax_z { log p(Y | z) } = argmin_z { N log(z) + Y^T Q^{−1} Y / (2z²) } = (Y^T Q^{−1} Y / N)^{1/2}. ¹We use the steerable pyramid, an overcomplete multiscale representation described in [13].
The marginal and joint statistics of other multiscale oriented representations are similar. Figure 2. Marginal log histograms (solid lines) of the normalized coefficient v for a single subband of four natural images (baboon, boats, flowers, frog). Each shape is close to an inverted parabola, in agreement with Gaussians (dashed lines) of equivalent empirical variance. Below each plot is the relative entropy between the histogram (with 256 bins) and a variance-matched Gaussian, as a fraction of the total histogram entropy. Here Q ≜ E[U U^T] is the positive definite covariance matrix of the underlying Gaussian vector U. Given the estimate ẑ, we then compute the normalized coefficient v ≜ y_0/ẑ. This is a generalization of the variance normalization proposed by Ruderman and Bialek [1], and the weighted sum of squares normalization procedure used by Simoncelli [5, 14]. Figure 2 shows the marginal histograms (in the log domain) of this normalized coefficient for four natural images, along with Gaussians of equal empirical variance. In contrast to histograms of the raw coefficients (shown in Figure 1), the histograms of normalized coefficients are nearly Gaussian. The GSM model makes a stronger prediction: that normalized quantities corresponding to nearby wavelet pairs should be jointly Gaussian. Specifically, a pair of normalized coefficients should be either correlated or uncorrelated Gaussians, depending on whether the underlying Gaussians U = [u_1 u_2]^T are correlated or uncorrelated. We examine this prediction by collecting joint conditional histograms of normalized coefficients. The top row of Figure 3 shows joint conditional histograms for raw wavelet coefficients (taken from the same four natural images as Figure 2).
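The multiplier estimate behind this normalization has a simple closed form: setting the derivative of N log z + Y^T Q^{-1} Y/(2z²) to zero gives ẑ = (Y^T Q^{-1} Y / N)^{1/2}. A minimal sketch checking the closed form against a brute-force grid search, using a synthetic covariance Q and data vector (not image data):

```python
import numpy as np

# Sketch: verify that z_hat = sqrt(y^T Q^{-1} y / N) minimizes the
# negative log likelihood N*log(z) + y^T Q^{-1} y / (2 z^2).
rng = np.random.default_rng(2)
N = 11                                   # neighborhood size from the text
A = rng.standard_normal((N, N))
Q = A @ A.T + N * np.eye(N)              # synthetic positive definite covariance
y = rng.standard_normal(N)               # synthetic coefficient vector

quad = y @ np.linalg.solve(Q, y)         # y^T Q^{-1} y
z_hat = np.sqrt(quad / N)                # closed-form minimizer

zs = np.linspace(0.01, 10, 100_000)      # brute-force search grid
obj = N * np.log(zs) + quad / (2 * zs**2)
z_grid = zs[np.argmin(obj)]
print(abs(z_hat - z_grid) < 1e-3)
```

The normalized coefficient is then just y_0 divided by this scalar, which is why the procedure generalizes simple variance normalization.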
The first two columns correspond to adjacent spatial scales; though decorrelated, they exhibit the familiar form of multiplicative scaling. The latter two columns correspond to adjacent orientations; in addition to being correlated, they also exhibit the multiplicative form of dependency. The bottom row shows the same joint conditional histograms, after the coefficients have been normalized. Whereas Figure 2 demonstrates that normalized coefficients are close to marginally Gaussian, Figure 3 demonstrates that they are also approximately jointly Gaussian. These observations support the use of a Gaussian scale mixture for modeling natural images.

2.3 Joint distributions

The GSM model is a reasonable approximation for groups of nearby wavelet coefficients. However, the components of GSM vectors are highly dependent, whereas the dependency between wavelet coefficients decreases as (for example) their spatial separation increases. Consequently, the simple GSM model is inadequate for global modeling of coefficients. We are thus led to use a graphical model (such as a tree) that specifies probabilistic relations between the multipliers. The wavelet coefficients themselves are considered observations, and are linked indirectly by their shared dependency on the (hidden) multipliers.

Scale Mixtures of Gaussians and the Statistics of Natural Images

Figure 3. Top row: joint conditional histograms of raw wavelet coefficients for four natural images (baboon: ΔH/H = 0.0643; boats: 0.0743; flowers: 0.0572; frog: 0.0836). Bottom row: joint conditional histograms of normalized pairs of coefficients. Below each plot is the relative entropy between the joint histogram (with 256 × 256 bins) and a covariance-matched Gaussian, as a fraction of the total histogram entropy.

For concreteness, we model the wavelet coefficient at node s as y(s) ≜ ||x(s)|| u(s), where x(s) is Gaussian, so that z ≜ ||x|| is the square root of a gamma variable of index 0.5.
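The claim that z ≜ ||x||, with scalar Gaussian x, is the square root of a gamma variable of index 0.5 is easy to verify numerically; the sketch below (our own illustration) also generates coefficients y(s) = ||x(s)|| u(s) for a single node.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)      # x(s): scalar Gaussian
z = np.abs(x)                   # z = ||x||
u = rng.standard_normal(n)      # u(s): Gaussian
y = z * u                       # wavelet coefficient at node s

# z^2 is chi-square with one degree of freedom, i.e. a gamma variable of
# index 0.5 (scale 2), with population mean 1 and variance 2:
print(np.mean(z**2), np.var(z**2))
# the resulting marginal of y is heavy-tailed (population kurtosis 9):
print(np.mean(y**4) / np.mean(y**2)**2)
```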
For illustration, we assume that the multipliers are linked by a multiscale autoregressive (MAR) process [15] on a tree:

x(s) = μ x(p(s)) + sqrt(1 − μ²) w(s)

where p(s) is the parent of node s. Two wavelet coefficients y(s) and y(t) are linked through the multiplier at their common ancestral node, denoted s∧t. In particular, the joint distributions are given by

y(s) = || μ^{d(s,s∧t)} x(s∧t) + v_1(s) || u(s)
y(t) = || μ^{d(t,s∧t)} x(s∧t) + v_2(t) || u(t)

where v_1, v_2 are independent white noise processes, and d(·,·) denotes the distance between a node and one of its ancestors on the tree (e.g., d(s, p(s)) = 1). For nodes s and t at the same scale and orientation but spatially separated by a distance of Δ(s,t), the distance between s and the common ancestor s∧t grows as d(s, s∧t) ∼ [log₂(Δ(s,t)) + 1]. The first row of Figure 4 shows the range of behaviors seen in joint distributions taken from a wavelet subband of a particular natural image, compared to simulated GSM gamma distributions with μ = 0.92. The first column corresponds to a pair of wavelet filters in quadrature phase (i.e., related by a Hilbert transform). Note that for this pair of coefficients, the contours are nearly circular, an observation that has been previously made by Zetzsche [4]. Nevertheless, these two coefficients are dependent, as shown by the multiplicative scaling in the conditional histogram of the third row. This type of scaling dependency has been extensively documented by Simoncelli [5, 9]. Analogous plots for the simulated Gamma model, with zero spatial separation, are shown in rows 2 and 4. As in the image data, the contours of the joint density are very close to circular, and the conditional distribution shows a striking variance dependency. Figure 4.
Examples of empirically observed distributions of wavelet coefficients, compared with simulated distributions from the GSM gamma model. First row: Empirical joint histograms for the "mountain" image, for four pairs of wavelet coefficients, corresponding to basis functions with spatial separations Δ = {0, 4, 8, 128}. Second row: Simulated joint distributions for Gamma variables with μ = 0.92 and the same spatial separations. Contour lines are drawn at equal intervals of log probability. Third row: Empirical conditional histograms for the "mountain" image. Fourth row: Simulated conditional histograms for Gamma variables. For these conditional distributions, intensity corresponds to probability, except that each column has been independently rescaled to fill the full range of intensities.

The remaining three columns of Figure 4 show pairs of coefficients drawn from identical wavelet filters at spatial displacements Δ = {4, 8, 128}, corresponding to a pair of overlapping filters, a pair of nearby filters, and a distant pair. Note the progression in the contour shapes from off-circular, to a diamond shape, to a concave "star" shape. The model distributions behave similarly, and show the same range of contours for simulated pairs of coefficients. Thus, consistent with empirical observations, a GSM model can produce a range of dependency between pairs of wavelet coefficients. Again, the marginal histograms retain the same form throughout this range.

3 Conclusions

We have proposed the class of Gaussian scale mixtures for modeling natural images. Models in this class typically exhibit heavy-tailed marginals, as well as multiplicative scaling between adjacent coefficients. We have demonstrated that a particular GSM (the symmetrized Gamma family) accounts well for both the marginal and joint distributions of wavelet coefficients from natural images.
More importantly, this model suggests a hidden Markov structure for natural images, in which wavelet coefficients are linked by hidden multipliers. Romberg et al. [10] have made a related proposal using two-state discrete multipliers, corresponding to a finite mixture of Gaussians. We have demonstrated that the hidden multipliers can be locally estimated from measurements of wavelet coefficients. Thus, by conditioning on fixed values of the multipliers, estimation problems may be reduced to the classical Gaussian case. Moreover, we described how to link the multipliers on a multiresolution tree, and showed that such a random cascade model accounts well for the drop-off in dependence of spatially separated coefficients. We are currently exploring EM-like algorithms for the problem of dual parameter and state estimation.

Acknowledgements

We thank Bill Freeman, David Mumford, Mike Schneider, Ilya Pollak, and Alan Willsky for helpful discussions.

References

[1] D. L. Ruderman and W. Bialek. Statistics of natural images: Scaling in the woods. Phys. Rev. Letters, 73(6):814-817, 1994.
[2] D. J. Field. Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. A, 4(12):2379-2394, 1987.
[3] A. H. Tewfik and M. Kim. Correlation structure of the discrete wavelet coefficients of fractional Brownian motion. IEEE Trans. Info. Theory, 38:904-909, Mar. 1992.
[4] C. Zetzsche, B. Wegmann, and E. Barth. Nonlinear aspects of primary vision: Entropy reduction beyond decorrelation. In Int'l Symp. Soc. for Info. Display, volume 24, pages 933-936, 1993.
[5] E. P. Simoncelli. Statistical models for images: Compression, restoration and synthesis. In 31st Asilomar Conf., pages 673-678, Nov. 1997.
[6] S. G. Mallat. A theory for multiresolution signal decomposition: the wavelet representation. IEEE Pat. Anal. Mach. Intell., 11:674-693, July 1989.
[7] E. P.
Simoncelli and E. H. Adelson. Noise removal via Bayesian wavelet coring. In Proc. IEEE ICIP, volume I, pages 379-382, September 1996.
[8] P. Moulin and J. Liu. Analysis of multiresolution image denoising schemes using a generalized Gaussian and complexity priors. IEEE Trans. Info. Theory, 45:909-919, Apr. 1999.
[9] R. W. Buccigrossi and E. P. Simoncelli. Image compression via joint statistical characterization in the wavelet domain. IEEE Trans. Image Proc., 8(12):1688-1701, Dec. 1999.
[10] J. K. Romberg, H. Choi, and R. G. Baraniuk. Bayesian wavelet domain image modeling using hidden Markov trees. In Proc. IEEE ICIP, Kobe, Japan, Oct. 1999.
[11] J. Huang and D. Mumford. Statistics of natural images and models. In CVPR, paper 216, 1999.
[12] D. F. Andrews and C. L. Mallows. Scale mixtures of normal distributions. J. Royal Stat. Soc., 36:99-102, 1974.
[13] E. P. Simoncelli and W. T. Freeman. The steerable pyramid: A flexible architecture for multi-scale derivative computation. In Proc. IEEE ICIP, volume III, pages 444-447, Oct. 1995.
[14] E. P. Simoncelli and O. Schwartz. Image statistics and cortical normalization models. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Adv. Neural Information Processing Systems, volume 11, pages 153-159, Cambridge, MA, May 1999.
[15] K. Chou, A. Willsky, and R. Nikoukhah. Multiscale systems, Kalman filters, and Riccati equations. IEEE Trans. Automatic Control, 39(3):479-492, Mar. 1994.
|
1999
|
55
|
1,704
|
Independent Factor Analysis with Temporally Structured Sources

Hagai Attias
hagai@gatsby.ucl.ac.uk
Gatsby Unit, University College London
17 Queen Square, London WC1N 3AR, U.K.

Abstract

We present a new technique for time series analysis based on dynamic probabilistic networks. In this approach, the observed data are modeled in terms of unobserved, mutually independent factors, as in the recently introduced technique of Independent Factor Analysis (IFA). However, unlike in IFA, the factors are not i.i.d.; each factor has its own temporal statistical characteristics. We derive a family of EM algorithms that learn the structure of the underlying factors and their relation to the data. These algorithms perform source separation and noise reduction in an integrated manner, and demonstrate superior performance compared to IFA.

1 Introduction

The technique of independent factor analysis (IFA) introduced in [1] provides a tool for modeling L'-dim data in terms of L unobserved factors. These factors are mutually independent and combine linearly with added noise to produce the observed data. Mathematically, the model is defined by

y_t = H x_t + u_t,    (1)

where x_t is the vector of factor activities at time t, y_t is the data vector, H is the L' x L mixing matrix, and u_t is the noise. The origins of IFA lie in applied statistics on the one hand and in signal processing on the other hand. Its statistics ancestor is ordinary factor analysis (FA), which assumes Gaussian factors. In contrast, IFA allows each factor to have its own arbitrary distribution, modeled semi-parametrically by a 1-dim mixture of Gaussians (MOG). The MOG parameters, as well as the mixing matrix and noise covariance matrix, are learned from the observed data by an expectation-maximization (EM) algorithm derived in [1]. The signal processing ancestor of IFA is the independent component analysis (ICA) method for blind source separation [2]-[6].
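Model (1) with MOG factors is straightforward to sample from. The sketch below is our own illustration (all dimensions, weights, and parameter values are made up), useful for generating test data for the algorithms that follow.

```python
import numpy as np

def sample_ifa(H, mog, noise_cov, T, rng):
    """Draw T data vectors y_t = H x_t + u_t, with each of the L factors an
    i.i.d. mixture of Gaussians (no temporal structure yet, as in IFA).
    mog is a list of (weights, means, variances) per factor; all values
    here are illustrative, not those of the paper."""
    L = H.shape[1]
    X = np.empty((T, L))
    for i, (w, mu, v) in enumerate(mog):
        s = rng.choice(len(w), size=T, p=w)          # hidden state per time step
        X[:, i] = rng.normal(mu[s], np.sqrt(v[s]))   # emit from the chosen Gaussian
    U = rng.multivariate_normal(np.zeros(H.shape[0]), noise_cov, size=T)
    return X @ H.T + U, X

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 2))                      # L' = 3 sensors, L = 2 factors
mog = [([0.5, 0.5], np.array([-2.0, 2.0]), np.array([0.5, 0.5])),
       ([0.8, 0.2], np.array([0.0, 0.0]), np.array([0.1, 4.0]))]
Y, X = sample_ifa(H, mog, 0.01 * np.eye(3), 10_000, rng)
print(Y.shape)  # (10000, 3)
```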
In ICA, the factors are termed sources, and the task of blind source separation is to recover them from the observed data with no knowledge of the mixing process. The sources in ICA have non-Gaussian distributions, but unlike in IFA these distributions are usually fixed by prior knowledge or have quite limited adaptability. More significant restrictions are that their number is set to the data dimensionality, i.e. L = L' ('square mixing'), the mixing matrix is assumed invertible, and the data are assumed noise-free (u_t = 0). In contrast, IFA allows any L, L' (including more sources than sensors, L > L'), as well as non-zero noise with unknown covariance. In addition, its use of the flexible MOG model often proves crucial for achieving successful separation [1]. Therefore, IFA generalizes and unifies FA and ICA. Once the model has been learned, it can be used for classification (fitting an IFA model for each class), completing missing data, and so on. In the context of blind separation, an optimal reconstruction of the sources x_t from data is obtained [1] using a MAP estimator. However, IFA and its ancestors suffer from the following shortcoming: They are oblivious to temporal information since they do not attempt to model the temporal statistics of the data (but see [4] for square, noise-free mixing). In other words, the model learned would not be affected by permuting the time indices of {y_t}. This is unfortunate since modeling the data as a time series would facilitate filtering and forecasting, as well as more accurate classification. Moreover, for source separation applications, learning temporal statistics would provide additional information on the sources, leading to cleaner source reconstructions. To see this, one may think of the problem of blind separation of noisy data in terms of two components: source separation and noise reduction. A possible approach might be the following two-stage procedure.
First, perform noise reduction using, e.g., Wiener filtering. Second, perform source separation on the cleaned data using, e.g., an ICA algorithm. Notice that this procedure directly exploits temporal (second-order) statistics of the data in the first stage to achieve stronger noise reduction. An alternative approach would be to exploit the temporal structure of the data indirectly, by using a temporal source model. In the resulting single-stage algorithm, the operations of source separation and noise reduction are coupled. This is the approach taken in the present paper. In the following, we present a new approach to the independent factor problem based on dynamic probabilistic networks. In order to capture temporal statistical properties of the observed data, we describe each source by a hidden Markov model (HMM). The resulting dynamic model describes a multivariate time series in terms of several independent sources, each having its own temporal characteristics. Section 2 presents an EM learning algorithm for the zero-noise case, and section 3 presents an algorithm for the case of isotropic noise. The case of non-isotropic noise turns out to be computationally intractable; section 4 provides an approximate EM algorithm based on a variational approach.

Notation: The multivariate Gaussian density is denoted by G(z, Σ) = |2πΣ|^{-1/2} exp(−z^T Σ^{-1} z / 2). We work with T-point time blocks denoted x_{1:T} = {x_t}_{t=1}^T. The i-th coordinate of x_t is x^i_t. For a function f, ⟨f(x_{1:T})⟩ denotes averaging over an ensemble of x_{1:T} blocks.

2 Zero Noise

The MOG source model employed in IFA [1] has the advantages that (i) it is capable of approximating arbitrary densities, and (ii) it can be learned efficiently from data by EM. The Gaussians correspond to the hidden states of the sources, labeled by s. Assume that at time t, source i is in state s^i_t = s. Its signal x^i_t is then generated by sampling from a Gaussian distribution with mean μ^i_s and variance v^i_s.
In order to capture temporal statistics of the data, we endow the sources with temporal structure by introducing a transition matrix a^i_{s's} between the states. Focusing on a time block t = 1, ..., T, the resulting probabilistic model is defined by

p(s^i_t = s | s^i_{t-1} = s') = a^i_{s's},    p(s^i_0 = s) = π^i_s,
p(x^i_t | s^i_t = s) = G(x^i_t − μ^i_s, v^i_s),
p(y_{1:T}) = |det G|^T p(x_{1:T}),    (2)

where p(x_{1:T}) is the joint density of all sources x^i_t, i = 1, ..., L at all time points, and the last equation follows from x_t = G y_t with G = H^{-1} being the unmixing matrix. As usual in the noise-free scenario (see [2]; section 7 of [1]), we are assuming that the mixing matrix is square and invertible. The graphical model for the observed density p(y_{1:T} | W) defined by (2) is parametrized by W = {G_ij, μ^i_s, v^i_s, π^i_s, a^i_{s's}}. This model describes each source as a first-order HMM; it reduces to a time-independent model if a^i_{s's} = π^i_s. Whereas temporal structure can be described by other means, e.g. a moving-average [4] or autoregressive [6] model, the HMM is advantageous since it models high-order temporal statistics and facilitates EM learning. Omitting the derivation, maximization with respect to G_ij results in the incremental update rule

δG = ε G − (ε/T) Σ_{t=1}^T φ(x_t) x_t^T G,    (3)

where φ(x^i_t) = Σ_s γ^i_t(s) (x^i_t − μ^i_s)/v^i_s, and the natural gradient [3] was used; ε is an appropriately chosen learning rate. For the source parameters we obtain the update rules

μ^i_s = Σ_t γ^i_t(s) x^i_t / Σ_t γ^i_t(s),    a^i_{s's} = Σ_t ξ^i_t(s', s) / Σ_t γ^i_{t-1}(s'),    (4)

with the initial probabilities updated via π^i_s = γ^i_0(s). We used the standard HMM notation γ^i_t(s) = p(s^i_t = s | x^i_{1:T}), ξ^i_t(s', s) = p(s^i_{t-1} = s', s^i_t = s | x^i_{1:T}). These posterior densities are computed in the E-step for each source, which is given in terms of the data via x^i_t = Σ_j G_ij y^j_t, using the forward-backward procedure [7]. The algorithm (3-4) may be used in several possible generalized EM schemes.
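Rule (3) can be written compactly in matrix form. The sketch below is our own; the state posteriors `gammas` would come from a forward-backward pass per source (not shown), and all names are illustrative.

```python
import numpy as np

def update_G(G, Y, gammas, mu, v, eps=0.01):
    """One natural-gradient step of rule (3):
        G <- G + eps * G - (eps / T) * sum_t phi(x_t) x_t^T G,
    with phi(x_t^i) = sum_s gamma_t^i(s) (x_t^i - mu_s^i) / v_s^i.
    gammas[i] is the (T, S) matrix of state posteriors for source i,
    assumed to come from a forward-backward pass."""
    X = Y @ G.T                        # x_t = G y_t, one row per time step
    T, L = X.shape
    phi = np.empty_like(X)
    for i in range(L):
        phi[:, i] = np.sum(gammas[i] * (X[:, [i]] - mu[i]) / v[i], axis=1)
    return G + eps * G - (eps / T) * (phi.T @ X) @ G

# With whitened data, a single zero-mean unit-variance state per source, and
# G at a stationary point, the update leaves G (numerically) unchanged:
G = np.eye(2)
Y = np.sqrt(2.0) * np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
gammas = [np.ones((4, 1)), np.ones((4, 1))]
mu = [np.array([0.0])] * 2
v = [np.array([1.0])] * 2
print(update_G(G, Y, gammas, mu, v, eps=0.1))  # stays (numerically) at the identity
```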
An efficient one is given by the following two-phase procedure: (i) freeze the source parameters and learn the separating matrix G using (3); (ii) freeze G and learn the source parameters using (4), then go back to (i) and repeat. Notice that the rule (3) is similar to a natural gradient version of Bell and Sejnowski's ICA rule [2]; in fact, the two coincide for time-independent sources, where φ(x^i) = −∂ log p(x^i)/∂x^i. We also recognize (4) as the Baum-Welch method. Hence, in phase (i) our algorithm separates the sources using a generalized ICA rule, whereas in phase (ii) it learns an HMM for each source.

Remark. Often one would like to model a given L'-variable time series in terms of a smaller number L ≤ L' of factors. In the framework of our noise-free model y_t = H x_t, this can be achieved by applying the above algorithm to the L largest principal components of the data; notice that if the data were indeed generated by L factors, the remaining L' − L principal components would vanish. Equivalently, one may apply the algorithm to the data directly, using a non-square L x L' unmixing matrix G.

Results. Figure 1 demonstrates the performance of the above method on a 4 x 4 mixture of speech signals, which were passed through a non-linear function to modify their distributions. This mixture is inseparable to ICA because the source model used by the latter does not fit the actual source densities (see discussion in [1]). We also applied our dynamic network to a mixture of speech signals whose distributions were made Gaussian by an appropriate non-linear transformation.

Figure 1: Left: Two of the four source distributions. Middle: Outputs of the EM algorithm (3-4) are nearly independent. Right: The outputs of ICA [2] are correlated.
Since temporal information is crucial for separation in this case (see [4], [6]), this mixture is inseparable to ICA and IFA; however, the algorithm (3-4) accomplished separation successfully.

3 Isotropic Noise

We now turn to the case of non-zero noise u_t ≠ 0. We assume that the noise is white and has a zero-mean Gaussian distribution with covariance matrix Λ. In general, this case is computationally intractable (see section 4). The reason is that the E-step requires computing the posterior distribution p(s_{0:T}, x_{1:T} | y_{1:T}) not only over the source states (as in the zero-noise case) but also over the source signals, and this posterior has a quite complicated structure. We now show that if we assume isotropic noise, i.e. Λ_ij = λ δ_ij, as well as square invertible mixing as above, this posterior simplifies considerably, making learning and inference tractable. This is done by adapting an idea suggested in [8] to our dynamic probabilistic network. We start by pre-processing the data using a linear transformation that makes their covariance matrix unity, i.e., ⟨y_t y_t^T⟩ = I ('sphering'). Here ⟨·⟩ denotes averaging over T-point time blocks. From (1) it follows that H S H^T = λ' I, where S = ⟨x_t x_t^T⟩ is the diagonal covariance matrix of the sources, and λ' = 1 − λ. This, for a square invertible H, implies that H^T H is diagonal. In fact, since the unobserved sources can be determined only to within a scaling factor, we can set the variance of each source to unity and obtain the orthogonality property H^T H = λ' I. It can be shown that the source posterior now factorizes into a product over the individual sources, p(s_{0:T}, x_{1:T} | y_{1:T}) = Π_i p(s^i_{0:T}, x^i_{1:T} | y_{1:T}), where

p(s^i_{0:T}, x^i_{1:T} | y_{1:T}) ∝ [ Π_{t=1}^T G(x^i_t − η^i_t, σ^i_t) ν^i_t p(s^i_t | s^i_{t-1}) ] ν^i_0 p(s^i_0).    (5)

The means and variances at time t in (5), as well as the quantities ν^i_t, depend on both the data y_t and the states s^i_t; in particular, η^i_t = (v^i_s Σ_j H_ji y^j_t + λ μ^i_s)/(λ' v^i_s + λ)
and σ^i_t = λ v^i_s / (λ' v^i_s + λ), using s = s^i_t; the expressions for the ν^i_t are omitted. The transition probabilities are the same as in (2). Hence, the posterior distribution (5) effectively defines a new HMM for each source, with y_t-dependent emission and transition probabilities. To derive the learning rule for H, we should first compute the conditional mean x̂_t of the source signals at time t given the data. This can be done recursively using (5) as in the forward-backward procedure. We then obtain the update for H from

C = (1/T) Σ_{t=1}^T y_t x̂_t^T.    (6)

The resulting fractional form, which follows from imposing the orthogonality constraint H^T H = λ' I using Lagrange multipliers, can be computed via a diagonalization procedure. The source parameters are computed using a learning rule (omitted) similar to the noise-free rule (4). It is easy to derive a learning rule for the noise level λ as well; in fact, the ordinary FA rule would suffice. We point out that, while this algorithm has been derived for the case L = L', it is perfectly well defined (though sub-optimal: see below) for L ≤ L'.

4 Non-Isotropic Noise

The general case of non-isotropic noise and non-square mixing is computationally intractable. This is because the exact E-step requires summing over all possible source configurations (s^1_{t_1}, ..., s^L_{t_L}) at all times t_1, ..., t_L = 1, ..., T. The intractability problem stems from the fact that, while the sources are independent, the sources conditioned on a data vector y_{1:T} are correlated, resulting in a large number of hidden configurations. This problem does not arise in the noise-free case, and can be avoided in the case of isotropic noise and square mixing using the orthogonality property; in both cases, the exact posterior over the sources factorizes. The EM algorithm derived below is based on a variational approach.
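The sphering pre-processing of section 3 (⟨y_t y_t^T⟩ = I) can be implemented with any symmetric whitening transform; the eigendecomposition below is one standard choice, not necessarily the one used in the paper.

```python
import numpy as np

def sphere(Y):
    """Linearly transform the data (rows are time steps) so that the
    empirical covariance is the identity ('sphering')."""
    Yc = Y - Y.mean(axis=0)
    C = Yc.T @ Yc / len(Yc)
    evals, evecs = np.linalg.eigh(C)
    W = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T   # C^{-1/2}
    return Yc @ W

rng = np.random.default_rng(0)
Y = rng.standard_normal((5000, 3)) @ rng.standard_normal((3, 3))  # correlated data
Ys = sphere(Y)
print(np.round(Ys.T @ Ys / len(Ys), 6))  # ≈ identity
```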
This approach was introduced in [9] in the context of sigmoid belief networks, but constitutes a general framework for ML learning in intractable probabilistic networks; it was used in a HMM context in [10]. The idea is to use an approximate but tractable posterior to place a lower bound on the likelihood, and optimize the parameters by maximizing this bound. A starting point for deriving a bound on the likelihood L is Neal and Hinton's [11] formulation of the EM algorithm:

L = log p(y_{1:T}) ≥ Σ_{t=1}^T E_q log p(y_t | x_t) + Σ_{i=1}^L E_q log p(s^i_{0:T}, x^i_{1:T}) − E_q log q,    (7)

where E_q denotes averaging with respect to an arbitrary posterior density over the hidden variables given the observed data, q = q(s_{0:T}, x_{1:T} | y_{1:T}). Exact EM, as shown in [11], is obtained by maximizing the bound (7) with respect to both the posterior q (corresponding to the E-step) and the model parameters W (M-step). However, the resulting q is the true but intractable posterior. In contrast, in variational EM we choose a q that differs from the true posterior, but facilitates a tractable E-step.

E-Step. We use q(s_{0:T}, x_{1:T} | y_{1:T}) = Π_i q(s^i_{0:T} | y_{1:T}) Π_t q(x_t | y_{1:T}), parametrized as

q(s^i_t = s | s^i_{t-1} = s', y_{1:T}) ∝ λ^i_{s,t} a^i_{s's},    q(s^i_0 = s | y_{1:T}) ∝ λ^i_{s,0} π^i_s,
q(x_t | y_{1:T}) = G(x_t − ρ_t, Σ_t).    (8)

Thus, the variational transition probabilities in (8) are described by multiplying the original ones a^i_{s's} by the parameters λ^i_{s,t}, subject to the normalization constraints. The source signals x_t at time t are jointly Gaussian with mean ρ_t and covariance Σ_t. The means, covariances and transition probabilities are all time- and data-dependent, i.e., ρ_t = f(y_{1:T}, t), etc. This parametrization scheme is motivated by the form of the posterior in (5); notice that the quantities η^i_t, σ^i_t, ν^i_{s,t} there become the variational parameters ρ^i_t, Σ^{ij}_t, λ^i_{s,t} of (8). A related scheme was used in [10] in a different context.
Since these parameters will be adapted independently of the model parameters, the non-isotropic algorithm is expected to give superior results compared to the isotropic one.

Figure 2: Left: quality of the model parameter estimates, plotted against SNR (dB). Right: quality of the source reconstructions, plotted against SNR (dB). (See text.)

Of course, in the true posterior the x_t are correlated, both temporally among themselves and with s_t, and the latter do not factorize. To best approximate it, the variational parameters V = {ρ^i_t, Σ^{ij}_t, λ^i_{s,t}} are optimized to maximize the bound on L, or equivalently to minimize the KL distance between q and the true posterior. This requirement leads to the fixed point equations

ρ_t = (H^T Λ^{-1} H + B_t)^{-1} (H^T Λ^{-1} y_t + b_t),
Σ_t = (H^T Λ^{-1} H + B_t)^{-1},
λ^i_{s,t} = (1/z^i_t) exp[ −(1/2) log v^i_s − ((ρ^i_t − μ^i_s)² + Σ^{ii}_t) / (2 v^i_s) ],    (9)

where B^{ij}_t = Σ_s [γ^i_t(s)/v^i_s] δ_ij, b^i_t = Σ_s γ^i_t(s) μ^i_s / v^i_s, and the factors z^i_t ensure normalization. The HMM quantities γ^i_t(s) are computed by the forward-backward procedure using the variational transition probabilities (8). The variational parameters are determined by solving eqs. (9) iteratively for each block y_{1:T}; in practice, we found that less than 20 iterations are usually required for convergence.

M-Step. The update rules for W are given for the mixing parameters by

H = (Σ_t y_t ρ_t^T) (Σ_t (ρ_t ρ_t^T + Σ_t))^{-1},    Λ = (1/T) Σ_t (y_t y_t^T − y_t ρ_t^T H^T),    (10)

and for the source parameters by

μ^i_s = Σ_t γ^i_t(s) ρ^i_t / Σ_t γ^i_t(s),    a^i_{s's} = Σ_t ξ^i_t(s', s) / Σ_t γ^i_{t-1}(s'),
v^i_s = Σ_t γ^i_t(s) ((ρ^i_t − μ^i_s)² + Σ^{ii}_t) / Σ_t γ^i_t(s),    (11)

where the ξ^i_t(s', s) are computed using the variational transition probabilities (8). Notice that the learning rules for the source parameters have the Baum-Welch form, in spite of the correlations between the conditioned sources.
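The Gaussian part of the fixed point equations (9) is a single linear solve per time step. A sketch (our own, with illustrative shapes; the λ update and the outer iteration loop are omitted):

```python
import numpy as np

def gaussian_e_step(H, Lam, y_t, gamma_t, mu, v):
    """Compute rho_t and Sigma_t of eq. (9):
        Sigma_t = (H^T Lam^{-1} H + B_t)^{-1},
        rho_t   = Sigma_t (H^T Lam^{-1} y_t + b_t),
    with B_t = diag_i( sum_s gamma_t[i][s] / v[i][s] ) and
         b_t[i] = sum_s gamma_t[i][s] * mu[i][s] / v[i][s]."""
    L = len(gamma_t)
    B = np.diag([np.sum(gamma_t[i] / v[i]) for i in range(L)])
    b = np.array([np.sum(gamma_t[i] * mu[i] / v[i]) for i in range(L)])
    HtLi = H.T @ np.linalg.inv(Lam)
    Sigma = np.linalg.inv(HtLi @ H + B)
    rho = Sigma @ (HtLi @ y_t + b)
    return rho, Sigma

# With H = Lam = I and a single zero-mean, unit-variance state per source,
# the posterior mean reduces to y_t / 2:
rho, Sigma = gaussian_e_step(np.eye(2), np.eye(2),
                             np.array([2.0, 4.0]),
                             [np.array([1.0]), np.array([1.0])],
                             [np.array([0.0]), np.array([0.0])],
                             [np.array([1.0]), np.array([1.0])])
print(rho, Sigma)
```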
In our variational approach, these correlations are hidden in V, as manifested by the fact that the fixed point equations (9) couple the parameters V across time points (since γ^i_t(s) depends on λ^i_{s,t=1:T}) and sources.

Source Reconstruction. From q(x_t | y_{1:T}) (8), we observe that the MAP source estimate is given by x̂_t = ρ_t(y_{1:T}), and depends on both W and V.

Results. The above algorithm is demonstrated on a source separation task in Figure 2. We used 6 speech signals, transformed by non-linearities to have arbitrary one-point densities, and mixed by a random 8 x 6 matrix H_0. Different signal-to-noise (SNR) levels were used. The error in the estimated H (left, solid line) is quantified by the size of the non-diagonal elements of (H^T H)^{-1} H^T H_0 relative to the diagonal; the results obtained by IFA [1], which does not use temporal information, are plotted for reference (dotted line). The mean squared error of the reconstructed sources (right, solid line) and the corresponding IFA result (right, dashed line) are also shown. The estimate and reconstruction errors of this algorithm are consistently smaller than those of IFA, reflecting the advantage of exploiting the temporal structure of the data. Additional experiments with different numbers of sources and sensors gave similar results. Notice that this algorithm, unlike the previous two, allows both L ≤ L' and L > L'. We also considered situations where the number of sensors was smaller than the number of sources; the separation quality was good, although, as expected, less so than in the opposite case.

5 Conclusion

An important issue that has not been addressed here is model selection. When applying our algorithms to an arbitrary dataset, the number of factors and of HMM states for each factor should be determined. Whereas this could be done, in principle, using cross-validation, the required computational effort would be fairly large.
However, in a recent paper [12] we develop a new framework for Bayesian model selection, as well as model averaging, in probabilistic networks. This framework, termed Variational Bayes, proposes an EM-like algorithm which approximates full posterior distributions over not only hidden variables but also parameters and model structure, as well as predictive quantities, in an analytical manner. It is currently being applied to the algorithms presented here with good preliminary results. One field in which our approach may find important applications is speech technology, where it suggests building more economical signal models based on combining independent low-dimensional HMMs, rather than fitting a single complex HMM. It may also contribute toward improving recognition performance in noisy, multi-speaker, reverberant conditions which characterize real-world auditory scenes.

References

[1] Attias, H. (1999). Independent factor analysis. Neur. Comp. 11, 803-851.
[2] Bell, A.J. & Sejnowski, T.J. (1995). An information-maximization approach to blind separation and blind deconvolution. Neur. Comp. 7, 1129-1159.
[3] Amari, S., Cichocki, A. & Yang, H.H. (1996). A new learning algorithm for blind signal separation. Adv. Neur. Info. Proc. Sys. 8, 757-763 (Ed. by Touretzky, D.S. et al). MIT Press, Cambridge, MA.
[4] Pearlmutter, B.A. & Parra, L.C. (1997). Maximum likelihood blind source separation: A context-sensitive generalization of ICA. Adv. Neur. Info. Proc. Sys. 9, 613-619 (Ed. by Mozer, M.C. et al). MIT Press, Cambridge, MA.
[5] Hyvärinen, A. & Oja, E. (1997). A fast fixed-point algorithm for independent component analysis. Neur. Comp. 9, 1483-1492.
[6] Attias, H. & Schreiner, C.E. (1998). Blind source separation and deconvolution: the dynamic component analysis algorithm. Neur. Comp. 10, 1373-1424.
[7] Rabiner, L. & Juang, B.-H. (1993). Fundamentals of Speech Recognition. Prentice Hall, Englewood Cliffs, NJ.
[8] Lee, D.D. & Sompolinsky, H. (1999), unpublished; D.D.
Lee, personal communication.
[9] Saul, L.K., Jaakkola, T., and Jordan, M.I. (1996). Mean field theory of sigmoid belief networks. J. Art. Int. Res. 4, 61-76.
[10] Ghahramani, Z. & Jordan, M.I. (1997). Factorial hidden Markov models. Mach. Learn. 29, 245-273.
[11] Neal, R.M. & Hinton, G.E. (1998). A view of the EM algorithm that justifies incremental, sparse, and other variants. Learning in Graphical Models, 355-368 (Ed. by Jordan, M.I.). Kluwer Academic Press.
[12] Attias, H. (2000). A variational Bayesian framework for graphical models. Adv. Neur. Info. Proc. Sys. 12 (Ed. by Leen, T. et al). MIT Press, Cambridge, MA.
|
1999
|
56
|
1,705
|
Managing Uncertainty in Cue Combination

Zhiyong Yang
Department of Neurobiology, Box 3209
Duke University Medical Center
Durham, NC 27710
zhyyang@duke.edu

Richard S. Zemel
Department of Psychology
University of Arizona
Tucson, AZ 85721
zemel@u.arizona.edu

Abstract

We develop a hierarchical generative model to study cue combination. The model maps a global shape parameter to local cue-specific parameters, which in turn generate an intensity image. Inferring shape from images is achieved by inverting this model. Inference produces a probability distribution at each level; using distributions rather than a single value of underlying variables at each stage preserves information about the validity of each local cue for the given image. This allows the model, unlike standard combination models, to adaptively weight each cue based on general cue reliability and specific image context. We describe the results of a cue combination psychophysics experiment we conducted that allows a direct comparison with the model. The model provides a good fit to our data and a natural account for some interesting aspects of cue combination.

Understanding cue combination is a fundamental step in developing computational models of visual perception, because many aspects of perception naturally involve multiple cues, such as binocular stereo, motion, texture, and shading. It is often formulated as a problem of inferring or estimating some relevant parameter, e.g., depth, shape, position, by combining estimates from individual cues. An important finding of psychophysical studies of cue combination is that cues vary in the degree to which they are used in different visual environments. Weights assigned to estimates derived from a particular cue seem to reflect its estimated reliability in the current scene and viewing conditions.
For example, motion and stereo are weighted approximately equally at near distances, but motion is weighted more at far distances, presumably due to distance limits on binocular disparity [3]. Experiments have also found these weightings sensitive to image manipulations; if a cue is weakened, such as by adding noise, then the uncontaminated cue is utilized more in making depth judgments [9]. A recent study [2] has shown that observers can adjust the weighting they assign to a cue based on its relative utility for a particular task. From these and other experiments, we can identify two types of information that determine relative cue weightings: (1) cue reliability: its relative utility in the context of the task and general viewing conditions; and (2) region informativeness: cue information available locally in a given image. A central question in computational models of cue combination then concerns how these forms of uncertainty can be combined. We propose a hierarchical generative model. Generative models have a rich history in cue combination, as they underlie models of Bayesian perception that have been developed in this area [10, 5]. The novelty in the generative model proposed here lies in its hierarchical nature and use of distributions throughout, which allows for both context-dependent and image-specific uncertainty to be combined in a principled manner. Our aims in this paper are dual: to develop a combination model that incorporates cue reliability and region informativeness (estimated across and within images), and to use this model to account for data and provide predictions for psychophysical experiments. Another motivation for the approach here stems from our recent probabilistic framework [11], which posits that every step of processing entails the representation of an entire probability distribution, rather than just a single value of the relevant underlying variable(s).
Here we use separate local probability distributions for each cue estimated directly from an image. Combination then entails transforming representations and integrating distributions across both space and cues, taking across- and within-image uncertainty into account. 1 IMAGE GENERATION In this paper we study the case of combining shading and texture. Standard shape-from-shading models exclude texture [1, 8], while standard shape-from-texture models exclude shading [7]. Experimental results and computational arguments have supported a strong interaction between these cues [10], but no model accounting for this interaction has yet been worked out. The shape used in our experiments is a simple surface:

Z = B(1 - x²), |x| ≤ 1, |y| ≤ 1   (1)

where Z is the height from the xy plane. B is the only shape parameter. Our image formation model is a hierarchical generative model (see Figure 1). The top layer contains the global parameter B. The second layer contains local shading and texture parameters S, T = {S_i, T_i}, where i indexes image regions. The generation of local cues from a global parameter is intended to allow local uncertainties to be introduced separately into the cues. This models specific conditions in realistic images, such as shading uncertainty due to shadows or specularities, and texture uncertainty when prior assumptions such as isotropy are violated [4]. Here we introduce uncertainty by adding independent local noise to the underlying shape parameter; this manipulation is less realistic but easier to control. Figure 1: Left: The generative model of image formation: Global Shape (B) generates Local Shading ({S}) and Local Texture ({T}), which together generate the Image (I). Right: Two sample images generated by the image formation procedure. B = 1.4 in both. Left: σ_s = 0.05, σ_t = 0. Right: σ_s = 0, σ_t = 0.05. The local cues are sampled from Gaussian distributions: p(S_i|B) = N(f(B); σ_s); p(T_i|B) = N(g(B); σ_t).
f(B), g(B) describe how the local cue parameters depend on the shape parameter B, while σ_s and σ_t represent the degree of noise in each cue. In this paper, to simplify the generation process, we set f(B) = g(B) = B. From {S_i} and {T_i}, two surfaces are generated; these are essentially two separate noisy local versions of B. The intensity image combines these surfaces. A set of same-intensity texels sampled from a uniform distribution are mapped onto the texture surface, and then projected onto the image plane under orthogonal projection. The intensity of surface pixels not contained within these texels is generated from the shading surface using Lambertian shading. Each image is composed of 10 x 10 non-overlapping regions, and contains 400 x 400 pixels. Figure 1 shows two images generated by this procedure. 2 COMBINATION MODEL We create a combination, or recognition, model by inverting the generative model of Figure 1 to infer the shape parameter B from the image. An important aspect of the combination model is the use of distributions to represent parameter estimates at each stage. This preserves uncertainty information at each level, and allows it to play a role in subsequent inference. The overall goal of combination is to infer an estimate of B given some image I. We derive our main inference equation using a Bayesian integration over distributions:

P(B|I) = ∫ P(B|S, T) P(S, T|I) dS dT   (2)
P(S, T|I) ≈ ∏_i P(S_i|I) P(T_i|I)   (3)
P(B|S, T) = P(B) P(S, T|B) / ∫ P(B) P(S, T|B) dB ∝ ∏_i P(S_i|B) P(T_i|B)   (4)

To simplify the two components we have assumed that the prior over B is uniform, and that the S, T are conditionally independent given B, and given the image. This third assumption is dubious but is not essential in the model, as discussed below. We now consider these two components in turn.
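Under the paper's simplification f(B) = g(B) = B, each per-region cue estimate contributes a Gaussian factor in B, so Equations (2)-(4) can be evaluated on a discrete grid of B values. The sketch below is illustrative only (not the authors' code); the function name and the toy means/variances are our own.

```python
import numpy as np

def posterior_over_B(s_means, s_vars, t_means, t_vars, B_grid):
    """Discretized Eqs. (2)-(4): with a uniform prior on B and conditionally
    independent per-region cue estimates, the log-posterior is a sum of
    Gaussian log-likelihood terms over regions and cues (here f(B)=g(B)=B)."""
    logp = np.zeros_like(B_grid)
    for m, v in zip(s_means, s_vars):          # shading terms P(S_i | B)
        logp += -0.5 * (B_grid - m) ** 2 / v
    for m, v in zip(t_means, t_vars):          # texture terms P(T_i | B)
        logp += -0.5 * (B_grid - m) ** 2 / v
    p = np.exp(logp - logp.max())              # subtract max for stability
    return p / p.sum()

B_grid = np.linspace(0.5, 2.5, 401)
post = posterior_over_B([1.35, 1.45], [0.01, 0.04], [1.4], [0.02], B_grid)
B_hat = float((B_grid * post).sum())           # posterior mean estimate of B
```

A less reliable cue estimate (larger variance) automatically contributes less to the posterior, which is the adaptive weighting behavior described above.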
2.1 Obtaining local cue-specific representations from an image One component in the inference equation, P(S, T|I), describes local cue-dependent information in the particular image I. We first define intermediate representations S, T that are dependent on shading and texture cues, respectively. The shading representation is the curvature of a horizontal section: S = f(B) = 2B(1 + 4x²B²)^{-3/2}. The texture representation is the cosine of the surface slant: T = g(B) = (1 + 4x²B²)^{-1/2}. Note that these S, T variables do not match those used in the generative model; ideally we could have used these cue-dependent variables, but generating images from them proved difficult. Some image pre-processing must take place in order to estimate values and uncertainties for these particular local variables. The approach we adopt involves a simple statistical matching procedure, similar to k-nearest neighbors, applied to local image patches. After applying Gaussian smoothing and band-pass filtering to the image, two representations of each patch are obtained using separate shading and texture filters. For shading, image patches are represented by forming a histogram of ∇I; for texture, the patch is represented by the mean and standard deviation of the amplitude of Gabor filter responses at 4 scales and orientations. This representation of a shading patch is then compared to a database of similar patch representations. Entries in the shading database are formed by first selecting a particular value of B and σ_s, generating an image patch, and applying the appropriate filters. Thus S = f(B) and the noise level σ_s are known for each entry, allowing an estimate of these variables for the new patch to be formed as a linear combination of the entries with similar representations. An analogous procedure, utilizing a separate database, allows T and an uncertainty estimate to be derived for texture.
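The matching step above can be sketched as a distance-weighted k-nearest-neighbour lookup. This is our own minimal stand-in, not the authors' implementation: the toy database uses a 1-D "feature" in place of the real filter-bank responses, and all names are illustrative.

```python
import numpy as np

def match_patch(features, db_features, db_S, db_sigma, k=5):
    """k-nearest-neighbour stand-in for the database matching step:
    estimate the local cue value S and its noise level sigma for a new
    patch as a distance-weighted average of the k most similar entries."""
    d = np.linalg.norm(db_features - features, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-8)              # closer entries get more weight
    w = w / w.sum()
    return float(w @ db_S[idx]), float(w @ db_sigma[idx])

# toy database: one entry per known (B, sigma) pair, feature = S itself
db_S = np.linspace(0.5, 2.5, 21)
db_sigma = np.full(21, 0.05)
db_features = db_S[:, None]                # stand-in for filter responses
S_hat, sigma_hat = match_patch(np.array([1.4]), db_features, db_S, db_sigma)
```

Because each database entry was generated from a known (B, σ) pair, the same lookup returns both the cue value and its uncertainty, which is what feeds the Gaussian approximations below.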
Both databases have 60 different B, σ pairs, and 10 samples of each pair. Based on this procedure we obtain for each image patch mean values M_i^s, M_i^t and uncertainty values V_i^s, V_i^t for S_i, T_i. These determine P(I|S_i), P(I|T_i), which are approximated as Gaussians. Taking into account the Gaussian priors for S_i, T_i:

P(S_i|I) ∝ P(I|S_i) P(S_i) ∝ exp(-(V_i^s/2)(S_i - M_i^s)²) exp(-(V_0^s/2)(S_i - M_0^s)²)   (5)
P(T_i|I) ∝ P(I|T_i) P(T_i) ∝ exp(-(V_i^t/2)(T_i - M_i^t)²) exp(-(V_0^t/2)(T_i - M_0^t)²)   (6)

Note that the independence assumption of Equation 3 is not necessary, as the matching procedure could use a single database indexed by both the shading and texture representations of a patch. 2.2 Transforming and combining cue-specific local representations The other component of the inference equation describes the relationship between the intermediate, cue-specific representations S, T and the shape parameter B:

P(S|B) ∝ exp(-(V_b^s/2)(S - f(B))²) ; P(T|B) ∝ exp(-(V_b^t/2)(T - g(B))²)   (7)

The two parameters V_b^s, V_b^t in this equation describe the uncertainty in the relationship between the intermediate parameters S, T and B; they are invariant across space. These two, along with the prior parameters M_0^s, M_0^t, V_0^s, V_0^t, are the free parameters of this model. Note that this combination model neatly accounts for both types of cue validity we identified: the variance in P(S|B) describes the general uncertainty of a given cue, while the local variance in P(S|I) describes the image-specific uncertainty of the cue. Combining Equations 3-7, and completing the integral in Equation 2, we have:

P(B|I) ∝ exp[ -(1/2) Σ_i ( ν_i^s f(B)² + ν_i^t g(B)² - 2 μ_i^s f(B) - 2 μ_i^t g(B) ) ]   (8)

where ν_i^s, μ_i^s (and ν_i^t, μ_i^t) are the combined precision and precision-weighted mean of the shading (texture) terms for region i. Thus our model infers from any image a mean U and variance Σ² for B as nonlinear combinations of the cue estimates, taking into account the various forms of uncertainty. 3 A CUE COMBINATION PSYCHOPHYSICS EXPERIMENT We have conducted psychophysical experiments using stimuli generated by the procedure described above.
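With f(B) = g(B) = B, the quadratic form in Equation (8) is Gaussian in B, so the posterior mean U and variance Σ² have the familiar precision-weighted closed form. The sketch below is ours, not the authors' code, and the numbers are illustrative.

```python
import numpy as np

def combine_gaussian_cues(means, precisions):
    """Precision-weighted combination of Gaussian cue terms: with
    f(B) = g(B) = B, Eq. (8) is Gaussian in B, whose precision is the sum
    of the per-term precisions and whose mean U is the precision-weighted
    average of the per-term means."""
    means = np.asarray(means, dtype=float)
    precisions = np.asarray(precisions, dtype=float)
    total_precision = precisions.sum()
    U = float((precisions * means).sum() / total_precision)  # posterior mean
    var = 1.0 / total_precision                              # posterior variance
    return U, var

# e.g. one shading and one texture estimate per region, two regions:
U, var = combine_gaussian_cues([1.35, 1.45, 1.40, 1.38],
                               [100.0, 25.0, 50.0, 40.0])
```

A term with a large precision (a reliable, informative region) pulls U toward its own mean; an uninformative term contributes little, which is exactly the adaptive weighting the model is meant to exhibit.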
In each experimental trial, a stimulus image and four views of a mesh surface are displayed side-by-side on a computer screen. The subject's task is to manipulate the curvature of the mesh to match the stimulus. The final shape of the mesh surface describes the subject's estimate of the shape parameter B on that trial. The subject's variance is computed across repeated trials with an identical stimulus. In a given block of trials, the stimulus may contain only shading information (no texture elements), only texture information (uniform shading), or both. The local cue noise (σ_s, σ_t) is zero in some blocks, non-zero in others. The primary experimental findings (see Figure 2) are: • Shape from shading alone produces underestimates of B. Shape from texture alone also leads to underestimation, but to a lesser degree. • Shape from both cues leads to almost perfect estimation, with smaller variance than shape from either cue alone. Thus cue enhancement (more accurate and robust judgements for stimuli containing multiple cues than for individual cues alone) applies to this paradigm. • The variance of a subject's estimation increases with B. • Noise in either shading or texture systematically biases the estimation from the true values: the greater the noise level, the greater the bias. • Shape from both cues is more robust against noise than shape from either cue alone, providing evidence of another form of cue enhancement. Figure 2: Means and standard errors are shown for the shape matching experiment, for different values of B, under different stimulus conditions. TOP: No noise in local shape parameters. Left: Shape from shading alone. Middle: Shape from texture alone.
Right: Shape from shading and texture. BOTTOM: Shape from shading and texture. Left: σ_s = 0.05, σ_t = 0. Right: σ_s = 0, σ_t = 0.05. 4 MODELING RESULTS The model was trained using a subset of data from these experiments. The error criterion was the mean relative error (MRE) between the model outputs (U, Σ) and the experimental data (subject mean and variance on the same image).

Table 1: Data versus model predictions on images outside the training class. The first column of means and variances is from the experimental data, the second column from the model.

B    σ_s   σ_t    data (U/Σ)    model (U/Σ)
1.4  0.10  0      1.18/0.072    1.20/0.06
1.6  0.10  0      1.34/0.075    1.35/0.063
1.4  0.05  0      1.32/0.042    1.4/0.067
1.6  0.05  0      1.52/0.049    1.46/0.069
1.2  0     0.05   1.20/0.052    1.14/0.056
1.4  0     0.05   1.36/0.062    1.30/0.063

The six free parameters of the model were described as the sum of third order polynomials of local S, T and the noise levels. Gradient descent was used to train the model. The model was trained and tested on three different subsets of the experimental data. When trained on data in which only B varied, the model output accurately predicts unseen experimental data of the same type. When the data varied in B and σ_s or σ_t, the model outputs agree very well with subject data (MRE ≈ 5-8%). When trained on data where all three variables vary, the model fits the data reasonably well (MRE ≈ 8-13%). For a model of the first type, Figure 3 compares model predictions to data from within the same set, while Table 1 shows model outputs and subject responses for test examples from outside the training class. Figure 3: Model performance on data in which σ_s = 0, σ_t = 0.10. Upper line: perfect estimation. Lower line: experimental data. Dashed line: model prediction. The model accounts for some important aspects of cue combination.
Trained model parameters reveal that the texture prior is considerably weaker than the shading prior, and texture has a more reliable relationship with B. Consequently, at equal noise levels texture outweighs shading in the combination model. These factors account for the degree of underestimation found in each single-cue experiment, and the greater accuracy (i.e., enhancement) with combined cues. Our studies also reveal a novel form of cue interaction: for some image patches, especially at high curvature and noise levels, shading information becomes harmful, i.e., curvature estimation becomes less reliable when shading information is taken into account. Note that this differs from cue veto, in that texture does not veto shading. Finally, the primary contribution of our model lies in its ability to predict the effect of continuous within-image variation in cue reliability on combination. Figure 4 shows how the estimation becomes more accurate and less variable with increasing certainty in shading information. Standard cue combination models cannot produce similar behavior, as they do not estimate within-image cue reliabilities. Figure 4: Mean (left) and variance (right) of model output as a function of the within-image shading certainty, for different values of B (B = 1.4, 1.6, 1.8). Here σ_s = 0.15, σ_t = 0, all model parameters held constant. 5 CONCLUSION We have proposed a hierarchical generative model to study cue combination. Inferring parameters from images is achieved by inverting this model. Inference produces probability distributions at each level: a set of local distributions, separately representing each cue, are combined to form a distribution over a relevant scene variable.
The model naturally handles variations in cue reliability, which depend both on spatially local image context and general cue characteristics. This form of representation, incorporating image-specific cue utilities, makes this model more powerful than standard combination models. The model provides a good fit to our psychophysics results on shading and texture combination and an account for several aspects of cue combination; it also provides predictions for how varying noise levels, both within and across images, will affect combination. We are extending this work in a number of directions. We are conducting experiments to obtain local shape estimates from subjects. We are considering better ways to extract local representations and distributions over them directly from an image, and methods of handling natural outliers such as shadows and occlusion. References [1] Horn, B. K. P. (1977). Understanding image intensities. AI 8, 201-231. [2] Jacobs, R. A. & Fine, I. (1999). Experience-dependent integration of texture and motion cues to depth. Vis. Res. 39, 4062-4075. [3] Johnston, E. B., Cumming, B. G., & Landy, M. S. (1994). Integration of depth modules: Stereopsis and texture. Vis. Res. 34, 2259-2275. [4] Knill, D. C. (1998). Surface orientation from texture: ideal observers, generic observers and the information content of texture cues. Vis. Res. 38, 1655-1682. [5] Knill, D. C., Kersten, D., & Mamassian, P. (1996). Implications of a Bayesian formulation of visual information processing for psychophysics. In Perception as Bayesian Inference, D. C. Knill and W. Richards (Eds.), 239-286, Cambridge Univ. Press. [6] Landy, M. S., Maloney, L. T., Johnston, E. B., & Young, M. J. (1995). Measurement and modeling of depth cue combination: In defense of weak fusion. Vis. Res. 35, 389-412. [7] Malik, J. & Rosenholtz, R. (1997). Computing local surface orientation and shape from texture for curved surfaces. IJCV 23, 149-168. [8] Pentland, A. (1984). Local shading analysis.
IEEE PAMI, 6, 170-187. [9] Young, M. J., Landy, M. S., & Maloney, L. T. (1993). A perturbation analysis of depth perception from combinations of texture and motion cues. Vis. Res. 33, 2685-2696. [10] Yuille, A. & Bulthoff, H. H. (1996). Bayesian decision theory and psychophysics. In Perception as Bayesian Inference, D. C. Knill and W. Richards (Eds.), 123-161, Cambridge Univ. Press. [11] Zemel, R. S., Dayan, P., & Pouget, A. (1998). Probabilistic interpretation of population codes. Neural Computation, 403-430. PART VIII APPLICATIONS
|
1999
|
57
|
1,706
|
Potential Boosters? Nigel Duffy Department of Computer Science University of California Santa Cruz, CA 95064 nigeduff@cse.ucsc.edu David Helmbold Department of Computer Science University of California Santa Cruz, CA 95064 dph@cse.ucsc.edu Abstract Recent interpretations of the AdaBoost algorithm view it as performing a gradient descent on a potential function. Simply changing the potential function allows one to create new algorithms related to AdaBoost. However, these new algorithms are generally not known to have the formal boosting property. This paper examines the question of which potential functions lead to new algorithms that are boosters. The two main results are general sets of conditions on the potential; one set implies that the resulting algorithm is a booster, while the other implies that the algorithm is not. These conditions are applied to previously studied potential functions, such as those used by LogitBoost and Doom II. 1 Introduction The first boosting algorithm appeared in Rob Schapire's thesis [1]. This algorithm was able to boost the performance of a weak PAC learner [2] so that the resulting algorithm satisfies the strong PAC learning [3] criteria. We will call any method that builds a strong PAC learning algorithm from a weak PAC learning algorithm a PAC boosting algorithm. Freund and Schapire later found an improved PAC boosting algorithm called AdaBoost [4], which also tends to improve the hypotheses generated by practical learning algorithms [5]. The AdaBoost algorithm takes a labeled training set and produces a master hypothesis by repeatedly calling a given learning method. The given learning method is used with different distributions on the training set to produce different base hypotheses. The master hypothesis returned by AdaBoost is a weighted vote of these base hypotheses.
AdaBoost works iteratively, determining which examples are poorly classified by the current weighted vote and selecting a distribution on the training set to emphasize those examples. Recently, several researchers [6, 7, 8, 9, 10] have noticed that AdaBoost is performing a constrained gradient descent on an exponential potential function of the margins of the examples. The margin of an example is yF(x), where y is the ±1-valued label of the example x and F(x) ∈ ℝ is the net weighted vote of master hypothesis F. Once AdaBoost is seen this way it is clear that further algorithms may be derived by changing the potential function [6, 7, 9, 10]. The exponential potential used by AdaBoost has the property that the influence of a data point increases exponentially if it is repeatedly misclassified by the base hypotheses. This concentration on the "hard" examples allows AdaBoost to rapidly obtain a consistent hypothesis (assuming that the base hypotheses have certain properties). However, it also means that an incorrectly labeled or noisy example can quickly attract much of the distribution. It appears that this lack of noise tolerance is one of AdaBoost's few drawbacks [11]. Several researchers [7, 8, 9, 10] have proposed potential functions which do not concentrate as much on these "hard" examples. However, they generally do not show that the derived algorithms have the PAC boosting property. In this paper we return to the original motivation behind boosting algorithms and ask: "for which potential functions does gradient descent lead to PAC boosting algorithms" (i.e., boosters that create strong PAC learning algorithms from arbitrary weak PAC learners). We give necessary conditions that are met by some of the proposed potential functions (most notably the LogitBoost potential introduced by Friedman et al. [7]).
Furthermore, we show that simple gradient descent on other proposed potential functions (such as the sigmoidal potential used by Mason et al. [10]) cannot convert arbitrary weak PAC learning algorithms into strong PAC learners. The aim of this work is to identify properties of potential functions required for PAC boosting, in order to guide the search for more effective potentials. Some potential functions have an additional tunable parameter [10] or change over time [12]. Our results do not yet apply to such dynamic potentials. 2 PAC Boosting Here we define the notions of PAC learning¹ and boosting, and define the notation used throughout the paper. A concept C is a subset of the learning domain X. A random example of C is a pair (x ∈ X, y ∈ {-1, +1}) where x is drawn from some distribution on X and y = 1 if x ∈ C and -1 otherwise. A concept class is a set of concepts. Definition 1 A (strong) PAC learner for concept class C has the property that for every distribution D on X, all concepts C ∈ C, and all 0 < ε, δ < 1/2: with probability at least 1 - δ the algorithm outputs a hypothesis h where P_D[h(x) ≠ C(x)] ≤ ε. The learning algorithm is given C, ε, δ, and the ability to draw random examples of C (w.r.t. distribution D), and must run in time bounded by poly(1/ε, 1/δ). Definition 2 A weak PAC learner is similar to a strong PAC learner, except that it need only satisfy the conditions for a particular 0 < ε₀, δ₀ < 1/2 pair, rather than for all ε, δ pairs. Definition 3 A PAC boosting algorithm is a generic algorithm which can leverage any weak PAC learner to meet the strong PAC learning criteria. In the remainder of the paper we emphasize boosting the accuracy ε, as it is much easier to boost the confidence δ; see Haussler et al. [13] and Freund [14] for details. Furthermore, we emphasize boosting by re-sampling, where the strong PAC learner draws a large sample, and each iteration the weak learning algorithm is called with some distribution over this sample.
¹To simplify the presentation we omit the instance space dimension and target representation length parameters. Throughout the paper we use the following notation. • m is the cardinality of the fixed sample {(x_1, y_1), ..., (x_m, y_m)}. • h_t(x) is the ±1-valued weak hypothesis created at iteration t. • α_t is the weight or vote of h_t in the master hypothesis; the α's may or may not be normalized so that Σ_{t'=1}^t α_{t'} = 1. • F_t(x) = Σ_{t'=1}^t α_{t'} h_{t'}(x) / Σ_{τ=1}^t α_τ ∈ ℝ is the master hypothesis² at iteration t. • u_{i,t} = y_i Σ_{t'=1}^t α_{t'} h_{t'}(x_i) is the margin of x_i after iteration t; the t subscript is often omitted. Note that the margin is positive when the master hypothesis is correct, and the normalized margin is u_{i,t} / Σ_{t'=1}^t α_{t'}. • p(u) is the potential of an instance with margin u, and the total potential is Σ_{i=1}^m p(u_i). • P_D[·], P_S[·], and E_S[·] are the probability with respect to the unknown distribution over the domain, and the probability and expectation with respect to the uniform distribution over the sample, respectively. Our results apply to total potential functions of the form Σ_{i=1}^m p(u_i) where p is positive and strictly decreasing. 3 Leveraging Learners by Gradient Descent AdaBoost [4] has recently been interpreted as gradient descent independently by several groups [6, 7, 8, 9, 10]. Under this interpretation AdaBoost is seen as minimizing the total potential Σ_{i=1}^m p(u_i) = Σ_{i=1}^m exp(-u_i) via feasible-direction gradient descent. On each iteration t + 1, AdaBoost chooses the direction of steepest descent as the distribution on the sample, and calls the weak learner to obtain a new base hypothesis h_{t+1}. The weight α_{t+1} of this new weak hypothesis is calculated to minimize³ the resulting potential Σ_{i=1}^m p(u_{i,t+1}) = Σ_{i=1}^m exp(-(u_{i,t} + α_{t+1} y_i h_{t+1}(x_i))). This gradient descent idea has been generalized to other potential functions [6, 7, 10]. Duffy et al.
[9] prove bounds for a similar gradient descent technique using a non-componentwise, non-monotonic potential function. Note that if the weak learner returns a good hypothesis h_t (with training error at most ε < 1/2), then Σ_{i=1}^m D_t(x_i) y_i h_t(x_i) ≥ 1 - 2ε > 0. We set τ = 1 - 2ε, and assume that each base hypothesis produced satisfies Σ_{i=1}^m D_t(x_i) y_i h_t(x_i) ≥ τ. In this paper we consider this general gradient descent approach applied to various potentials Σ_{i=1}^m p(u_i). Note that each potential function p has two corresponding gradient descent algorithms (see [6]). The un-normalized algorithms (like AdaBoost) continually add in new weak hypotheses while preserving the old α's. The normalized algorithms re-scale the α's so that they always sum to 1. In general, we call such algorithms "leveraging algorithms", reserving the term "boosting" for those that actually have the PAC boosting property. 4 Potentials that Don't Boost In this section we describe sufficient conditions on potential functions so that the corresponding leveraging algorithm does not have the PAC boosting property. ²The prediction of the master hypothesis on instance x is the sign of F_t(x). ³Our current proofs require that the actual α_t's be no greater than a constant (say 1). Therefore, this minimizing α may need to be reduced. We apply these conditions to show that two potentials from the literature do not lead to boosting algorithms. Theorem 1 Let p(u) be a potential function for which: 1) the derivative p'(u) is increasing (-p'(u) decreasing) on ℝ⁺, and 2) ∃β > 0 such that for all u > 0, -βp'(u) ≥ -p'(-2u). Then neither the normalized nor the un-normalized leveraging algorithm corresponding to potential p has the PAC boosting property. This theorem is proven by an adversary argument. Whenever the concept class is sufficiently rich⁴, the adversary can keep a constant fraction of the sample from being correctly labeled by the master hypothesis.
Thus as the error tolerance ε goes to zero, the master hypotheses will not be sufficiently accurate. We now apply this theorem to two potential functions from the literature. Friedman et al. [7] describe a potential they call "Squared Error(p)", where the potential at x_i is ((y_i + 1)/2 - e^{F(x_i)} / (e^{F(x_i)} + e^{-F(x_i)}))². This potential can be re-written as p_SE(u_i) = (1/4) (1 + 2(e^{-u_i} - e^{u_i}) / (e^{u_i} + e^{-u_i}) + ((e^{-u_i} - e^{u_i}) / (e^{u_i} + e^{-u_i}))²). Corollary 1 Potential "Squared Error(p)" does not lead to a boosting algorithm. Proof: This potential satisfies the conditions of Theorem 1. It is strictly decreasing, and the second condition holds for β = 2. Mason et al. [10] examine a normalized algorithm using the potential p_D(u) = 1 - tanh(λu). Their algorithm optimizes over choices of λ via cross-validation, and uses weak learners with slightly different properties. However, we can plug this potential directly into the gradient descent framework and examine the resulting algorithms. Corollary 2 The DOOM II potential p_D does not lead to a boosting algorithm for any fixed λ. Proof: The potential is strictly decreasing, and the second condition of Theorem 1 holds for β = 1. Our techniques show that potentials that are sigmoidal in nature do not lead to algorithms with the PAC boosting property. Since sigmoidal potentials are generally better over-estimates of the 0-1 loss than the potential used by AdaBoost, our results imply that boosting algorithms must use a potential with more subtle properties than simply upper bounding the 0-1 loss. 5 Potential Functions That Boost In this section we give sufficient conditions on a potential function for its corresponding un-normalized algorithm to have the PAC boosting property. This result implies that AdaBoost [4] and LogitBoost [7] have the PAC boosting property (although this was previously known for AdaBoost [4], we believe this is a new result for LogitBoost).
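The β conditions invoked in the proofs of Corollaries 1 and 2 are easy to spot-check numerically. This is our own sanity sketch (not part of the paper), with the derivatives of the two potentials worked out by hand; note p_SE(u) simplifies to (1 - tanh u)²/4.

```python
import numpy as np

def dp_doom(u, lam=1.0):
    # p(u) = 1 - tanh(lam * u)  =>  p'(u) = -lam / cosh(lam * u)**2
    return -lam / np.cosh(lam * u) ** 2

def dp_sqerr(u):
    # p(u) = (1 - tanh(u))**2 / 4  =>  p'(u) = -(1 - tanh(u)) / (2 cosh(u)**2)
    return -(1.0 - np.tanh(u)) / (2.0 * np.cosh(u) ** 2)

# condition 2 of Theorem 1: -beta * p'(u) >= -p'(-2u) for all u > 0
u = np.linspace(0.01, 10.0, 2000)
ok_doom = bool(np.all(-1.0 * dp_doom(u) >= -dp_doom(-2.0 * u)))     # beta = 1
ok_sqerr = bool(np.all(-2.0 * dp_sqerr(u) >= -dp_sqerr(-2.0 * u)))  # beta = 2
```

For the DOOM II potential the check is also easy analytically: -p'(u) = λ/cosh²(λu) and cosh is increasing, so -p'(u) ≥ -p'(-2u) for every u > 0, i.e. β = 1 suffices.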
⁴The VC-dimension 4 concept class consisting of pairs of intervals on the real line is sufficient for our adversary. One set of conditions on the potential implies that it decreases roughly exponentially when the (un-normalized) margins are large. Once the margins are in this exponential region, ideas similar to those used in AdaBoost's analysis show that the minimum normalized margin quickly becomes bounded away from zero. This allows us to bound the generalization error using a theorem from Bartlett et al. [15]. A second set of conditions governs the behavior of the potential function before the un-normalized margins are large enough. These conditions imply that the total potential decreases by a constant factor each iteration. Therefore, too much time will not be spent before all the margins enter the exponential region. The margin value bounding the exponential region is U, and once Σ_{i=1}^m p(u_i) ≤ p(U), all margins will remain in the exponential region. The following theorem gives conditions on p ensuring that Σ_{i=1}^m p(u_i) quickly becomes less than p(U). Theorem 2 If the following conditions hold for p(u) and U: 1. -p'(u) is strictly decreasing and 0 < p''(u) ≤ B, and 2. ∃q > 0 such that p(u) ≤ -q p'(u) ∀u > U, then Σ_{i=1}^m p(u_i) ≤ p(U) after T₁ ≤ 4Bq²m²p(0) ln(m p(0) / p(U)) / (p(U)² τ²) iterations. The proof of this theorem approximates the new total potential by the old potential minus α times a linear term, plus an error. By bounding the error as a function of α and minimizing, we demonstrate that some values of α give a sufficient decrease in the total potential. Theorem 3 If the following conditions hold for p(u), U, q, and iteration T₁: 1. ∃β ≥ √3 such that -p'(u + v) ≤ p(u + v) ≤ -p'(u) β^{-v} q whenever -1 ≤ v ≤ 1 and u > U, 2. Σ_{i=1}^m p(u_{i,T₁}) ≤ p(U), 3. -p'(u) is strictly decreasing, and 4. ∃C > 0, γ > 1 such that C p(u) ≥ γ^{-u} ∀u > U, then the total potential decreases exponentially from iteration T₁ on. The proof of this theorem is a generalization of the AdaBoost proof.
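The generic un-normalized leveraging loop that these theorems analyze can be sketched as follows. This is an illustrative sketch, not the paper's construction: the decision-stump weak learner and the grid line search for α are our own choices (the α minimizing the total potential has a closed form for specific potentials), and the tiny dataset is a toy.

```python
import numpy as np

def stump_learn(X, y, D):
    """Weak learner: the decision stump maximizing the edge sum_i D_i y_i h(x_i)."""
    best = (-np.inf, None)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] >= thr, 1, -1)
                edge = float(np.sum(D * y * pred))
                if edge > best[0]:
                    best = (edge, (j, thr, sign))
    j, thr, sign = best[1]
    return lambda Z: sign * np.where(Z[:, j] >= thr, 1, -1)

def leverage(X, y, p, dp, T=10):
    """Un-normalized gradient-descent leveraging: the distribution on the
    sample is the normalized negative gradient -p'(u_i); the vote alpha of
    each new weak hypothesis is set by a grid line search on the total
    potential."""
    F = np.zeros(len(y))
    potentials = []
    for _ in range(T):
        D = -dp(y * F)            # steepest-descent direction
        D = D / D.sum()
        h = stump_learn(X, y, D)
        hx = h(X)
        alphas = np.linspace(0.01, 1.0, 100)
        a = alphas[np.argmin([p(y * (F + a * hx)).sum() for a in alphas])]
        F = F + a * hx
        potentials.append(float(p(y * F).sum()))
    return F, potentials

p_exp = lambda u: np.exp(-u)      # AdaBoost's exponential potential
dp_exp = lambda u: -np.exp(-u)
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
F, pots = leverage(X, y, p_exp, dp_exp, T=5)
```

The same loop runs with the LogitBoost log-likelihood potential by passing p(u) = ln(1 + e^{-u}) and its derivative instead; on this toy run the total potential shrinks every round, the behavior Theorem 2 quantifies.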
Combining these two theorems with the generalization bound from Theorem 2 of Bartlett et al. [15] gives the following result, where d is the VC dimension of the weak hypothesis class. Theorem 4 If for all edges 0 < τ < 1/2 there exist T_{1,τ} ≤ poly(m, 1/τ), U_τ, and q_τ satisfying the conditions of Theorem 3 such that p(U_τ) ≥ poly(τ) and q_τ √(1 - τ²) = l(τ) < 1 - poly(τ), then in time poly(m, 1/τ) all examples have normalized margin at least θ = ln((l(τ) + 1) / (2 l(τ))) / ln(γ), and

P_D[y F_T(x) ≤ θ] ∈ O( (1/√m) ( ln²(γ) d log²(m/d) / (ln(l(τ) + 1) - ln(2 l(τ)))² + log(1/δ) )^{1/2} )

Choosing m appropriately makes the error rate sufficiently small, so the algorithm corresponding to p has the PAC boosting property. We now apply Theorem 4 to show that the AdaBoost and LogitBoost potentials lead to boosting algorithms. 6 Some Boosting Potentials In this section we show as a direct consequence of our Theorem 4 that the potential functions for AdaBoost and LogitBoost lead to boosting algorithms. Note that the LogitBoost algorithm we analyze is not exactly the same as that described by Friedman et al. [7]; their "weak learner" optimizes a square loss which appears to better fit the potential. First we re-derive the boosting property for AdaBoost. Corollary 3 AdaBoost's [16] potential boosts. Proof: To prove this we simply need to show that the potential p(u) = exp(-u) satisfies the conditions of Theorem 4. This is done by setting U_τ = -ln(m), q_τ = 1, γ = β = e, C = 1, and T₁ = 0. Corollary 4 The log-likelihood potential (as used in LogitBoost [7]) boosts. Proof: In this case p(u) = ln(1 + e^{-u}) and -p'(u) = 1/(1 + e^u). We set γ = β = e and C = 2, and choose U_τ as a function of the edge τ, with q_τ = 1 + exp(-U_τ). Now Theorem 2 shows that after T₁ ≤ poly(m, 1/τ) iterations the conditions of Theorem 4 are satisfied. 7 Conclusions In this paper we have examined leveraging weak learners using a gradient descent approach [9].
This approach is a direct generalization of the AdaBoost [4, 16] algorithm, where AdaBoost's exponential potential function is replaced by alternative potentials. We demonstrated properties of potentials that are sufficient to show that the resulting algorithms are PAC boosters, and other properties that imply that the resulting algorithms are not PAC boosters. We applied these results to several potential functions from the literature [7, 10, 16]. New insight can be gained from examining our criteria carefully. The conditions that show boosting leave tremendous freedom in the choice of potential function for values less than some U; perhaps this freedom can be used to choose potential functions which do not overly concentrate on noisy examples. There is still a significant gap between these two sets of properties, and we are still a long way from classifying arbitrary potential functions as to their boosting properties. There are other classes of leveraging algorithms. One class looks at the distances between successive distributions [17, 18]. Another class changes its potential over time [6, 8, 12, 14]. The criteria for boosting may change significantly with these different approaches. For example, Freund recently presented a boosting algorithm [12] that uses a time-varying sigmoidal potential. It would be interesting to adapt our techniques to such dynamic potentials.

References

[1] Robert E. Schapire. The Design and Analysis of Efficient Learning Algorithms. MIT Press, 1992.
[2] Michael Kearns and Leslie Valiant. Cryptographic limitations on learning Boolean formulae and finite automata. Journal of the ACM, 41(1):67-95, January 1994.
[3] L. G. Valiant. A theory of the learnable. Commun. ACM, 27(11):1134-1142, November 1984.
[4] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, August 1997.
[5] Eric Bauer and Ron Kohavi. An empirical comparison of voting classification algorithms: Bagging, boosting and variants. Machine Learning, 36(1-2):105-139, 1999.
[6] Leo Breiman. Arcing the edge. Technical Report 486, Department of Statistics, University of California, Berkeley, 1997. Available at www.stat.berkeley.edu.
[7] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Additive logistic regression: a statistical view of boosting. Technical report, Stanford University, 1998.
[8] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Machine Learning, 2000. To appear.
[9] Nigel Duffy and David P. Helmbold. A geometric approach to leveraging weak learners. In Paul Fischer and Hans Ulrich Simon, editors, Computational Learning Theory: 4th European Conference (EuroCOLT '99), pages 18-33. Springer-Verlag, March 1999.
[10] Llew Mason, Jonathan Baxter, Peter Bartlett, and Marcus Frean. Boosting algorithms as gradient descent. To appear in NIPS 2000.
[11] Thomas G. Dietterich. An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Machine Learning. To appear.
[12] Yoav Freund. An adaptive version of the boost-by-majority algorithm. In Proc. 12th Annu. Conf. on Comput. Learning Theory, pages 102-113. ACM, 1999.
[13] David Haussler, Michael Kearns, Nick Littlestone, and Manfred K. Warmuth. Equivalence of models for polynomial learnability. Information and Computation, 95(2):129-161, December 1991.
[14] Y. Freund. Boosting a weak learning algorithm by majority. Information and Computation, 121(2):256-285, September 1995.
[15] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 26(5):1651-1686, 1998.
[16] Robert E. Schapire and Yoram Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):297-336, December 1999.
[17] Jyrki Kivinen and Manfred K. Warmuth. Boosting as entropy projection. In Proc. 12th Annu. Conf. on Comput. Learning Theory, pages 134-144. ACM, 1999.
[18] John Lafferty. Additive models, boosting, and inference for generalized divergences. In Proc. 12th Annu. Conf. on Comput. Learning Theory, pages 125-133. ACM.
| 1999 | 58 | 1,707 |
Resonance in a Stochastic Neuron Model with Delayed Interaction

Toru Ohira* Sony Computer Science Laboratory, 3-14-13 Higashi-gotanda, Shinagawa, Tokyo 141, Japan, ohira@csl.sony.co.jp
Yuzuru Sato, Institute of Physics, Graduate School of Arts and Science, University of Tokyo, 3-8-1 Komaba, Meguro, Tokyo 153, Japan, ysato@sacral.c.u-tokyo.ac.jp
Jack D. Cowan, Department of Mathematics, University of Chicago, 5734 S. University, Chicago, IL 60637, U.S.A., cowan@math.uchicago.edu

Abstract

We study here a simple stochastic single neuron model with delayed self-feedback capable of generating spike trains. Simulations show that its spike trains exhibit resonant behavior between "noise" and "delay". In order to gain insight into this resonance, we simplify the model and study a stochastic binary element whose transition probability depends on its state at a fixed interval in the past. With this simplified model we can analytically compute interspike interval histograms, and show how the resonance between noise and delay arises. The resonance is also observed when such elements are coupled through delayed interaction.

1 Introduction

"Noise" and "delay" are two elements which are associated with many natural and artificial systems and have been studied in diverse fields. Neural networks provide representative examples of information processing systems with noise and delay. Though much research has gone into the investigation of these two factors in the community, they have mostly been studied separately (see e.g. [1]). Neural

* Affiliated also with the Laboratory for Information Synthesis, RIKEN Brain Science Institute, Wako, Saitama, Japan

Resonance in a Stochastic Neuron Model with Delayed Interaction 315

models incorporating both noise and delay are more realistic [2], but their complex characteristics have yet to be explored both theoretically and numerically.
The main theme of this paper is the study of a simple stochastic neural model with delayed interaction which can generate spike trains. The most striking feature of this model is that it can show a regular spike pattern with suitably "tuned" noise and delay [3]. Stochastic resonance in neural information processing has been investigated by others (see e.g. [4]). This model, however, introduces a different type of such resonance, via delay rather than through an external oscillatory signal. It can be classified with models of stochastic resonance without an external signal [5]. The novelty of this model is the use of delay as the source of its oscillatory dynamics. To gain insight into the resonance, we simplify the model and study a stochastic binary element whose transition probability depends on its state at a fixed interval in the past. With this model, we can analytically compute interspike interval histograms, and show how the resonance between noise and delay arises. We further show that the resonance also occurs when such stochastic binary elements are coupled through delayed interaction.

2 Single Delayed-feedback Stochastic Neuron Model

Our model is described by the following equations:

$\mu \frac{d}{dt}V(t) = -V(t) + W\phi(V(t-\tau)) + \xi_L(t)$, $\quad \phi(V(t)) = \frac{2}{1 + e^{-\eta(V(t)-\theta)}} - 1$, (1)

where $\eta$ and $\theta$ are constants, and $V$ is the membrane potential of the neuron. The noise term $\xi_L$ has the following probability distribution:

$p(\xi = u) = \frac{1}{2L}$ $(-L \le u \le L)$; $\quad 0$ $(u < -L,\ u > L)$, (2)

i.e., $\xi_L$ is a temporally uncorrelated, uniformly distributed noise in the range $(-L, L)$. It can be interpreted as a fluctuation that is much faster than the membrane relaxation time $\mu$. The model can be interpreted as a stochastic neuron model with delayed self-feedback of weight $W$, which is an extension of a model with no delay previously studied using the Fokker-Planck equation [6].
We numerically study the following discretized version:

$V(t+1) = \frac{2}{1 + e^{-\eta(V(t-\tau)-\theta)}} - 1 + \xi_L$. (3)

We fix $\eta$ and $\theta$ so that this map has two basins of attraction of differing size with no delay, as shown in Figure 1(A). We have simulated the map (3) with various noise widths and delays and find regular spiking behavior, as shown in Figure 1(C), for tuned noise width and delay. When the noise width is too large or too small for a given self-feedback delay, this rhythmic behavior does not emerge, as shown in Figure 1(B) and (D). We argue that the delay changes the effective shape of the basins of attraction into an oscillatory one, just like that due to an external oscillating force which, as is well known, leads to stochastic resonance with a tuned noise width. The analysis of the dynamics given by (1) or (3), however, is a non-trivial task, particularly with

316 T. Ohira, Y. Sato and J. D. Cowan

respect to the spike trains. A previous analysis using the Fokker-Planck equation cannot capture this emergence of regular spiking behavior. This difficulty motivates us to further simplify our model, as described in the next section.

Figure 1: (A) The shape of the sigmoid function $\phi$ (b) for $\eta = 4$ and $\theta = 0.1$. The straight line (a) is $\phi = V$ and the crossings of the two lines indicate the stationary points of the dynamics. Also, the typical dynamics of $V(t)$ from the map model are shown as we change the noise width $L$: (B) $L = 0.2$, (C) $L = 0.4$, (D) $L = 0.7$. The data are taken with $\tau = 20$, $\eta = 4.0$, $\theta = 0.1$ and the initial condition $V(t) = 0.0$ for $t \in [-\tau, 0]$; the plots are shown from $t = 0$ to $1000$. (E) Schematic view of the single binary model. Some typical dynamics from the binary model are also shown; the parameters are $\tau = 10$, $q = 0.5$, and (F) $p = 0.005$, (G) $p = 0.05$, (H) $p = 0.2$.
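The map (3) is straightforward to simulate. A minimal sketch in pure Python, using the parameter values quoted in Figure 1 (the function and variable names are illustrative, not from the paper):

```python
import math
import random

def simulate_map(tau=20, eta=4.0, theta=0.1, L=0.4, steps=1000, seed=0):
    """Iterate the discretized delayed-feedback map, eq. (3):
    V(t+1) = 2/(1 + exp(-eta*(V(t - tau) - theta))) - 1 + xi_L,
    with xi_L drawn uniformly from (-L, L)."""
    rng = random.Random(seed)
    V = [0.0] * (tau + 1)              # initial condition V(t) = 0 for t in [-tau, 0]
    for _ in range(steps):
        delayed = V[-(tau + 1)]        # V(t - tau)
        xi = rng.uniform(-L, L)
        V.append(2.0 / (1.0 + math.exp(-eta * (delayed - theta))) - 1.0 + xi)
    return V[tau + 1:]

trace = simulate_map()
print(len(trace), min(trace), max(trace))
```

With $L = 0.4$ the trace shows irregular but roughly $\tau$-spaced excursions of the kind plotted in Figure 1(C); changing $L$ to 0.2 or 0.7 reproduces the non-resonant regimes of panels (B) and (D).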
3 Delayed Stochastic Binary Neuron Model

The model we now discuss is an approximation of the dynamics that retains the asymmetric stochastic transition and the delay. The state $X(t)$ of the system at time step $t$ is either $-1$ or $1$. With the same noise $\xi_L$, the model is described as follows:

$X(t+1) = \Theta[f(X(t-\tau)) + \xi_L]$, $\quad f(n) = \frac{1}{2}\big((a+b) + n(a-b)\big)$, $\quad \Theta[n] = 1\ (0 \le n),\ -1\ (0 > n)$, (4)

where $a$ and $b$ are parameters such that $|a| \le L$ and $|b| \le L$, and $\tau$ is the delay. This model is an approximate discretization of the state space of map (3) into two states, with $a$ and $b$ controlling the bias of the transition depending on the state of $X$ $\tau$ steps earlier. When $a \ne b$, the transition between the two states is asymmetric, reflecting the two differing-sized basins of attraction. We can describe this model more concisely in probability space (Figure 1(E)). The formal definition is given as follows:

$P(1, t+1) = p$, if $X(t-\tau) = -1$; $\ = 1-q$, if $X(t-\tau) = 1$,
$P(-1, t+1) = q$, if $X(t-\tau) = 1$; $\ = 1-p$, if $X(t-\tau) = -1$,
$p = \frac{1}{2}\left(1 + \frac{b}{L}\right)$, $\quad q = \frac{1}{2}\left(1 - \frac{a}{L}\right)$, (5)

where $P(s,t)$ is the probability that $X(t) = s$. Hence, the transition probability of the model depends on its state $\tau$ steps in the past, and the model is a special case of a delayed random walk [7]. We randomly generate $X(t)$ for the interval $t = (-\tau, 0)$. Simulations are performed in which the parameters are varied and $X(t)$ is recorded for up to $10^6$ steps. The dynamics appear qualitatively similar to those generated by the map dynamics (Figure 1(F), (G), (H)). From the trajectory $X(t)$, we construct a residence-time histogram $h(u)$ for the system to be in the state $-1$ for $u$ consecutive steps. Some examples of the histograms are shown in Figure 2 ($q = 0.5$, $\tau = 10$). We note that with $p \ll 0.5$, as in Figure 2(A), the model has a tendency to switch, or spike, to the $X = 1$ state after the time-step interval of $\tau$. But the spike trains do not last long and result in a small peak in the histogram.
For the case of Figure 2(C), where $p$ is closer to 0.5, we observe less regular transitions and the peak height is again small. With appropriate $p$, as in Figure 2(B), spikes tend to appear at the interval $\tau$ more frequently, resulting in higher peaks in the histogram. This is what we mean by stochastic resonance (Figure 2(D)). Choosing an appropriate $p$ is equivalent to "tuning" the noise width $L$, with the other parameters appropriately fixed. In this sense, our model exhibits stochastic resonance.

This model can be treated analytically. The first observation to make is that, given $\tau$, the model consists of $\tau + 1$ statistically independent Markov chains. Each Markov chain has its state appearing at every $(\tau+1)$-st step. With this property of the model, we label the time step $t$ by two integers $s$ and $k$ as follows:

$t = s(\tau+1) + k$, $\quad (0 \le s,\ 0 \le k \le \tau)$. (6)

Let $P_\pm(t) \equiv P_\pm(s,k)$ be the probability for the state to be $\pm 1$ at time $t$, or $(s,k)$. Then it can be shown that

$P_+(s,k) = \alpha(1-\gamma^s) + \gamma^s P_+(s{=}0,k)$,
$P_-(s,k) = \beta(1-\gamma^s) + \gamma^s P_-(s{=}0,k)$,
$\alpha = \frac{p}{p+q}$, $\quad \beta = \frac{q}{p+q}$, $\quad \gamma = 1-(p+q)$. (7)

In the steady state, $P_+(s \to \infty, k) \equiv P_+ = \alpha$ and $P_-(s \to \infty, k) \equiv P_- = \beta$. The steady-state residence-time histogram can be obtained by computing the quantity $h(u) = P(+;-,u;+)$, which is the probability that the system takes $u$ consecutive $-1$ states between two $+1$ states. With the definition of the model and the statistical independence between Markov chains in the sequence, the following expression can be derived:

$P(+;-,u;+) = P_+(P_-)^u P_+ = \beta^u \alpha^2$ $\quad (1 \le u < \tau)$ (8)
$\qquad = P_+(P_-)^\tau (1-q) = \beta^\tau \alpha (1-q)$ $\quad (u = \tau)$ (9)
$\qquad = P_+(P_-)^\tau q (1-p)^{u-\tau} p$ $\quad (u > \tau)$ (10)

With appropriate normalization, this expression reflects the shape of the histogram obtained by numerical simulations, as shown in Figure 2.
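The analytic histogram of eqs. (8)-(10) can be evaluated directly. A small sketch (function names are illustrative), using the parameters of Figure 2(B):

```python
def residence_hist(p, q, tau, u_max):
    """Analytic residence-time histogram h(u), following eqs. (8)-(10),
    with alpha = p/(p+q) and beta = q/(p+q) as in eq. (7)."""
    alpha, beta = p / (p + q), q / (p + q)
    h = {}
    for u in range(1, u_max + 1):
        if u < tau:
            h[u] = beta ** u * alpha ** 2
        elif u == tau:
            h[u] = beta ** tau * alpha * (1 - q)
        else:
            h[u] = alpha * beta ** tau * q * (1 - p) ** (u - tau) * p
    return h

h = residence_hist(p=0.05, q=0.5, tau=10, u_max=30)
peak = max(h, key=h.get)
print(peak)  # 10: the histogram peaks at u = tau
```

For $p = 0.05$, $q = 0.5$, $\tau = 10$, the distinguished value at $u = \tau$ dominates its neighbors, which is the spiking-at-the-delay behavior seen in Figure 2(B).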
Also, by differentiating equation (9) with respect to $p$, we derive the resonant condition for the peak to reach its maximum height:

$q = p\tau$, (11)

or, equivalently,

$L - a = (L + b)\tau$. (12)

In Figure 2(D), we see that the maximum peak amplitude is reached by choosing parameters according to equation (11). We note that this analysis of the histogram is exact in the stationary limit, which makes this model unique among those showing stochastic resonance.

Figure 2: Residence-time histogram and dynamics of $X(t)$ as we change $p$. The values of $p$ are (A) $p = 0.005$, (B) $p = 0.05$, (C) $p = 0.2$. The solid line in the histogram is from the analytical expression given in equations (8)-(10). Also, in (D) we show a plot of the peak height as $p$ is varied; the solid line is from equation (9). The parameters are $\tau = 10$, $q = 0.5$.

4 Delay-Coupled Two-Neuron Case

We now consider a circuit comprising two such stochastic binary neurons coupled with delayed interaction. We observe again that resonance between noise and delay takes place. The coupled two-neuron model is a simple extension of the model in the previous section: the transition probability of each neuron depends on the other neuron's state at a fixed interval in the past. Formally, it can be described in probability space as follows:

$P_1(1, t+1) = p_1$, if $X_2(t-\tau_2) = -1$; $\ = 1-q_1$, if $X_2(t-\tau_2) = 1$,
$P_1(-1, t+1) = q_1$, if $X_2(t-\tau_2) = 1$; $\ = 1-p_1$, if $X_2(t-\tau_2) = -1$,
$P_2(1, t+1) = p_2$, if $X_1(t-\tau_1) = -1$; $\ = 1-q_2$, if $X_1(t-\tau_1) = 1$,
$P_2(-1, t+1) = q_2$, if $X_1(t-\tau_1) = 1$; $\ = 1-p_2$, if $X_1(t-\tau_1) = -1$. (13)

$P_i(s,t)$ is the probability that the state of neuron $i$ is $X_i(t) = s$. We have performed simulation experiments on the model and have again found resonance between noise and delay.
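The resonant condition (11) can be checked numerically by maximizing the peak height of eq. (9) over $p$; the sketch below (illustrative names, simple grid search) recovers $p = q/\tau$:

```python
def peak_height(p, q, tau):
    # h(tau) = beta^tau * alpha * (1 - q), from eq. (9)
    alpha, beta = p / (p + q), q / (p + q)
    return beta ** tau * alpha * (1 - q)

q, tau = 0.5, 10
ps = [0.001 * k for k in range(1, 500)]
best = max(ps, key=lambda p: peak_height(p, q, tau))
print(best, q / tau)  # the maximizer sits at p = q / tau = 0.05
```

Setting the derivative of $p\,q^\tau(p+q)^{-(\tau+1)}$ to zero gives $(p+q) = p(\tau+1)$, i.e. $q = p\tau$, in agreement with the grid search.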
Though more intricate than the single-neuron model, we can perform a similar theoretical analysis of the histograms and have obtained approximate results for some cases. For example, for the case $\tau_1 = \tau_2 \equiv \tau$, we obtain the following approximate analytical result for the peak height of the interspike histogram of $X_1$ (the peak occurs at $\tau_1 + \tau_2 + 1$):

$H(p_1, p_2, q_1, q_2) = \{\mu_3 q_1 + \mu_4(1-p_1)\}^\tau \{\mu_1(q_1q_2p_1 + q_1(1-q_2)(1-q_1)) + \mu_2((1-p_1)q_2p_1 + (1-p_1)(1-q_2)(1-q_1))\}$, (14)

where $\mu_1, \dots, \mu_4$, $f_1$, $f_2$, and $S$ are all functions of $(p_1, p_2, q_1, q_2)$:

$\mu_1 = \frac{f_1 f_2}{S}$, (15) $\quad \mu_2 = \frac{f_1}{S}$, (16) $\quad \mu_3 = \frac{f_2}{S}$, (17) $\quad \mu_4 = \frac{1}{S}$, (18)

$f_1 = \frac{p_1(1-p_2) + p_2(1-q_1)}{q_1(1-q_2) + q_2(1-q_1)}$, (19) $\quad f_2 = \frac{p_2 + p_1(1-p_2-q_2)}{q_2 + q_1(1-p_2-q_2)}$, (20)

$S = f_1 f_2 + f_1 + f_2 + 1$. (21)

These analytical results are compared with the simulation experiments, examples of which are shown in Figure 3. A detailed analysis, particularly for the case of $\tau_1 \ne \tau_2$, is quite intricate and is left for the future.

5 Discussion

There are two points to be noted. Firstly, although there are examples which may indicate that stochastic resonance is utilized in biological information processing, it remains to be explored whether the resonance between noise and delay has some role in neural information processing. Secondly, there are many investigations of spiking neural models and their applications (see e.g., [8]).

Figure 3: A plot of the peak height as $p_2$ is varied. The solid line is from equations (14)-(20). The parameters are $\tau_1 = \tau_2 = 10$, $q_1 = q_2 = 0.5$, (A) $p_1 = p_2$, (B) $p_1 = 0.005$, (C) $p_1 = 0.025$.
Our model can be considered as a new mechanism for generating controlled stochastic spike trains. One can predict its application to weak-signal transmission, analogous to recent research using stochastic resonance with a larger number of units in series [9]. Investigations of the network model with delayed interactions are currently underway.

References

[1] Hertz, J. A., Krogh, A., & Palmer, R. G. (1991). Introduction to the Theory of Neural Computation. Redwood City: Addison-Wesley.
[2] Foss, J., Longtin, A., Mensour, B., & Milton, J. G. (1996). Multistability and Delayed Recurrent Loops. Physical Review Letters, 76, 708-711; Pham, J., Pakdaman, K., & Vibert, J.-F. (1998). Noise-induced coherent oscillations in randomly connected neural networks. Physical Review E, 58, 3610-3622; Kim, S., Park, S. H., & Pyo, H.-B. (1999). Stochastic Resonance in Coupled Oscillator Systems with Time Delay. Physical Review Letters, 82, 1620-1623; Bressloff, P. C. (1999). Synaptically Generated Wave Propagation in Excitable Neural Media. Physical Review Letters, 82, 2979-2982.
[3] Ohira, T. & Sato, Y. (1999). Resonance with Noise and Delay. Physical Review Letters, 82, 2811-2815.
[4] Gammaitoni, L., Hänggi, P., Jung, P., & Marchesoni, F. (1998). Stochastic Resonance. Reviews of Modern Physics, 70, 223-287.
[5] Gang, H., Ditzinger, T., Ning, C. Z., & Haken, H. (1993). Stochastic Resonance without External Periodic Force. Physical Review Letters, 71, 807-810; Rappel, W.-J. & Strogatz, S. H. (1994). Stochastic resonance in an autonomous system with a nonuniform limit cycle. Physical Review E, 50, 3249-3250; Longtin, A. (1997). Autonomous stochastic resonance in bursting neurons. Physical Review E, 55, 868-876.
[6] Ohira, T. & Cowan, J. D. (1995). Stochastic Single Neurons. Neural Computation, 7, 518-528.
[7] Ohira, T. & Milton, J. G. (1995). Delayed Random Walks. Physical Review E, 52, 3277-3280; Ohira, T. (1997).
Oscillatory Correlation of Delayed Random Walks. Physical Review E, 55, R1255-R1258.
[8] Maass, W. (1997). Fast Sigmoidal Networks via Spiking Neurons. Neural Computation, 9(2), 279-304; Maass, W. (1996). Lower Bounds for the Computational Power of Networks of Spiking Neurons. Neural Computation, 8(1), 1-40.
[9] Löcher, M., Cigna, D., & Hunt, E. R. (1998). Noise Sustained Propagation of a Signal in Coupled Bistable Electric Elements. Physical Review Letters, 80, 5212-5215.
| 1999 | 59 | 1,708 |
Population Decoding Based on an Unfaithful Model

S. Wu, H. Nakahara, N. Murata and S. Amari
RIKEN Brain Science Institute, Hirosawa 2-1, Wako-shi, Saitama, Japan
{phwusi, hiro, mura, amari}@brain.riken.go.jp

Abstract

We study a population decoding paradigm in which the maximum likelihood inference is based on an unfaithful decoding model (UMLI). This is usually the case for neural population decoding because the encoding process of the brain is not exactly known, or because a simplified decoding model is preferred for saving computational cost. We consider an unfaithful decoding model which neglects the pair-wise correlation between neuronal activities, and prove that UMLI is asymptotically efficient when the neuronal correlation is uniform or of limited range. The performance of UMLI is compared with that of the maximum likelihood inference based on a faithful model and that of the center-of-mass decoding method. It turns out that UMLI has the advantages of decreasing the computational complexity remarkably while maintaining a high level of decoding accuracy. The effect of correlation on the decoding accuracy is also discussed.

1 Introduction

Population coding is a method to encode and decode stimuli in a distributed way by using the joint activities of a number of neurons (e.g. Georgopoulos et al., 1986; Paradiso, 1988; Seung and Sompolinsky, 1993). Recently, there has been an expanded interest in understanding population decoding methods, which particularly include the maximum likelihood inference (MLI), the center of mass (COM), the complex estimator (CE) and the optimal linear estimator (OLE) [see (Pouget et al., 1998; Salinas and Abbott, 1994) and the references therein]. Among them, MLI has the advantage of a small decoding error (asymptotic efficiency), but may suffer from high computational complexity. Let us consider a population of N neurons coding a variable x.
The encoding process of the population code is described by a conditional probability $q(\mathbf{r}|x)$ (Anderson, 1994; Zemel et al., 1998), where the components of the vector $\mathbf{r} = \{r_i\}$, for $i = 1, \dots, N$, are the firing rates of the neurons. We study the MLI estimator given by the value of $x$ that maximizes the log-likelihood $\ln p(\mathbf{r}|x)$, where $p(\mathbf{r}|x)$ is the decoding model, which might be different from the encoding model $q(\mathbf{r}|x)$. So far, studies of MLI in population codes have normally (or implicitly) assumed that $p(\mathbf{r}|x)$ equals the encoding model $q(\mathbf{r}|x)$. This requires that the estimator have full knowledge of the encoding process. Taking into account the complexity of information processing in the brain, it is more natural

Population Decoding Based on an Unfaithful Model 193

to assume $p(\mathbf{r}|x) \ne q(\mathbf{r}|x)$. Another reason for this choice is to save computational cost. Therefore, a decoding paradigm in which the assumed decoding model differs from the encoding one needs to be studied. In the context of statistical theory, this is called estimation based on an unfaithful, or misspecified, model. Hereafter, we call the decoding paradigm of using MLI based on an unfaithful model UMLI, to distinguish it from MLI based on the faithful model, which is called FMLI. The unfaithful model studied in this paper is one which neglects the pair-wise correlation between neural activities. It turns out that UMLI has the attractive properties of decreasing the computational cost of FMLI remarkably while maintaining a high level of decoding accuracy.

2 The Population Decoding Paradigm of UMLI

2.1 An Unfaithful Decoding Model Neglecting the Neuronal Correlation

Let us consider a pair-wise correlated neural response model in which the neuron activities are assumed to be multivariate Gaussian:

$q(\mathbf{r}|x) = \frac{1}{\sqrt{(2\pi\sigma^2)^N \det(A)}} \exp\!\left[-\frac{1}{2\sigma^2}\sum_{i,j} A^{-1}_{ij}(r_i - f_i(x))(r_j - f_j(x))\right]$, (1)

where $f_i(x)$ is the tuning function.
In the present study, we will only consider radially symmetric tuning functions. Two different correlation structures are considered. One is the uniform correlation model (Johnson, 1980; Abbott and Dayan, 1999), with the covariance matrix

$A_{ij} = \delta_{ij} + c(1 - \delta_{ij})$, (2)

where the parameter $c$ (with $-1 < c < 1$) determines the strength of the correlation. The other correlation structure is of limited range (Johnson, 1980; Snippe and Koenderink, 1992; Abbott and Dayan, 1999), with the covariance matrix

$A_{ij} = b^{|i-j|}$, (3)

where the parameter $b$ (with $0 < b < 1$) determines the range of the correlation. This structure has translational invariance in the sense that $A_{ij} = A_{kl}$ if $|i-j| = |k-l|$. The unfaithful decoding model treated in the present study is the one which neglects the correlation in the encoding process but keeps the tuning functions unchanged, that is,

$p(\mathbf{r}|x) = \frac{1}{\sqrt{(2\pi\sigma^2)^N}} \exp\!\left[-\frac{1}{2\sigma^2}\sum_i (r_i - f_i(x))^2\right]$. (4)

2.2 The decoding error of UMLI and FMLI

The decoding error of UMLI has been studied in statistical theory (Akahira and Takeuchi, 1981; Murata et al., 1994). Here we generalize it to population coding. For convenience, some notation is introduced. $\nabla f(\mathbf{r},x)$ denotes $df(\mathbf{r},x)/dx$. $E_q[f(\mathbf{r},x)]$ and $V_q[f(\mathbf{r},x)]$ denote, respectively, the mean value and the variance of $f(\mathbf{r},x)$ with respect to the distribution $q(\mathbf{r}|x)$. Given an observation of the population activity $\mathbf{r}^*$, the UMLI estimate $\hat{x}$ is the value of $x$ that maximizes the log-likelihood $L_p(\mathbf{r}^*, x) = \ln p(\mathbf{r}^*|x)$. Denote by $x_{opt}$ the value of $x$ satisfying $E_q[\nabla L_p(\mathbf{r}, x_{opt})] = 0$. For the faithful model, where $p = q$, $x_{opt} = x$. Hence, $(x_{opt} - x)$ is the error due to the unfaithful setting, whereas $(\hat{x} - x_{opt})$ is the error due to sampling fluctuations. For the unfaithful model (4),

194 S. Wu, H. Nakahara, N. Murata and S. Amari
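The decoding setting above can be sketched numerically. The snippet below samples limited-range correlated responses (eq. (3), generated via an AR(1) recursion whose covariance is $b^{|i-j|}$) and decodes with the unfaithful independent-Gaussian model of eq. (4). A grid search replaces the gradient ascent used later in the paper; all names, grids, and population sizes are illustrative choices, not the paper's:

```python
import math
import random

def tuning(x, c, a=1.0):
    # Gaussian tuning curve; a is the tuning width
    return math.exp(-(x - c) ** 2 / (2 * a ** 2))

def sample_response(x, centers, sigma=0.1, b=0.5, rng=random):
    # AR(1) noise has covariance b^{|i-j|}: the limited-range structure, eq. (3)
    e = rng.gauss(0.0, 1.0)
    r = []
    for c in centers:
        r.append(tuning(x, c) + sigma * e)
        e = b * e + math.sqrt(1.0 - b * b) * rng.gauss(0.0, 1.0)
    return r

def umli_decode(r, centers, grid):
    # maximize the unfaithful (independent-Gaussian) likelihood of eq. (4),
    # i.e. minimize the summed squared residuals, by grid search
    return min(grid, key=lambda x: sum((ri - tuning(x, c)) ** 2
                                       for ri, c in zip(r, centers)))

random.seed(1)
centers = [-2.0 + 4.0 * i / 50 for i in range(51)]
grid = [-1.0 + 0.01 * k for k in range(201)]
errors = [umli_decode(sample_response(0.0, centers), centers, grid)
          for _ in range(200)]
rmse = math.sqrt(sum(x * x for x in errors) / len(errors))
print(rmse)  # small decoding error around the true stimulus x = 0
```

Even though the decoder ignores the correlations entirely, the root-mean-square error stays small, which is the qualitative claim that the analysis below makes precise.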
(5) Since V' Lp(r*, x) = 0, ~ V'V'Lp{r*,x) (x - x) ~ - ~ V'Lp(r*,x), (6) where N is the number of neurons. Only the large N limit is considered in the present study. Let us analyze the properties of the two random variables ~ V'V' Lp (r* , x) and ~ V' Lp(r*, x). We consider first the uniform correlation model. For the uniform correlation structure, we can write r; = /i(x) + O"(Ei + 11), (7) where 11 and {Ei}, for i = 1,···, N, are independent random variables having zero mean and variance c and 1 - c, respectively. 11 is the common noise for all neurons, representing the uniform character of the correlation. By using the expression (7), we get ~ V'Lp{r*,x) ;0" L Ed: (x) + ;0" L fI (x), i . (8) + ;0" Lf:'(x). (9) t Without loss of generality, we assume that the distribution of the preferred stimuli is uniform. For the radial symmetry tuning functions, ~ Li fI(x) and ~ Li fI'(x) approaches zero when N is large. Therefore, the correlation contributions (the terms of 11) in the above two equations can be neglected. UMLI performs in this case as if the neuronal signals are uncorrelated. Thus, by the weak law of large numbers, ~ V'V' Lp(r*, x) (10) where Qp == Eq[V'V' Lp(r, x)]. According to the central limit theorem, V' Lp (r*, x) / N converges to a Gaussian distribution ~ V'Lp{r*,x) N(O, ~~O"~ LfHx)2) N(O, ~~), (11) where N(O, t2 ) denoting the Gaussian distribution having zero mean and variance t, and Gp == Vq[V'Lp(r, x)]. Population Decoding Based on an UnfaithfUl Model 195 Combining the results of eqs.(6), (10) and (11), we obtain the decoding error of UMLI, (x - x)UMLI N(O, Q;2Gp), (1 - c)a 2 = N(O, Li fHx)2)· (12) In the similar way, the decoding error of FMLI is obtained, (x - x)FMLI N(O, Q~2Gq) , (1 - c)a2 = N(O, Li fI(x)2) ' (13) which has the same form as that of UMLI except that Q q and G q are now defined with respect to the faithful decoding model, i.e., p(rlx) = q(rlx) . To get eq.(13), the condition Li fI(x) = ° is used. 
Interestingly, UMLI and FMLI have the same decoding error. This is because the uniform correlation effect is actually neglected in both UMLI and FMLI. Note that in FMLI, Qq = Gq = Vq[\7 Lq(rlx)] is the Fisher information. Q-;;2Gq is the Cramer-Rao bound, which is the optimal accuracy for an unbiased estimator to achieve. Eq.(13) shows that FMLI is asymptotically efficient. For an unfaithful decoding model, Qp and Gp are usually different from the Fisher information. We call Q;2Gp the generalized Cramer-Rao bound, and UMLI quasi-asymptotically efficient if its decoding error approaches Q;2Gp asymptotically. Eq.( 12) shows that UMLI is quasi-asymptotic efficient. In the above, we have proved the asymptotic efficiency of FMLI and UMLI when the neuronal correlation is uniform. The result relies on the radial symmetry of the tuning function and the uniform character of the correlation, which make it possible to cancel the correlation contributions from different neurons. For general tuning functions and correlation structures, the asymptotic efficiency of UMLI and FMLI may not hold. This is because the law of large numbers (eq.(IO» and the central limit theorem (eq.(II» are not in general applicable. We note that for the limited-range correlation model, since the correlation is translational invariant and its strength decreases quickly with the dissimilarity in the neurons' preferred stimuli, the correlation effect in the decoding of FMLI and UMLI becomes negligible when N is large. This ensures that the law of large numbers and the central limit theorem hold in the large N limit. Therefore, UMLI and FMLI are asymptotically efficient. This is confirmed in the simulation in Sec.3. 
When UMLI and FMLI are asymptotically efficient, their decoding errors in the large-$N$ limit can be calculated according to the generalized Cramér-Rao bound and the Cramér-Rao bound, respectively, which are

$\langle(\hat{x}-x)^2\rangle_{UMLI} = \frac{\sigma^2\sum_{ij} A_{ij} f_i'(x) f_j'(x)}{\left[\sum_i f_i'(x)^2\right]^2}$, (14)

$\langle(\hat{x}-x)^2\rangle_{FMLI} = \frac{\sigma^2}{\sum_{ij} A^{-1}_{ij} f_i'(x) f_j'(x)}$. (15)

3 Performance Comparison

The performance of UMLI is compared with that of FMLI and of the center-of-mass decoding method (COM). The neural population model we consider is a regular array of $N$ neurons (Baldi and Heiligenberg, 1988; Snippe, 1996) with the preferred stimuli uniformly distributed in the range $[-D, D]$, that is, $c_i = -D + 2iD/(N+1)$, for $i = 1, \dots, N$. The comparison is done at the stimulus $x = 0$.

COM is a simple decoding method that uses no information about the encoding process; its estimate is the average of the neurons' preferred stimuli weighted by the responses (Georgopoulos et al., 1982; Snippe, 1996), i.e.,

$\hat{x} = \frac{\sum_i r_i c_i}{\sum_i r_i}$. (16)

The shortcoming of COM is a large decoding error. For the population model we consider, the decoding error of COM is calculated to be

$\langle(\hat{x}-x)^2\rangle_{COM} = \frac{\sigma^2\sum_{ij} A_{ij} c_i c_j}{\left(\sum_i f_i(x)\right)^2}$, (17)

where the condition $\sum_i f_i(x) c_i = 0$ is used, due to the regularity of the distribution of the preferred stimuli.

The tuning function is Gaussian, which has the form

$f_i(x) = \exp\!\left[-\frac{(x - c_i)^2}{2a^2}\right]$, (18)

where the parameter $a$ is the tuning width. We note that the Gaussian response model does not give zero probability to negative firing rates. To make it more reliable, we set $r_i = 0$ when $f_i(x) < 0.011$ ($|x - c_i| > 3a$), which means that only those neurons which are sufficiently active contribute to the decoding. It is easy to see that this cut-off does not much affect the results of UMLI and FMLI, due to their nature of decoding by using the derivatives of the tuning functions, whereas the decoding error of COM would be greatly enlarged without the cut-off.
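The COM estimate of eq. (16), together with the Gaussian tuning of eq. (18) and the $|x - c_i| > 3a$ cut-off, is a few lines of code (a minimal noiseless sketch with illustrative names and array sizes; the paper's comparison is done at $x = 0$, where the estimate is exact by symmetry):

```python
import math

def com_decode(r, centers):
    # center-of-mass estimate, eq. (16): preferred stimuli weighted by rates
    return sum(ri * ci for ri, ci in zip(r, centers)) / sum(r)

a, x_true = 1.0, 0.0
centers = [-2.0 + 4.0 * i / 20 for i in range(21)]
# noiseless Gaussian responses, eq. (18), with the cut-off r_i = 0 for |x - c_i| > 3a
r = [math.exp(-(x_true - c) ** 2 / (2 * a ** 2))
     if abs(x_true - c) <= 3 * a else 0.0
     for c in centers]
estimate = com_decode(r, centers)
print(estimate)  # 0.0 up to floating-point rounding, by symmetry at x = 0
```

With noise added, the same estimator inherits the variance of eq. (17), which is what makes COM the weakest of the three methods compared below.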
For the tuning width $a$, there are $N = \mathrm{Int}[6a/d - 1]$ neurons involved in the decoding process, where $d$ is the difference in preferred stimuli between two consecutive neurons and $\mathrm{Int}[\cdot]$ denotes the integer part of the argument. In all experimental settings, the parameters are chosen as $a = 1$ and $\sigma = 0.1$. The decoding errors of the three methods are compared for different values of $N$ when the correlation strength is fixed ($c = 0.5$ for the uniform correlation case and $b = 0.5$ for the limited-range correlation case), or for different values of the correlation strength when $N$ is fixed at 50. Fig. 1 compares the decoding errors of the three methods for the uniform correlation model. It shows that UMLI has the same decoding error as FMLI, and a lower error than COM. The uniform correlation improves the decoding accuracies of the three methods (Fig. 1b). In Fig. 2, the simulation results for the decoding errors of FMLI and UMLI in the limited-range correlation model are compared with those obtained by using the Cramér-Rao bound and the generalized Cramér-Rao bound, respectively. The two results agree very well when the number of neurons $N$ is large, which means that FMLI and UMLI are asymptotically efficient, as analyzed. In the simulation, the standard gradient-descent method is used to maximize the log-likelihood, and the initial guess for the stimulus is chosen as the preferred stimulus of the most active neuron. The CPU time of UMLI is around 1/5 of that of FMLI: UMLI reduces the computational cost of FMLI significantly. Fig. 3 compares the decoding errors of the three methods for the limited-range correlation model. It shows that UMLI has a lower decoding error than COM. Interestingly, UMLI has performance comparable to that of FMLI over the whole range of correlation.
The limited-range correlation degrades the decoding accuracies of the three methods when the strength is small and improves the accuracies when the strength is large (Fig. 3b).

Population Decoding Based on an Unfaithful Model

Figure 1: Comparing the decoding errors of UMLI, FMLI and COM for the uniform correlation model. (a) Decoding error versus N; (b) decoding error versus the correlation strength c.

Figure 2: Comparing the simulation results of the decoding errors of UMLI and FMLI in the limited-range correlation model with those obtained by using the Cramér-Rao bound and the generalized Cramér-Rao bound, respectively. CRB denotes the Cramér-Rao bound, GCRB the generalized Cramér-Rao bound, and SMR the simulation result. In the simulation, 10 sets of data are generated, each of which is averaged over 1000 trials. (a) FMLI; (b) UMLI.

4 Discussions and Conclusions

We have studied a population decoding paradigm in which MLI is based on an unfaithful model. This is motivated by the fact that the encoding process of the brain is not exactly known by the estimator. As an example, we consider an unfaithful decoding model which neglects the pair-wise correlation between neuronal activities. Two different correlation structures are considered, namely, the uniform and the limited-range correlations. The performance of UMLI is compared with that of FMLI and COM. It turns out that UMLI has a lower decoding error than COM.
Compared with FMLI, UMLI achieves comparable performance at a much lower computational cost. It is our future work to understand the biological implications of UMLI. As a by-product of the calculation, we also illustrate the effect of correlation on the decoding accuracies: depending on its form, correlation can either improve or degrade the decoding accuracy. This observation agrees with the analysis of Abbott and Dayan (Abbott and Dayan, 1999), which was done with respect to the optimal decoding accuracy, i.e., the Cramér-Rao bound.

Figure 3: Comparing the decoding errors of UMLI, FMLI and COM for the limited-range correlation model. (a) Decoding error versus N; (b) decoding error versus the correlation strength b.

Acknowledgment

We thank the three anonymous reviewers for their valuable comments and insightful suggestions. S. Wu acknowledges helpful discussions with Danmei Chen.

References

L. F. Abbott and P. Dayan. 1999. The effect of correlated variability on the accuracy of a population code. Neural Computation, 11:91-101.
M. Akahira and K. Takeuchi. 1981. Asymptotic efficiency of statistical estimators: concepts and higher order asymptotic efficiency. Lecture Notes in Statistics 7.
C. H. Anderson. 1994. Basic elements of biological computational systems. International Journal of Modern Physics C, 5:135-137.
P. Baldi and W. Heiligenberg. 1988. How sensory maps could enhance resolution through ordered arrangements of broadly tuned receivers. Biol. Cybern., 59:313-318.
A. P. Georgopoulos, J. F. Kalaska, R. Caminiti, and J. T. Massey. 1982. On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. J. Neurosci., 2:1527-1537.
K. O. Johnson. 1980. Sensory discrimination: neural processes preceding discrimination decision. J. Neurophysiol., 43:1793-1815.
N. Murata, S. Yoshizawa, and S. Amari. 1994. Network information criterion - determining the number of hidden units for an artificial neural network model. IEEE Trans. Neural Networks, 5:865-872.
A. Pouget, K. Zhang, S. Deneve, and P. E. Latham. 1998. Statistically efficient estimation using population coding. Neural Computation, 10:373-401.
E. Salinas and L. F. Abbott. 1994. Vector reconstruction from firing rates. Journal of Computational Neuroscience, 1:89-107.
H. P. Snippe and J. J. Koenderink. 1992. Information in channel-coded systems: correlated receivers. Biological Cybernetics, 67:183-190.
H. P. Snippe. 1996. Parameter extraction from population codes: a critical assessment. Neural Computation, 8:511-529.
R. S. Zemel, P. Dayan, and A. Pouget. 1998. Probabilistic interpretation of population codes. Neural Computation, 10:403-430.
|
1999
|
6
|
1,709
|
Wiring optimization in the brain

Dmitri B. Chklovskii, Sloan Center for Theoretical Neurobiology, The Salk Institute, La Jolla, CA 92037, mitya@salk.edu
Charles F. Stevens, Howard Hughes Medical Institute and Molecular Neurobiology Lab, The Salk Institute, La Jolla, CA 92037, stevens@salk.edu

Abstract

The complexity of cortical circuits may be characterized by the number of synapses per neuron. We study the dependence of complexity on the fraction of the cortical volume that is made up of "wire" (that is, of axons and dendrites), and find that complexity is maximized when wire takes up about 60% of the cortical volume. This prediction is in good agreement with experimental observations. A consequence of our arguments is that any rearrangement of neurons that takes more wire would sacrifice computational power.

Wiring a brain presents formidable problems because of the extremely large number of connections: a microliter of cortex contains approximately 10^5 neurons, 10^9 synapses, and 4 km of axons, with 60% of the cortical volume being taken up with "wire", half of this by axons and the other half by dendrites.[1] Each cortical neighborhood must have exactly the right balance of components; if too many cell bodies were present in a particular mm cube, for example, insufficient space would remain for the axons, dendrites and synapses. Here we ask "What fraction of the cortical volume should be wires (axons + dendrites)?" We argue that physiological properties of axons and dendrites dictate an optimal wire fraction of 0.6, just what is actually observed. To calculate the optimal wire fraction, we start with a real cortical region containing a fixed number of neurons, a mm cube, for example, and imagine perturbing it by adding or subtracting synapses and the axons and dendrites needed to support them.
The rules for perturbing the cortical cube require that the existing circuit connections and function remain intact (except for what may have been removed in the perturbation), that no holes are created, and that all added (or subtracted) synapses are typical of those present; as wire volume is added, the volume of the cube of course increases. The ratio of the number of synapses per neuron in the perturbed cortex to that in the real cortex is denoted by θ, a parameter we call the relative complexity. We require that the volume of non-wire components (cell bodies, blood vessels, glia, etc.) is unchanged by our perturbation, and use φ to denote the volume fraction of the perturbed cortical region that is made up of wires (axons + dendrites; φ can vary between zero and one), with the fraction for the real brain being φ₀. The relation between relative complexity θ and wire volume fraction φ is given by the equation (derived in Methods)

\[ \theta = \frac{1}{\lambda^5}\left(\frac{1-\phi}{1-\phi_0}\right)^{2/3}\frac{\phi}{\phi_0}. \tag{1} \]

Figure 1: Relative complexity (θ) as a function of wire volume fraction (φ). The graphs are calculated from equation (1) for three values of the parameter λ as indicated; this parameter determines the average length of wire associated with a synapse (relative to this length for the real cortex, for which λ = 1). Note that as the average length of wire per synapse increases, the maximum possible complexity decreases.

For the following discussion assume that λ = 1; we return to the meaning of this parameter later. To derive this equation two assumptions are made. First, we suppose that each added synapse requires extra wire equal to the average wire length and volume per synapse in the unperturbed cortex.
Second, because adding wire for new synapses increases the brain volume and therefore increases the distance axons and dendrites must travel to maintain the connections they make in the real cortex, all of the dendrite and unmyelinated axon diameters are increased in proportion to the square of their length changes in order to maintain the intersynaptic conduction times[2] and dendrite cable lengths[3] as they are in the actual cortex. If the unmyelinated axon diameters were not increased as the axons become longer, for example, the time for a nerve impulse to propagate from one synapse to the next would be increased, and we would violate our rule that the existing circuit and its function be unchanged. We note that the vast majority of cortical axons are unmyelinated.[1] The plot of θ as a function of φ is parabola-like (see Figure 1), with a maximum value at φ = 0.6, the point at which dθ/dφ = 0. This same maximum value is found for any possible value of φ₀, the real cortical wire fraction. Why does complexity reach a maximum value at a particular wire fraction? When wire and synapses are added, a series of consequences can lead to a runaway situation we call the wiring catastrophe. If we start with a wire fraction less than 0.6, adding wire increases the cortical volume; increased volume makes longer paths for axons to reach their targets, which requires larger-diameter wires (to keep conduction delays or cable attenuation constant from one point to another); the larger wire diameters increase cortex volume, which means wires must be longer; and so on. While the wire fraction φ is less than 0.6, increasing complexity is accompanied by finite increases in φ. At φ = 0.6 the rate at which wire fraction increases with complexity becomes infinite (dφ/dθ → ∞); we have reached the wiring catastrophe. At this point, adding wire becomes impossible without decreasing complexity or making other changes - like decreasing axon diameters - that alter cortical function.
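The location of the maximum is easy to verify numerically: by equation (1), θ depends on φ only through the factor φ(1-φ)^{2/3}, since the remaining factor 1/(λ⁵φ₀(1-φ₀)^{2/3}) does not involve φ. A minimal sketch:

```python
import numpy as np

# Relative complexity from equation (1), up to the phi-independent prefactor
# 1 / (lambda^5 * phi0 * (1 - phi0)^{2/3}):  theta(phi) is proportional to
# phi * (1 - phi)^(2/3), so the argmax is unaffected by lambda and phi0.
phi = np.linspace(1e-6, 1 - 1e-6, 2_000_001)
theta = phi * (1 - phi) ** (2.0 / 3.0)

phi_star = phi[np.argmax(theta)]
print(phi_star)  # ~ 0.6
```

The same answer follows analytically: setting dθ/dφ = 0 gives (1-φ) = (2/3)φ, i.e. φ = 3/5, for any φ₀ and λ.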
The physical cause of the catastrophe is the slow growth of conduction velocity and dendritic cable length with diameter, combined with the requirement that the conduction times between synapses (and the dendrite cable lengths) be unchanged in the perturbed cortex. We assumed above that each synapse requires a certain amount of wire, but what if we could add new synapses using the wire already present? We do not know what factors determine the wire volume needed to support a synapse, but if the average amount of wire per synapse could be less (or more) than that in the actual cortex, the maximum wire fraction would still be 0.6. Each curve in Figure 1 corresponds to a different assumed average wire length required for a synapse (determined by λ), and the maximum always occurs at 0.6, independent of λ. In the following we consider only situations in which λ is fixed. For a given λ, what complexity should we expect for the actual cortex? Three arguments favor the maximum possible complexity. The greatest complexity gives the largest number of synapses per neuron, which permits more bits of information to be represented per neuron. Also, more synapses per neuron decrease the relative effect caused by the loss or malfunction of a single synapse. Finally, errors in the local wire fraction would minimally affect the local complexity because dθ/dφ = 0 at φ = 0.6. Thus one can understand why the actual cortex has the wire fraction we identify as optimal.[1] This conclusion that the wire fraction is a maximum in the real cortex has an interesting consequence: components of an actual cortical circuit cannot be rearranged in a way that needs more wire without eliminating synapses or reducing wire diameters.
For example, if intermixing the cell bodies of left- and right-eye cells in primate primary visual cortex (rather than separating them into ocular dominance columns) increased the average length of the wire[4], the existing circuit could not be maintained by merely a finite increase in volume. This happens because the greater wire length demanded by the rearrangement of the same circuit would require longer wire per synapse, that is, an increased λ. As can be seen from Figure 1, brains with λ > 1 can never achieve the complexity reached at the maximum of the λ = 1 curve that corresponds to the actual cortex. Our observations support the notion that brains are arranged to minimize wire length. This idea, dating back to Cajal[5], has recently been used to explain why retinotopic maps exist[6,7], why cortical regions are separated, why ocular dominance columns are present in primary visual cortex[4,8,9], and why the cortical areas and flatworm ganglia are placed as they are.[10-13] We anticipate that maximal-complexity/minimal-wire-length arguments will find further application in relating functional and anatomical properties of the brain.

Methods

The volume of the cube of cortex we perturb is V, the volume of the non-wire portion is W (assumed to be constant), the fraction of V consisting of wires is φ, the total number of synapses is N, the average length of axonal wire associated with each synapse is s, and the average axonal wire volume per unit length is h; the corresponding values for dendrites are indicated by primes (s′ and h′). The unperturbed value of each variable carries a 0 subscript; thus the volume of the cortical cube before it is perturbed is

\[ V_0 = W_0 + N_0(s_0 h_0 + s_0' h_0'). \tag{2} \]

We now define a "virtual" perturbation that we use to explore the extent to which the actual cortical region contains an optimal fraction of wire.
If we increase the number of synapses by a factor θ and the length of wire associated with each synapse by a factor λ, then the perturbed cortical cube's volume becomes

\[ V = W_0 + \lambda\theta \left( N_0 s_0 h_0 \frac{h}{h_0} + N_0 s_0' h_0' \frac{h'}{h_0'} \right) (V/V_0)^{1/3}. \tag{3} \]

This equation allows for the possibility that the average wire diameter has been perturbed, and it increases the length of all wire segments by the "mean field" quantity (V/V₀)^{1/3} to take account of the expansion of the cube by the added wire; we require that our perturbation disperse the added wire as uniformly as possible throughout the cortical cube. To simplify this relation we must eliminate h/h₀ and h′/h₀′; we consider these terms in turn. When we perturb the brain we require that the average conduction time (s/u, where u is the conduction velocity) from one synapse to the next be unchanged, so that s/u = s₀/u₀, or

\[ \frac{u}{u_0} = \frac{s}{s_0} = \frac{\lambda s_0 (V/V_0)^{1/3}}{s_0} = \lambda (V/V_0)^{1/3}. \tag{4} \]

Because axon diameter is proportional to the square of the conduction velocity u, and the axon volume per unit length h is proportional to diameter squared, h is proportional to u⁴ and the ratio h/h₀ can be written as

\[ \frac{h}{h_0} = \left(\frac{u}{u_0}\right)^4 = \lambda^4 (V/V_0)^{4/3}. \tag{5} \]

For dendrites, we require that their length from one synapse to the next, in units of the cable length constant, be unchanged by the perturbation. The dendritic length constant is proportional to the square root of the dendritic diameter d, so s/√d = s₀/√d₀, or

\[ \frac{d}{d_0} = \left(\frac{s}{s_0}\right)^2 = \left(\lambda (V/V_0)^{1/3}\right)^2 = \lambda^2 (V/V_0)^{2/3}. \tag{6} \]

Because dendritic volume per unit length varies as the square of the diameter, we have

\[ \frac{h'}{h_0'} = \left(\frac{d}{d_0}\right)^2 = \lambda^4 (V/V_0)^{4/3}. \tag{7} \]

Equation (3) can thus be rewritten as

\[ V = W_0 + N_0(s_0 h_0 + s_0' h_0')\,\theta\lambda^5 (V/V_0)^{5/3}. \tag{8} \]
Divide this equation by V₀, define v = V/V₀, and recognize that W₀/V₀ = (1 - φ₀) and that φ₀ = N₀(s₀h₀ + s₀′h₀′)/V₀; the result is

\[ v = (1 - \phi_0) + \phi_0\,\theta\lambda^5 v^{5/3}. \tag{9} \]

Because the non-wire volume is required not to change with the perturbation, we know that W₀ = (1 - φ₀)V₀ = (1 - φ)V, which means that v = (1 - φ₀)/(1 - φ); substituting this into equation (9) and rearranging gives

\[ \theta = \frac{1}{\lambda^5}\left(\frac{1-\phi}{1-\phi_0}\right)^{2/3}\frac{\phi}{\phi_0}, \tag{1} \]

the equation used in the main text. We have assumed that conduction velocity and the dendritic cable length constant vary exactly with the square root of diameter[2,14], but if the actual power were to deviate slightly from 1/2, the wire fraction that gives the maximum complexity would also differ slightly from 0.6.

Acknowledgments

This work was supported by the Howard Hughes Medical Institute and a grant from NIH to C.F.S. D.C. was supported by a Sloan Fellowship in Theoretical Neurobiology.

References

[1] Braitenberg, V. & Schüz, A. Cortex: Statistics and Geometry of Neuronal Connectivity (Springer, 1998).
[2] Rushton, W.A.H. A theory of the effects of fibre size in medullated nerve. J. Physiol. 115, 101-122 (1951).
[3] Bekkers, J.M. & Stevens, C.F. Two different ways evolution makes neurons larger. Prog Brain Res 83, 37-45 (1990).
[4] Mitchison, G. Neuronal branching patterns and the economy of cortical wiring. Proc R Soc Lond B Biol Sci 245, 151-158 (1991).
[5] Cajal, S.R.y. Histology of the Nervous System, 1-805 (Oxford University Press, 1995).
[6] Cowey, A. Cortical maps and visual perception: the Grindley Memorial Lecture. Q J Exp Psychol 31, 1-17 (1979).
[7] Allman, J.M. & Kaas, J.H. The organization of the second visual area (V II) in the owl monkey: a second order transformation of the visual hemifield. Brain Res 76, 247-265 (1974).
[8] Durbin, R. & Mitchison, G. A dimension reduction framework for understanding cortical maps. Nature 343, 644-647 (1990).
[9] Mitchison, G. Axonal trees and cortical architecture. Trends Neurosci 15, 122-126 (1992).
[10] Young, M.P. Objective analysis of the topological organization of the primate cortical visual system. Nature 358, 152-154 (1992).
[11] Cherniak, C. Local optimization of neuron arbors. Biol Cybern 66, 503-510 (1992).
[12] Cherniak, C. Component placement optimization in the brain. J Neurosci 14, 2418-2427 (1994).
[13] Cherniak, C. Neural component placement. Trends Neurosci 18, 522-527 (1995).
[14] Rall, W. in Handbook of Physiology, The Nervous System, Cellular Biology of Neurons (eds. Brookhart, J.M. & Mountcastle, V.B.) 39-97 (Am. Physiol. Soc., Bethesda, MD, 1977).
|
1999
|
60
|
1,710
|
Learning from user feedback in image retrieval systems

Nuno Vasconcelos, Andrew Lippman
MIT Media Laboratory, 20 Ames St, E15-354, Cambridge, MA 02139
{nuno,lip}@media.mit.edu, http://www.media.mit.edu/~nuno

Abstract

We formulate the problem of retrieving images from visual databases as a problem of Bayesian inference. This leads to natural and effective solutions for two of the most challenging issues in the design of a retrieval system: providing support for region-based queries without requiring prior image segmentation, and accounting for user feedback during a retrieval session. We present a new learning algorithm that relies on belief propagation to account for both positive and negative examples of the user's interests.

1 Introduction

Due to the large amounts of imagery that can now be accessed and managed via computers, the problem of content-based image retrieval (CBIR) has recently attracted significant interest in the vision community [1, 2, 5]. Unlike most traditional vision applications, very few assumptions about the content of the images to be analyzed are allowable in the context of CBIR. This implies that the space of valid image representations is restricted to those of a generic nature (and typically of low level), and consequently the image understanding problem becomes even more complex. On the other hand, CBIR systems have access to feedback from their users that can be exploited to simplify the task of finding the desired images. There are, therefore, two fundamental problems to be addressed. First, the design of the image representation itself and, second, the design of learning mechanisms to facilitate the interaction. The two problems cannot, however, be solved in isolation, as the careless selection of the representation will make learning more difficult, and vice-versa.
The impact of a poor image representation on the difficulty of the learning problem is visible in CBIR systems that rely on holistic metrics of image similarity, forcing user feedback to be relative to entire images. In response to a query, the CBIR system suggests a few images and the user rates those images according to how well they satisfy the goals of the search. Because each image usually contains several different objects or visual concepts, this rating is both difficult and inefficient. How can the user rate an image that contains the concept of interest, but in which this concept occupies only 30% of the field of view, the remaining 70% being filled with completely unrelated stuff? And how many example images will the CBIR system have to see in order to figure out what the concept of interest is? A much better interaction paradigm is to let the user explicitly select the regions of the image that are relevant to the search, i.e. user feedback at the region level. However, region-based feedback requires sophisticated image representations. The problem is that the most obvious choice, object-based representations, is difficult to implement because it is still too hard to segment arbitrary images in a meaningful way. We have argued that a better formulation is to view the problem as one of Bayesian inference and rely on probabilistic image representations. In this paper we show that this formulation naturally leads to 1) representations with support for region-based interaction without segmentation, and 2) intuitive mechanisms to account for both positive and negative user feedback.

2 Retrieval as Bayesian inference

The standard interaction paradigm for CBIR is the so-called "query by example", where the user provides the system with a few examples, and the system retrieves from the database images that are visually similar to these examples.
The problem is naturally formulated as one of statistical classification: given a representation (or feature) space F, the goal is to find a map g : F → M = {1, ..., K} from F to the set M of image classes in the database. K, the cardinality of M, can be as large as the number of items in the database (in which case each item is a class by itself), or smaller. If the goal of the retrieval system is to minimize the probability of error, it is well known that the optimal map is the Bayes classifier [3]

\[ g^*(x) = \arg\max_i P(S_i = 1|x) = \arg\max_i P(x|S_i = 1)P(S_i = 1), \tag{1} \]

where x are the example features provided by the user and S_i is a binary variable indicating the selection of class i. In the absence of any prior information about which class is best suited for the query, an uninformative prior can be used and the optimal decision is the maximum likelihood criterion

\[ g^*(x) = \arg\max_i P(x|S_i = 1). \tag{2} \]

Besides theoretical soundness, Bayesian retrieval has two distinguishing properties of practical relevance. First, because the features x in equation (1) can be any subset of a given query image, the retrieval criterion is valid for both region-based and image-based queries. Second, due to its probabilistic nature, the criterion also provides a basis for designing retrieval systems that can account for user feedback through belief propagation.

3 Bayesian relevance feedback

Suppose that instead of a single query x we have a sequence of t queries {x₁, ..., x_t}, where t is a time stamp. By simple application of Bayes rule,

\[ P(S_i = 1|x_1, \ldots, x_t) = \gamma_t P(x_t|S_i = 1) P(S_i = 1|x_1, \ldots, x_{t-1}), \tag{3} \]

where γ_t is a normalizing constant and we have assumed that, given knowledge of the correct image class, the current query x_t is independent of the previous ones. This basically means that the user provides the retrieval system with new information at each iteration of the interaction. Equation (3) is a simple but intuitive mechanism for integrating information over time.
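To make the classification view concrete, here is a toy sketch of the decision rule of equation (2). The class-conditional densities are hypothetical 1-D Gaussians standing in for the per-image models (the paper's actual models are mixtures over DCT features); because the query may be any subset of feature vectors, the same rule serves region-based and whole-image queries.

```python
import math

# Toy database: each image class i is summarized by a class-conditional
# density P(x | S_i = 1); here, a 1-D Gaussian with its own mean/scale.
# (Hypothetical class names and parameters, purely for illustration.)
classes = {"sky": (0.0, 1.0), "grass": (4.0, 1.0), "snow": (8.0, 1.5)}

def log_lik(x, mean, std):
    """Log of a univariate Gaussian density."""
    return -0.5 * math.log(2 * math.pi * std**2) - (x - mean) ** 2 / (2 * std**2)

def retrieve(query):
    """Equation (2): g*(x) = argmax_i P(x | S_i = 1) under a uniform prior.

    `query` is any subset of feature vectors taken from the query image,
    assumed i.i.d. given the class, so log-likelihoods simply add up."""
    def score(params):
        mean, std = params
        return sum(log_lik(x, mean, std) for x in query)
    return max(classes, key=lambda i: score(classes[i]))

print(retrieve([0.3, -0.2, 0.5]))  # features drawn near the "sky" density
```

With features near 0 the "sky" class wins; feeding the same function features near 8 would return "snow" instead, with no change to the decision rule.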
It states that the system's beliefs about the user's interests at time t-1 simply become the prior beliefs for iteration t. New data provided by the user at time t are then used to update these beliefs, which in turn become the priors for iteration t+1. From a computational standpoint the procedure is very efficient, since the only quantity that has to be computed at each time step is the likelihood of the data in the corresponding query. Notice that this is exactly equation (2), which would have to be computed even in the absence of any learning. By taking logarithms and solving the recursion, equation (3) can also be written as

\[ \log P(S_i = 1|x_1, \ldots, x_t) = \sum_{k=0}^{t-1} \log\gamma_{t-k} + \sum_{k=0}^{t-1} \log P(x_{t-k}|S_i = 1) + \log P(S_i = 1), \tag{4} \]

exposing the main limitation of the belief propagation mechanism: for large t, the contribution to the right-hand side of the equation of the new data provided by the user is very small, and the posterior probabilities tend to remain constant. This can be avoided by penalizing older terms with a decay factor α_{t-k},

\[ \sum_{k=0}^{t-1} \alpha_{t-k}\log\gamma_{t-k} + \sum_{k=0}^{t-1} \alpha_{t-k}\log P(x_{t-k}|S_i = 1) + \alpha_0 \log P(S_i = 1), \]

where α_t is a monotonically decreasing sequence. In particular, if α_{t-k} = α(1-α)^k, α ∈ (0, 1], we have

\[ \log P(S_i = 1|x_1, \ldots, x_t) = \log\gamma_t' + \alpha\log P(x_t|S_i = 1) + (1-\alpha)\log P(S_i = 1|x_1, \ldots, x_{t-1}). \]

Because γ_t′ does not depend on i, the optimal class is

\[ S_i^* = \arg\max_i \left\{ \alpha\log P(x_t|S_i = 1) + (1-\alpha)\log P(S_i = 1|x_1, \ldots, x_{t-1}) \right\}. \tag{5} \]

4 Negative feedback

In addition to positive feedback, there are many situations in CBIR where it is useful to rely on negative user feedback. One example is the case of image classes characterized by overlapping densities. This is illustrated in Figure 1a), where we have two classes with a common attribute (e.g. regions of blue sky) but different in other respects (class A also contains regions of grass, while class B contains regions of white snow).
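One step of this decayed recursion is a one-liner in log space. The sketch below (with hypothetical log-likelihood values and an arbitrary decay α = 0.3) applies the update of equation (5)'s objective twice and renormalizes over classes:

```python
import math

def update_log_posterior(log_prior, log_lik_t, alpha=0.3):
    """One step of the decayed belief update:
        log P(S_i=1 | x_1..x_t) = const + alpha * log P(x_t | S_i=1)
                                  + (1 - alpha) * log P(S_i=1 | x_1..x_{t-1}),
    renormalized over classes (alpha = 0.3 is an arbitrary choice)."""
    unnorm = [alpha * l + (1 - alpha) * p for p, l in zip(log_prior, log_lik_t)]
    log_z = math.log(sum(math.exp(u) for u in unnorm))
    return [u - log_z for u in unnorm]

# Uniform prior over 3 classes; two rounds of feedback favoring class 0.
post = [math.log(1 / 3)] * 3
for log_lik in ([-1.0, -3.0, -3.0], [-0.5, -4.0, -2.0]):
    post = update_log_posterior(post, log_lik)
best = max(range(3), key=lambda i: post[i])
print(best)  # class 0 accumulates the most evidence
```

Because the decay keeps a fixed weight α on the newest query, fresh feedback can still move the posterior even after many iterations, which is exactly the limitation of equation (4) that the decay is meant to remove.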
If the user starts with an image of class B (e.g. a picture of a snowy mountain), using regions of sky as positive examples is not likely to quickly take him/her to the images of class A. In fact, all other factors being equal, there is an equal likelihood that the retrieval system will return images from the two classes. On the other hand, if the user can explicitly indicate interest in regions of sky but not in regions of snow, the likelihood that only images from class A will be returned increases drastically.

Figure 1: a) two overlapping image classes; b) and c) two images in the tile database; d) three examples of pairs of visually similar images that appear in different classes.

Another example of the importance of negative feedback are the local minima of the search space. These happen when, in response to user feedback, the system returns exactly the same images as in a previous iteration. Assuming that the user has already given the system all the possible positive feedback, the only way to escape such minima is to choose some regions that are not desirable and use them as negative feedback. In the example above, if the user gets stuck with a screen full of pictures of white mountains, he/she can simply select some regions of snow to escape the local minimum. In order to account for negative examples, we must penalize the classes under which these score well, while favoring the classes that assign a high score to the positive examples. Unlike positive examples, for which the likelihood is known, it is not straightforward to estimate the likelihood of a particular negative example given that the user is searching for a certain image class. We assume that the likelihood with which y will be used as a negative example, given that the target is class i, is equal to the likelihood with which it would be used as a positive example, given that the target is any other class.
Denoting the use of y as a negative example by ȳ, this can be written as

\[ P(\bar{y}|S_i = 1) = P(y|S_i = 0). \tag{6} \]

This assumption captures the intuition that a good negative example when searching for class i is one that would be a good positive example if the user were looking for any class other than i. E.g., if class i is the only one in the database that does not contain regions of sky, using pieces of sky as negative examples will quickly eliminate the other images in the database. Under this assumption, negative examples can be incorporated into the learning by simply choosing the class i that maximizes the posterior odds ratio [4] between the hypotheses "class i is the target" and "class i is not the target",

\[ S_i^* = \arg\max_i \frac{P(S_i = 1|x_t, \ldots, x_1, \bar{y}_t, \ldots, \bar{y}_1)}{P(S_i = 0|x_t, \ldots, x_1, \bar{y}_t, \ldots, \bar{y}_1)} = \arg\max_i \frac{P(S_i = 1|x_t, \ldots, x_1)}{P(S_i = 0|\bar{y}_t, \ldots, \bar{y}_1)}, \]

where the x are the positive and the ȳ the negative examples, and we have assumed that, given the positive (negative) examples, the posterior probability of a given class being (not being) the target is independent of the negative (positive) examples. Once again, the procedure of the previous section can be used to obtain a recursive version of this equation and include a decay factor which penalizes older terms,

\[ S_i^* = \arg\max_i \left\{ \alpha\log\frac{P(x_t|S_i = 1)}{P(\bar{y}_t|S_i = 0)} + (1-\alpha)\log\frac{P(S_i = 1|x_1, \ldots, x_{t-1})}{P(S_i = 0|\bar{y}_1, \ldots, \bar{y}_{t-1})} \right\}. \]

Using equations (4) and (6),

\[ P(S_i = 0|\bar{y}_1, \ldots, \bar{y}_t) \propto \prod_k P(\bar{y}_k|S_i = 0) = \prod_k P(y_k|S_i = 1) \propto P(S_i = 1|y_1, \ldots, y_t), \]

we obtain

\[ S_i^* = \arg\max_i \left\{ \alpha\log\frac{P(x_t|S_i = 1)}{P(y_t|S_i = 1)} + (1-\alpha)\log\frac{P(S_i = 1|x_1, \ldots, x_{t-1})}{P(S_i = 1|y_1, \ldots, y_{t-1})} \right\}. \tag{7} \]

While maximizing the ratio of posterior probabilities is a natural way to favor image classes that explain the positive examples well and the negative ones poorly, it tends to over-emphasize the importance of negative examples.
In particular, any class with zero probability of generating the negative examples will lead to a ratio of ∞, even if it explains the positive examples very poorly. To avoid this problem we proceed in two steps: first, solve equation (5), i.e. sort the classes according to how well they explain the positive examples; then select the subset of the best N classes and solve equation (7) considering only the classes in this subset.

5 Experimental evaluation

We performed experiments to evaluate 1) the accuracy of Bayesian retrieval on region-based queries and 2) the improvement in retrieval performance achievable with relevance feedback. Because in a normal browsing scenario it is difficult to know the ground truth for the retrieval operation (at least without going through the tedious process of hand-labeling all images in the database), we relied instead on a controlled experimental setup for which ground truth is available. All experiments reported in this section are based on the widely used Brodatz texture database, which contains images of 112 textures, each of them represented by 9 different patches, for a total of 1008 images. These were split into two groups: a small one with 112 images (one example of each texture), and a larger one with the remaining 896. We call the first group the test database and the second the Brodatz database. A synthetic database with 2000 images was then created from the larger set by randomly selecting 4 images at a time and making a 2x2 tile out of them. Figures 1b) and c) show two examples of these tiles. We call this set the tile database.

5.1 Region-based queries

We performed two sets of experiments to evaluate the performance of region-based queries.
In both cases the test database was used as a test set, and the image features were the coefficients of the discrete cosine transform (DCT) of an 8x8 block-wise image decomposition over a grid containing every other image pixel. The first set of experiments was performed on the Brodatz database, while the tile database was used in the second. A mixture of 16 Gaussians was estimated, using EM, for each of the images in the two databases. In both sets of experiments, each query consisted of selecting a few image blocks from an image in the test set, evaluating equation (2) for each of the classes, and returning those that best explained the query. Performance was measured in terms of precision (the percentage of retrieved images that are relevant to the query) and recall (the percentage of relevant images that are retrieved), averaged over the entire test set. The query images contained a total of 256 non-overlapping blocks. The number of blocks used in each query varied between 1 (0.3% of the image size) and 256 (100%). Figure 2 depicts precision-recall plots as a function of this number.

Figure 2: Precision-recall curves as a function of the number of feature vectors included in the query. Left: Brodatz database. Right: tile database.

The graph on the left is relative to the Brodatz database. Notice that precision is generally high even for large values of recall, and performance increases quickly with the percentage of feature vectors included in the query. In particular, 25% of the texture patch (64 blocks) is enough to achieve results very close to those obtained with all pixels. This shows that the retrieval criterion is robust to missing data. The graph on the right presents similar results for the tile database. While there is some loss in performance, this loss is not dramatic - a decrease of between 10 and 15% in precision at any given recall.
In fact, the results are still good: when a reasonable number of feature vectors is included in the query, about 8.5 out of the 10 top retrieved images are, on average, relevant. Once again, performance improves rapidly with the number of feature vectors in the query, and 25% of the image is enough for results comparable to the best. This confirms the argument that Bayesian retrieval leads to effective region-based queries even for imagery composed of multiple visual stimuli.

5.2 Learning

The performance of the learning algorithm was evaluated on the tile database. The goal was to determine whether it is possible to reach a desired target image by starting from a weakly related one and providing positive and negative feedback to the retrieval system. This simulates the interaction between a real user and the CBIR system and is an iterative process, where each iteration consists of selecting a few examples, using them as queries for retrieval, and examining the top M retrieved images to find examples for the next iteration. M should be small, since most users are not willing to go through lots of false positives to find the next query. In all experiments we set M = 10, corresponding to one screenful of images. The most complex problem in testing is to determine a good strategy for selecting the examples to be given to the system. The closer this strategy is to what a real user would do, the higher the practical significance of the results. However, even when there is clear ground truth for the retrieval (as is the case for the tile database), it is not completely clear how to make the selection. While it is obvious that regions of texture classes that appear in the target should be used as positive feedback, it is much harder to determine automatically what are good negative examples. As Figure 1 d) illustrates, there are cases in which textures from two different classes are visually similar.
Selecting images from one of these classes as a negative example for the other will be a disservice to the learner. While real users tend not to do this, it is hard to avoid such mistakes in an automatic setting, unless one does some sort of pre-classification of the database. Because we wanted to avoid such pre-classification, we decided to stick with a simple selection procedure and live with these mistakes. At each step of the iteration, examples were selected in the following way: among the 10 top images returned by the retrieval system, the one with the most patches from texture classes also present in the target image was selected to be the next query. One block from each patch in the query was then used as a positive (negative) example if the class of that patch was also (was not) represented in the target image. This strategy is a worst-case scenario. First, the learner might be confused by conflicting negative examples. Second, as seen above, better retrieval performance can be achieved if more than one block from each region is included in the queries. However, using only one block reduced the computational complexity of each iteration, allowing us to average results over several runs of the learning process. We performed 100 runs with randomly selected target images. In all cases, the initial query image was the first in the database containing one class in common with the target. The performance of the learning algorithm can be evaluated in various ways. We considered two metrics: the percentage of the runs which converged to the right target, and the number of iterations required for convergence. Because, to prevent the learner from entering loops, any given image could only be used once as a query, the algorithm can diverge in two ways. Strong divergence occurs when, at a given time step, the images (among the top 10) that can be used as queries do not contain any texture class in common with the target.
In such a situation, a real user will tend to feel that the retrieval system is incoherent and abort the search. Weak divergence occurs when all the top 10 images have previously been used. This is a less troublesome situation, because the user could simply look up more images (e.g. the next 10) to get new examples. We start by analyzing the results obtained with positive feedback only. Figure 3 a) and b) present plots of the convergence rate and median number of iterations as a function of the decay factor α. While when there is no learning (α = 1) only 43% of the runs converge, the convergence rate is always higher when learning takes place, and for a significant range of α (α ∈ [0.5, 0.8]) it is above 60%. This not only confirms that learning can lead to significant improvements in retrieval performance, but also shows that a precise selection of α is not crucial. Furthermore, when convergence occurs it is usually very fast, taking from 4 to 6 iterations. On the other hand, a significant percentage of runs do not converge, and the majority of these are cases of strong divergence. As illustrated by Figure 3 c) and d), this percentage decreases significantly when both positive and negative examples are allowed. The rate of convergence is in this case usually between 80 and 90%, and strong divergence never occurs. And while the number of iterations for convergence increases, convergence is still fast (usually below 10 iterations). This is indeed the great advantage of negative examples: they encourage some exploration of the database, which avoids local minima and leads to convergence. Notice that, when there is no learning, the convergence rate is high and learning can actually increase the rate of divergence. We believe that this is due to the inconsistencies associated with the negative example selection strategy. However, when convergence occurs, it is always faster if learning is employed.
Figure 3: Learning performance as a function of α. Left: Percent of runs which converged. Right: Median number of iterations. Top: positive examples. Bottom: positive and negative examples.

PART IX CONTROL, NAVIGATION AND PLANNING
|
1999
|
61
|
1,711
|
Online Independent Component Analysis With Local Learning Rate Adaptation

Nicol N. Schraudolph nic@idsia.ch
Xavier Giannakopoulos xavier@idsia.ch
IDSIA, Corso Elvezia 36, 6900 Lugano, Switzerland
http://www.idsia.ch/

Abstract

Stochastic meta-descent (SMD) is a new technique for online adaptation of local learning rates in arbitrary twice-differentiable systems. Like matrix momentum it uses full second-order information while retaining O(n) computational complexity by exploiting the efficient computation of Hessian-vector products. Here we apply SMD to independent component analysis, and employ the resulting algorithm for the blind separation of time-varying mixtures. By matching individual learning rates to the rate of change in each source signal's mixture coefficients, our technique is capable of simultaneously tracking sources that move at very different, a priori unknown speeds.

1 Introduction

Independent component analysis (ICA) methods are typically run in batch mode in order to keep the stochasticity of the empirical gradient low. Often this is combined with a global learning rate annealing scheme that negotiates the tradeoff between fast convergence and good asymptotic performance. For time-varying mixtures, this must be replaced by a learning rate adaptation scheme. Adaptation of a single, global learning rate, however, facilitates the tracking only of sources whose mixing coefficients change at comparable rates [1], resp. switch all at the same time [2]. In cases where some sources move much faster than others, or switch at different times, individual weights in the unmixing matrix must adapt at different rates in order to achieve good performance. We apply stochastic meta-descent (SMD), a new online adaptation method for local learning rates [3, 4], to an extended Bell-Sejnowski ICA algorithm [5] with natural gradient [6] and kurtosis estimation [7] modifications.
The resulting algorithm is capable of separating and tracking a time-varying mixture of 10 sources whose unknown mixing coefficients change at different rates.

2 The SMD Algorithm

Given a sequence x_0, x_1, ... of data points, we minimize the expected value of a twice-differentiable loss function f_w(x) with respect to its parameters w by stochastic gradient descent:

w_{t+1} = w_t + p_t · δ_t, where δ_t ≡ −∂f_{w_t}(x_t)/∂w, (1)

and · denotes component-wise multiplication. The local learning rates p are best adapted by exponentiated gradient descent [8, 9], so that they can cover a wide dynamic range while staying strictly positive:

ln p_t = ln p_{t−1} − μ ∂f_{w_t}(x_t)/∂ ln p, i.e. p_t = p_{t−1} · exp(μ δ_t · v_t), where v_t ≡ ∂w_t/∂ ln p, (2)

and μ is a global meta-learning rate. This approach rests on the assumption that each element of p affects f_w(x) only through the corresponding element of w. With considerable variation, (2) forms the basis of most local rate adaptation methods found in the literature. In order to avoid an expensive exponentiation [10] for each weight update, we typically use the linearization e^u ≈ 1 + u, valid for small |u|, giving

p_t = p_{t−1} · max(ϱ, 1 + μ δ_t · v_t), (3)

where we constrain the multiplier to be at least (typically) ϱ = 0.1 as a safeguard against unreasonably small or negative values. For the meta-level gradient descent to be stable, μ must in any case be chosen such that the multiplier for p does not stray far from unity; under these conditions we find the linear approximation (3) quite sufficient.

Definition of v. The gradient trace v should accurately measure the effect that a change in local learning rate has on the corresponding weight. It is tempting to consider only the immediate effect of a change in p_t on w_{t+1}: declaring w_t and δ_t in (1) to be independent of p_t, one then quickly arrives at

v_{t+1} = ∂w_{t+1}/∂ ln p_t = p_t · δ_t. (4)
= Pt· Ut u npt (4) However, this common approach [11, 12, 13, 14, 15] fails to take into account the incremental nature of gradient descent: a change in P affects not only the current update of W, but also future ones. Some authors account for this by setting v to an exponential average of past gradients [2, 11, 16]; we found empirically that the method of Almeida et al. [15] can indeed be improved by this approach [3]. While such averaging serves to reduce the stochasticity of the product It ·It-l implied by (3) and (4), the average remains one of immediate, single-step effects. By contrast, Sutton [17, 18] models the long-term effect of P on future weight updates in a linear system by carrying the relevant partials forward through time, as is done in real-time recurrent learning [19]. This results in an iterative update rule for v, which we have extended to nonlinear systems [3, 4]. We define vas an Online leA with Local Rate Adaptation 791 exponential average of the effect of all past changes in p on the current weights: ... ( A) ~ d OWt+1 Vt+ 1 = 1 L.J 1\ 0 1 ... i=O . npt-i (5) The forgetting factor 0 ~ A ~ 1 is a free parameter of the algorithm. Inserting (1) into (5) gives (6) where H t denotes the instantaneous Hessian of fw(i!) at time t. The approximation in (6) assumes that (Vi> 0) oPt!OPt-i = 0; this signifies a certain dependence on an appropriate choice of meta-learning rate p.. Note that there is an efficient O(n) algorithm to calculate HtVt without ever having to compute or store the matrix H t itself [20]; we shall elaborate on this technique for the case of independent component analysis below. Meta-level conditioning. The gradient descent in P at the meta-level (2) may of course suffer from ill-conditioning just like the descent in W at the main level (1); the meta-descent in fact squares the condition number when v is defined as the previous gradient, or an exponential average of past gradients. 
Special measures to improve conditioning are thus required to make meta-descent work in non-trivial systems. Many researchers [11, 12, 13, 14] use the sign function to radically normalize the p-update. Unfortunately such a nonlinearity does not preserve the zero-mean property that characterizes stochastic gradients in equilibrium; in particular, it will translate any skew in the equilibrium distribution into a non-zero mean change in p. This causes convergence to non-optimal step sizes, and renders such methods unsuitable for online learning. Notably, Almeida et al. [15] avoid this pitfall by using a running estimate of the gradient's stochastic variance as their meta-normalizer. In addition to modeling the long-term effect of a change in local learning rate, our iterative gradient trace serves as a highly effective conditioner for the meta-descent: the fixpoint of (6) is given by

v_t = [λ H_t + (1 − λ) diag(1/p_t)]^{−1} δ_t, (7)

a modified Newton step, which for typical values of λ (i.e., close to 1) scales with the inverse of the gradient. Consequently, we can expect the product δ_t · v_t in (2) to be a very well-conditioned quantity. Experiments with feedforward multi-layer perceptrons [3, 4] have confirmed that SMD does not require explicit meta-level normalization, and converges faster than alternative methods.

3 Application to ICA

We now apply the SMD technique to independent component analysis, using the Bell-Sejnowski algorithm [5] as our base method. The goal is to find an unmixing matrix W_t which up to scaling and permutation provides a good linear estimate u_t ≡ W_t x_t of the independent sources s_t present in a given mixture signal x_t. The mixture is generated linearly according to x_t = A_t s_t, where A_t is an unknown (and unobservable) full rank matrix. We include the well-known natural gradient [6] and kurtosis estimation [7] modifications to the basic algorithm, as well as a matrix P_t of local learning rates.
The resulting online update for the weight matrix W_t is

W_{t+1} = W_t + P_t · D_t, (8)

where the gradient D_t is given by

D_t ≡ −∂f_{W_t}(x_t)/∂W_t = (I − [u_t ± tanh(u_t)] u_t^T) W_t, (9)

with the sign for each component of the tanh(u_t) term depending on its current kurtosis estimate. Following Pearlmutter [20], we now define the differentiation operator

R_{V_t}(g(W_t)) ≡ ∂g(W_t + r V_t)/∂r |_{r=0}, (10)

which describes the effect on g of a perturbation of the weights in the direction of V_t. We can use R_{V_t} to efficiently calculate the Hessian-vector product

H_t * V_t ≡ vec^{−1}(H_t vec(V_t)) = R_{V_t}(∂f_{W_t}(x_t)/∂W_t), (11)

where "vec" is the operator that concatenates all columns of a matrix into a single column vector. Since R_{V_t} is a linear operator, we have

R_{V_t}(W_t) = V_t, (12)
R_{V_t}(u_t) = R_{V_t}(W_t x_t) = V_t x_t, (13)
R_{V_t}(tanh(u_t)) = diag(tanh'(u_t)) V_t x_t, (14)

and so forth (cf. [20]). Starting from (9), we apply the R_{V_t} operator to ∂f_{W_t}(x_t)/∂W_t = −D_t and obtain

H_t * V_t = R_{V_t}[([u_t ± tanh(u_t)] u_t^T − I) W_t]
= ([u_t ± tanh(u_t)] u_t^T − I) V_t + R_{V_t}([u_t ± tanh(u_t)] u_t^T) W_t
= ([u_t ± tanh(u_t)] u_t^T − I) V_t + [(I ± diag[tanh'(u_t)]) V_t x_t u_t^T + [u_t ± tanh(u_t)] (V_t x_t)^T] W_t. (15)

In conjunction with the matrix versions of our learning rate update (3),

P_t = P_{t−1} · max(ϱ, 1 + μ D_t · V_t), (16)

and gradient trace (6),

V_{t+1} = λ V_t + P_t · (D_t − λ H_t * V_t), (17)

this constitutes our SMD-ICA algorithm.

4 Experiment

The algorithm was tested on an artificial problem where 10 sources follow elliptic trajectories according to

x_t = (A_base + A_1 sin(ωt) + A_2 cos(ωt)) s_t, (18)

where A_base is a normally distributed mixing matrix, as are A_1 and A_2, whose columns represent the axes of the ellipses on which the sources travel. The velocities ω are normally distributed around a mean of one revolution for every 6000 data samples. All sources are supergaussian. The SMD-ICA algorithm was implemented with only online access to the data, including on-line whitening [21]. Whenever the condition number of the estimated whitening matrix exceeded a large threshold (set to 350 here), updates (16) and (17) were disabled to prevent the algorithm from diverging.
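As a sanity check on the algebra, the closed-form product (15) can be compared against a central finite difference of the gradient. The sketch below is ours: it fixes the kurtosis-dependent sign to the supergaussian (plus) case and applies R_V to ∂f/∂W = ([u + tanh(u)]u^T − I)W; function and variable names are illustrative.

```python
import numpy as np

def grad_f(W, x):
    # dF/dW = ([u + tanh(u)] u^T - I) W  with u = W x  (supergaussian sign)
    u = W @ x
    return (np.outer(u + np.tanh(u), u) - np.eye(len(u))) @ W

def hessian_vec(W, V, x):
    # Closed-form R_V(dF/dW), structured like the expansion in eq. (15)
    u, vx = W @ x, V @ x
    phi = u + np.tanh(u)
    dphi = 1.0 + (1.0 - np.tanh(u) ** 2)          # 1 + tanh'(u)
    term1 = (np.outer(phi, u) - np.eye(len(u))) @ V
    term2 = (np.outer(dphi * vx, u) + np.outer(phi, vx)) @ W
    return term1 + term2

rng = np.random.default_rng(1)
W, V = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
x = rng.standard_normal(3)
eps = 1e-6
fd = (grad_f(W + eps * V, x) - grad_f(W - eps * V, x)) / (2 * eps)
assert np.allclose(fd, hessian_vec(W, V, x), atol=1e-5)
```

The point of the exercise is the one made in the text: the product with the Hessian is obtained in O(n²) matrix operations, without ever forming the n² x n² Hessian of the matrix-valued loss.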
Other parameter settings were μ = 0.1, λ = 0.999, and p = 0.2. Runs that did not separate the 10 sources without ambiguity were discarded. Figure 1 shows the performance index from [6] (the lower the better, zero being the ideal case) along with the condition number of the mixing matrix, showing that the algorithm is robust to a temporary confusion in the separation. The abscissa represents 3000 data samples, divided into mini-batches of 10 each for efficiency.

Figure 1: Global view of the quality of separation. (Curves: error index and cond(A).)

Figure 2 shows the match between an actual mixing column and its estimate, in the subspace spanned by the elliptic trajectory. The singularity occurring halfway through is not damaging performance. Globally the algorithm remains stable as long as degenerate inputs are handled correctly.

Figure 2: Projection of a column from the mixing matrix. Arrows link the exact point with its estimate; the trajectory proceeds from lower right to upper left.

5 Conclusions

Once SMD-ICA has found a separating solution, we find it possible to simultaneously track ten sources that move independently at very different, a priori unknown speeds. To continue tracking over extended periods it is necessary to handle momentary singularities, through online estimation of the number of sources or some other heuristic solution. SMD's adaptation of local learning rates can then facilitate continuous, online use of ICA in rapidly changing environments.

Acknowledgments

This work was supported by the Swiss National Science Foundation under grants number 2000-052678.97/1 and 2100-054093.98.

References

[1] J. Karhunen and P. Pajunen, "Blind source separation and tracking using nonlinear PCA criterion: A least-squares approach", in Proc. IEEE Int. Conf. on Neural Networks, Houston, Texas, 1997, pp. 2147-2152.
[2] N. Murata, K.-R. Müller, A. Ziehe, and S.-i. Amari, "Adaptive on-line learning in changing environments", in Advances in Neural Information Processing Systems, M. C. Mozer, M. I. Jordan, and T. Petsche, Eds. 1997, vol. 9, pp. 599-605, The MIT Press, Cambridge, MA.
[3] N. N. Schraudolph, "Local gain adaptation in stochastic gradient descent", in Proceedings of the 9th International Conference on Artificial Neural Networks, Edinburgh, Scotland, 1999, pp. 569-574, IEE, London, ftp://ftp.idsia.ch/pub/nic/smd.ps.gz.
[4] N. N. Schraudolph, "Online learning with adaptive local step sizes", in Neural Nets WIRN Vietri-99: Proceedings of the 11th Italian Workshop on Neural Nets, M. Marinaro and R. Tagliaferri, Eds., Vietri sul Mare, Salerno, Italy, 1999, Perspectives in Neural Computing, pp. 151-156, Springer Verlag, Berlin.
[5] A. J. Bell and T. J. Sejnowski, "An information-maximization approach to blind separation and blind deconvolution", Neural Computation, 7(6):1129-1159, 1995.
[6] S.-i. Amari, A. Cichocki, and H. H. Yang, "A new learning algorithm for blind signal separation", in Advances in Neural Information Processing Systems, D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, Eds. 1996, vol. 8, pp. 757-763, The MIT Press, Cambridge, MA.
[7] M. Girolami and C. Fyfe, "Generalised independent component analysis through unsupervised learning with emergent bussgang properties", in Proc. IEEE Int. Conf. on Neural Networks, Houston, Texas, 1997, pp. 1788-1791.
[8] J. Kivinen and M. K. Warmuth, "Exponentiated gradient versus gradient descent for linear predictors", Tech. Rep. UCSC-CRL-94-16, University of California, Santa Cruz, June 1994.
[9] J. Kivinen and M. K. Warmuth, "Additive versus exponentiated gradient updates for linear prediction", in Proc. 27th Annual ACM Symposium on Theory of Computing, New York, NY, May 1995, pp. 209-218, The Association for Computing Machinery.
[10] N. N. Schraudolph, "A fast, compact approximation of the exponential function", Neural Computation, 11(4):853-862, 1999.
[11] R. Jacobs, "Increased rates of convergence through learning rate adaptation", Neural Networks, 1:295-307, 1988.
[12] T. Tollenaere, "SuperSAB: fast adaptive back propagation with good scaling properties", Neural Networks, 3:561-573, 1990.
[13] F. M. Silva and L. B. Almeida, "Speeding up back-propagation", in Advanced Neural Computers, R. Eckmiller, Ed., Amsterdam, 1990, pp. 151-158, Elsevier.
[14] M. Riedmiller and H. Braun, "A direct adaptive method for faster backpropagation learning: The RPROP algorithm", in Proc. International Conference on Neural Networks, San Francisco, CA, 1993, pp. 586-591, IEEE, New York.
[15] L. B. Almeida, T. Langlois, J. D. Amaral, and A. Plakhov, "Parameter adaptation in stochastic optimization", in On-Line Learning in Neural Networks, D. Saad, Ed., Publications of the Newton Institute, chapter 6. Cambridge University Press, 1999, ftp://146.193.2.131/pub/lba/papers/adsteps.ps.gz.
[16] M. E. Harmon and L. C. Baird III, "Multi-player residual advantage learning with general function approximation", Tech. Rep. WL-TR-1065, Wright Laboratory, WL/AACF, 2241 Avionics Circle, Wright-Patterson Air Force Base, OH 45433-7308, 1996, http://www.leemon.com/papers/sim_tech/sim_tech.ps.gz.
[17] R. S. Sutton, "Adapting bias by gradient descent: an incremental version of delta-bar-delta", in Proc. 10th National Conference on Artificial Intelligence. 1992, pp. 171-176, The MIT Press, Cambridge, MA, ftp://ftp.cs.umass.edu/pub/anw/pub/sutton/sutton-92a.ps.gz.
[18] R. S. Sutton, "Gain adaptation beats least squares?", in Proc. 7th Yale Workshop on Adaptive and Learning Systems, 1992, pp. 161-166, ftp://ftp.cs.umass.edu/pub/anw/pub/sutton/sutton-92b.ps.gz.
[19] R. Williams and D. Zipser, "A learning algorithm for continually running fully recurrent neural networks", Neural Computation, 1:270-280, 1989.
[20] B. A. Pearlmutter, "Fast exact multiplication by the Hessian", Neural Computation, 6(1):147-160, 1994.
[21] J. Karhunen, E. Oja, L. Wang, R. Vigario, and J. Joutsensalo, "A class of neural networks for independent component analysis", IEEE Trans. on Neural Networks, 8(3):486-504, 1997.
|
1999
|
62
|
1,712
|
A Variational Bayesian Framework for Graphical Models

Hagai Attias hagai@gatsby.ucl.ac.uk
Gatsby Unit, University College London, 17 Queen Square, London WC1N 3AR, U.K.

Abstract

This paper presents a novel practical framework for Bayesian model averaging and model selection in probabilistic graphical models. Our approach approximates full posterior distributions over model parameters and structures, as well as latent variables, in an analytical manner. These posteriors fall out of a free-form optimization procedure, which naturally incorporates conjugate priors. Unlike in large sample approximations, the posteriors are generally non-Gaussian and no Hessian needs to be computed. Predictive quantities are obtained analytically. The resulting algorithm generalizes the standard Expectation Maximization algorithm, and its convergence is guaranteed. We demonstrate that this approach can be applied to a large class of models in several domains, including mixture models and source separation.

1 Introduction

A standard method to learn a graphical model¹ from data is maximum likelihood (ML). Given a training dataset, ML estimates a single optimal value for the model parameters within a fixed graph structure. However, ML is well known for its tendency to overfit the data. Overfitting becomes more severe for complex models involving high-dimensional real-world data such as images, speech, and text. Another problem is that ML prefers complex models, since they have more parameters and fit the data better. Hence, ML cannot optimize model structure. The Bayesian framework provides, in principle, a solution to these problems. Rather than focusing on a single model, a Bayesian considers a whole (finite or infinite) class of models. For each model, its posterior probability given the dataset is computed. Predictions for test data are made by averaging the predictions of all the individual models, weighted by their posteriors.
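The averaging just described, with each model's prediction weighted by its posterior p(m | Y) ∝ p(Y | m) p(m), can be sketched in a few lines (function and variable names are our own; this is an illustration, not code from the paper):

```python
import numpy as np

def averaged_prediction(preds, log_evidence, log_prior):
    """Bayesian model averaging over a finite class of models.

    preds:        per-model predictions for a test point
    log_evidence: log p(Y | m) for each model m
    log_prior:    log p(m) for each model m
    """
    log_post = log_evidence + log_prior       # unnormalized log p(m | Y)
    log_post = log_post - log_post.max()      # stabilize before exponentiating
    w = np.exp(log_post)
    w /= w.sum()                              # normalized model posterior
    return w @ preds, w
```

Working in the log domain and subtracting the maximum before exponentiating avoids underflow, since marginal likelihoods of realistic datasets are vanishingly small numbers.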
Thus, the Bayesian framework avoids overfitting by integrating out the parameters. In addition, complex models are automatically penalized by being assigned a lower posterior probability, therefore optimal structures can be identified. Unfortunately, computations in the Bayesian framework are intractable even for very simple cases (e.g. factor analysis; see [2]). Most existing approximation methods fall into two classes [3]: Markov chain Monte Carlo methods and large sample methods (e.g., Laplace approximation). MCMC methods attempt to achieve exact results but typically require vast computational resources, and become impractical for complex models in high data dimensions. Large sample methods are tractable, but typically make a drastic approximation by modeling the posteriors over all parameters as Normal, even for parameters that are positive definite (e.g., covariance matrices). In addition, they require the computation of the Hessian, which may become quite intensive. In this paper I present Variational Bayes (VB), a practical framework for Bayesian computations in graphical models. VB draws together variational ideas from intractable latent variables models [8] and from Bayesian inference [4, 5, 9], which, in turn, draw on the work of [6]. This framework facilitates analytical calculations of posterior distributions over the hidden variables, parameters and structures. The posteriors fall out of a free-form optimization procedure which naturally incorporates conjugate priors, and emerge in standard forms, only one of which is Normal. They are computed via an iterative algorithm that is closely related to Expectation Maximization (EM) and whose convergence is guaranteed. No Hessian needs to be computed. In addition, averaging over models to compute predictive quantities can be performed analytically.

¹We use the term 'model' to refer collectively to parameters and structure.
Model selection is done using the posterior over structure; in particular, the BIC/MDL criteria emerge as a limiting case.

2 General Framework

We restrict our attention in this paper to directed acyclic graphs (DAGs, a.k.a. Bayesian networks). Let Y = {y_1, ..., y_N} denote the visible (data) nodes, where n = 1, ..., N runs over the data instances, and let X = {x_1, ..., x_N} denote the hidden nodes. Let Θ denote the parameters, which are simply additional hidden nodes with their own distributions. A model with a fixed structure m is fully defined by the joint distribution p(Y, X, Θ | m). In a DAG, this joint factorizes over the nodes, i.e. p(Y, X | Θ, m) = ∏_i p(u_i | pa_i, θ_i, m), where u_i ∈ Y ∪ X, pa_i is the set of parents of u_i, and θ_i ∈ Θ parametrize the edges directed toward u_i. In addition, we usually assume independent instances, p(Y, X | Θ, m) = ∏_n p(y_n, x_n | Θ, m). We shall also consider a set of structures m ∈ M, where m controls the number of hidden nodes and the functional forms of the dependencies p(u_i | pa_i, θ_i, m), including the range of values assumed by each node (e.g., the number of components in a mixture model). Associated with the set of structures is a structure prior p(m).

Marginal likelihood and posterior over parameters. For a fixed structure m, we are interested in two quantities. The first is the parameter posterior distribution p(Θ | Y, m). The second is the marginal likelihood p(Y | m), also known as the evidence assigned to structure m by the data. In the following, the reference to m is usually omitted but is always implied. Both quantities are obtained from the joint p(Y, X, Θ | m). For models with no hidden nodes, the required computations can often be performed analytically. However, in the presence of hidden nodes, these quantities become computationally intractable. We shall approximate them using a variational approach as follows. Consider the joint posterior p(X, Θ | Y) over hidden nodes and parameters.
Since it is intractable, consider a variational posterior q(X, Θ | Y), which is restricted to the factorized form

q(X, Θ | Y) = q(X | Y) q(Θ | Y), (1)

where, given the data, the parameters and hidden nodes are independent. This restriction is the key: it makes q approximate but tractable. Notice that we do not require complete factorization, as the parameters and hidden nodes may still be correlated amongst themselves. We compute q by optimizing a cost function F_m[q] defined by

F_m[q] = ∫ dΘ Σ_X q(X) q(Θ) log [p(Y, X, Θ | m) / (q(X) q(Θ))] ≤ log p(Y | m), (2)

where the inequality holds for an arbitrary q and follows from Jensen's inequality (see [6]); it becomes an equality when q is the true posterior. Note that q is always understood to include conditioning on Y as in (1). Since F_m is bounded from above by the marginal likelihood, we can obtain the optimal posteriors by maximizing it w.r.t. q. This can be shown to be equivalent to minimizing the KL distance between q and the true posterior. Thus, optimizing F_m produces the best approximation to the true posterior within the space of distributions satisfying (1), as well as the tightest lower bound on the true marginal likelihood.

Penalizing complex models. To see that the VB objective function F_m penalizes complexity, it is useful to rewrite it as

F_m = ⟨log [p(Y, X | Θ) / q(X)]⟩_{X,Θ} − KL[q(Θ) || p(Θ)], (3)

where the average in the first term on the r.h.s. is taken w.r.t. q(X, Θ). The first term corresponds to the (averaged) likelihood. The second term is the KL distance between the prior and posterior over the parameters. As the number of parameters increases, the KL distance follows and consequently reduces F_m. This penalized likelihood interpretation becomes transparent in the large sample limit N → ∞, where the parameter posterior is sharply peaked about the most probable value Θ = Θ₀.
It can then be shown that the KL penalty reduces to (|Θ₀|/2) log N, which is linear in the number of parameters |Θ₀| of structure m. F_m then corresponds precisely to the Bayesian information criterion (BIC) and the minimum description length criterion (MDL) (see [3]). Thus, these popular model selection criteria follow as a limiting case of the VB framework.

Free-form optimization and an EM-like algorithm. Rather than assuming a specific parametric form for the posteriors, we let them fall out of free-form optimization of the VB objective function. This results in an iterative algorithm directly analogous to ordinary EM. In the E-step, we compute the posterior over the hidden nodes by solving ∂F_m/∂q(X) = 0 to get

q(X) ∝ e^{⟨log p(Y, X | Θ)⟩_Θ}, (4)

where the average is taken w.r.t. q(Θ). In the M-step, rather than the 'optimal' parameters, we compute the posterior distribution over the parameters by solving ∂F_m/∂q(Θ) = 0 to get

q(Θ) ∝ e^{⟨log p(Y, X | Θ)⟩_X} p(Θ), (5)

where the average is taken w.r.t. q(X). This is where the concept of conjugate priors becomes useful. Denoting the exponential term on the r.h.s. of (5) by f(Θ), we choose the prior p(Θ) from a family of distributions such that q(Θ) ∝ f(Θ) p(Θ) belongs to that same family. p(Θ) is then said to be conjugate to f(Θ). This procedure allows us to select a prior from a fairly large family of distributions (which includes non-informative ones as limiting cases) and thus not compromise generality, while facilitating mathematical simplicity and elegance. In particular, learning in the VB framework simply amounts to updating the hyperparameters, i.e., transforming the prior parameters to the posterior parameters. We point out that, while the use of conjugate priors is widespread in statistics, so far they could only be applied to models where all nodes were visible.

Structure posterior.
To compute q(m) we exploit Jensen's inequality once again to define a more general objective function,

F[q] = Σ_{m∈M} q(m) [ F_m + log p(m)/q(m) ] ≤ log p(Y),

where now q = q(X | m, Y) q(Θ | m, Y) q(m | Y). After computing F_m for each m ∈ M, the structure posterior is obtained by free-form optimization of F:

q(m) ∝ e^{F_m} p(m). (6)

Hence, prior assumptions about the likelihood of different structures, encoded by the prior p(m), affect the selection of optimal model structures performed according to q(m), as they should. Predictive quantities. The ultimate goal of Bayesian inference is to estimate predictive quantities, such as a density or regression function. Generally, these quantities are computed by averaging over all models, weighting each model by its posterior. In the VB framework, exact model averaging is approximated by replacing the true posterior p(Θ | Y) by the variational q(Θ | Y). In density estimation, for example, the density assigned to a new data point y is given by p(y | Y) = ∫ dΘ p(y | Θ) q(Θ | Y). In some situations (e.g. source separation), an estimate of hidden node values x from new data y may be required. The relevant quantity here is the conditional p(x | y, Y), from which the most likely value of hidden nodes is extracted. VB approximates it by p(x | y, Y) ∝ ∫ dΘ p(y, x | Θ) q(Θ | Y). 3 Variational Bayes Mixture Models Mixture models have been investigated and analyzed extensively over many years. However, the well known problems of regularizing against likelihood divergences and of determining the required number of mixture components are still open. Whereas in theory the Bayesian approach provides a solution, no satisfactory practical algorithm has emerged from the application of involved sampling techniques (e.g., [7]) and approximation methods [3] to this problem. We now present the solution provided by VB.
We consider models of the form

p(y_n | Θ, m) = Σ_{s=1}^m p(y_n | s_n = s, Θ) p(s_n = s | Θ), (7)

where y_n denotes the nth observed data vector, and s_n denotes the hidden component that generated it. The components are labeled by s = 1, ..., m, with the structure parameter m denoting the number of components. Whereas our approach can be applied to arbitrary models, for simplicity we consider here Normal component distributions, p(y_n | s_n = s, Θ) = N(μ_s, Γ_s), where μ_s is the mean and Γ_s the precision (inverse covariance) matrix. The mixing proportions are p(s_n = s | Θ) = π_s. In hindsight, we use conjugate priors on the parameters Θ = {π_s, μ_s, Γ_s}. The mixing proportions are jointly Dirichlet, p({π_s}) = D(λ⁰), the means (conditioned on the precisions) are Normal, p(μ_s | Γ_s) = N(ρ⁰, β⁰Γ_s), and the precisions are Wishart, p(Γ_s) = W(ν⁰, Σ⁰). We find that the parameter posterior for a fixed m factorizes into q(Θ) = q({π_s}) Π_s q(μ_s, Γ_s). The posteriors are obtained by the following iterative algorithm, termed VB-MOG. E-step. Compute the responsibilities for instance n using (4):

γ_s^n ≡ q(s_n = s | y_n) ∝ π̃_s r̃_s^{1/2} e^{−(y_n − ρ_s)^T Γ̄_s (y_n − ρ_s)/2} e^{−d/(2β_s)}, (8)

noting that here X = S and q(S) = Π_n q(s_n). This expression resembles the responsibilities in ordinary ML; the differences stem from integrating out the parameters. The special quantities in (8) are

log π̃_s ≡ ⟨log π_s⟩ = ψ(λ_s) − ψ(Σ_{s'} λ_{s'}),
log r̃_s ≡ ⟨log |Γ_s|⟩ = Σ_{i=1}^d ψ((ν_s + 1 − i)/2) − log |Σ_s| + d log 2,
Γ̄_s ≡ ⟨Γ_s⟩ = ν_s Σ_s^{−1},

where ψ(x) = d log Γ(x)/dx is the digamma function, and the averages ⟨·⟩ are taken w.r.t. q(Θ). The other parameters are described below. M-step. Compute the parameter posterior in two stages. First, compute the quantities

π̄_s = (1/N) Σ_{n=1}^N γ_s^n,   μ̄_s = (1/N̄_s) Σ_{n=1}^N γ_s^n y_n,   Σ̄_s = (1/N̄_s) Σ_{n=1}^N γ_s^n C_s^n, (9)

where C_s^n = (y_n − μ̄_s)(y_n − μ̄_s)^T and N̄_s = N π̄_s.
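A minimal 1-D sketch may make one VB-MOG iteration concrete. All data and hyperparameter values below are illustrative only, and in one dimension the Wishart factors reduce to Gamma distributions; the stage-2 hyperparameter updates are those of Eq. (10). What distinguishes the E-step from ordinary EM is that it uses the *expected* log-weights and log-precisions under q(Θ), not point estimates.

```python
import numpy as np
from scipy.special import digamma

# 1-D sketch of one VB-MOG iteration (Eqs. 8-10); illustrative values.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(-2, 1, 80), rng.normal(3, 1, 80)])
lam0, rho0, beta0, nu0, Sig0 = 1.0, 0.0, 1.0, 1.0, 1.0   # assumed priors

# current posterior hyperparameters (assumed, for a single iteration)
lam = np.array([60.0, 100.0]); rho = np.array([-1.5, 2.5])
beta = np.array([60.0, 100.0]); nu = np.array([60.0, 100.0])
Sig = np.array([80.0, 110.0])

# E-step (Eq. 8): responsibilities via expected log-quantities
log_pi = digamma(lam) - digamma(lam.sum())            # <log pi_s>
log_r = digamma(nu / 2) - np.log(Sig) + np.log(2)     # <log Gamma_s>, d = 1
r_bar = nu / Sig                                      # <Gamma_s>
log_g = (log_pi + 0.5 * log_r
         - 0.5 * r_bar * (y[:, None] - rho) ** 2 - 0.5 / beta)
g = np.exp(log_g - log_g.max(axis=1, keepdims=True))
g /= g.sum(axis=1, keepdims=True)                     # gamma_s^n

# M-step, stage 1 (Eq. 9): weighted statistics
Nbar = g.sum(axis=0)                                  # N_s = N * pi_s
mu = (g * y[:, None]).sum(axis=0) / Nbar
Sbar = (g * (y[:, None] - mu) ** 2).sum(axis=0) / Nbar

# M-step, stage 2 (Eq. 10): hyperparameter updates
lam = Nbar + lam0
rho = (Nbar * mu + beta0 * rho0) / (Nbar + beta0)
beta = Nbar + beta0
nu = Nbar + nu0
Sig = Nbar * Sbar + Nbar * beta0 * (mu - rho0) ** 2 / (Nbar + beta0) + Sig0

assert np.allclose(g.sum(axis=1), 1.0)
assert rho[0] < 0 < rho[1]   # posterior means track the two clusters
```

Iterating these two steps to convergence yields the hyperparameters that form the output of VB-MOG.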
This stage is identical to the M-step in ordinary EM, where it produces the new parameters. In VB, however, the quantities in (9) only help characterize the new parameter posteriors. These posteriors are functionally identical to the priors but have different parameter values. The mixing proportions are jointly Dirichlet, q({π_s}) = D({λ_s}), the means are Normal, q(μ_s | Γ_s) = N(ρ_s, β_s Γ_s), and the precisions are Wishart, q(Γ_s) = W(ν_s, Σ_s). The posterior parameters are updated in the second stage, using the simple rules

λ_s = N̄_s + λ⁰,   ρ_s = (N̄_s μ̄_s + β⁰ρ⁰)/(N̄_s + β⁰),   β_s = N̄_s + β⁰,
ν_s = N̄_s + ν⁰,   Σ_s = N̄_s Σ̄_s + N̄_s β⁰ (μ̄_s − ρ⁰)(μ̄_s − ρ⁰)^T/(N̄_s + β⁰) + Σ⁰. (10)

The final values of the posterior parameters form the output of VB-MOG. We remark that (a) whereas no specific assumptions have been made about them, the parameter posteriors emerge in suitable, non-trivial (and generally non-Normal) functional forms; (b) the computational overhead of VB-MOG compared to EM is minimal; (c) the covariance of the parameter posterior is O(1/N), and VB-MOG reduces to EM (regularized by the priors) as N → ∞; (d) VB-MOG has no divergence problems; (e) stability is guaranteed by the existence of an objective function; (f) finally, the approximate marginal likelihood F_m, required to optimize the number of components via (6), can also be obtained in closed form (omitted). Predictive Density. Using our posteriors, we can integrate out the parameters and show that the density assigned by the model to a new data vector y is a mixture of Student-t distributions,

p(y | Y) = Σ_{s=1}^m π̄_s t_{ω_s}(y | ρ_s, Λ_s), (11)

where component s has ω_s = ν_s + 1 − d d.o.f., mean ρ_s, covariance Λ_s = ((β_s + 1)/(β_s ω_s)) Σ_s, and proportion π̄_s = λ_s / Σ_{s'} λ_{s'}. (11) reduces to a MOG as N → ∞. Nonlinear Regression. We may divide each data vector into input and output parts, y = (y^i, y^o), and use the model to estimate the regression function y^o = f(y^i) and error spheres.
These may be extracted from the conditional p(y^o | y^i, Y) = Σ_{s=1}^m w_s t_{ω_s}(y^o | ρ_s^o, Λ_s^o), which also turns out to be a mixture of Student-t distributions, with means ρ_s^o being linear, and covariances Λ_s^o and mixing proportions w_s nonlinear, in y^i, and given in terms of the posterior parameters.

Figure 1: VB-MOG applied to handwritten digit recognition. Left: example digits from the Buffalo post office dataset; right: misclassification rate histograms (VB = solid, EM = dashed).

VB-MOG was applied to the Boston housing dataset (UCI machine learning repository), where 13 inputs are used to predict the single output, a house's price. 100 random divisions of the N = 506 dataset into 481 training and 25 test points were used, resulting in an average MSE of 11.9. Whereas ours is not a discriminative method, it was nevertheless competitive with Breiman's (1994) bagging technique using regression trees (MSE = 11.7). For comparison, EM achieved MSE = 14.6. Classification. Here, a separate parameter posterior is computed for each class c from a training dataset Y^c. Test data vector y is then classified according to the conditional p(c | y, {Y^c}), which has a form identical to (11) (with c-dependent parameters) multiplied by the relative size of Y^c. VB-MOG was applied to the Buffalo post office dataset, which contains 1100 examples for each digit 0-9. Each digit is a gray-level 8 x 8 pixel array (see examples in Fig. 1 (left)). We used 10 random 500-digit batches for training, and a separate batch of 200 for testing. An average misclassification rate of .018 was obtained using m = 30 components; EM achieved .025. The misclassification histograms (VB = solid, EM = dashed) are shown in Fig. 1 (right). 4 VB and Intractable Models: a Blind Separation Example The discussion so far assumed that a free-form optimization of the VB objective function is feasible.
Unfortunately, for many interesting models, in particular models where ordinary ML is intractable, this is not the case. For such models, we modify the VB procedure as follows: (a) specify a parametric functional form for the posterior over the hidden nodes q(X), and optimize w.r.t. its parameters, in the spirit of [8]; (b) let the parameter posterior q(Θ) fall out of free-form optimization, as before. We illustrate this approach in the context of the blind source separation (BSS) problem (see, e.g., [1]). This problem is described by y_n = H x_n + u_n, where x_n is an unobserved m-dim source vector at instance n, H is an unknown mixing matrix, and the noise u_n is Normally distributed with an unknown precision λI. The task is to construct a source estimate x̂_n from the observed d-dim data y. The sources are independent and non-Normally distributed. Here we assume the high-kurtosis distribution p(x_i) ∝ cosh^{−1}(x_i/2), which is appropriate for modeling speech sources. One important but heretofore unresolved problem in BSS is determining the number m of sources from data. Another is to avoid overfitting the mixing matrix. Both problems, typical to ML algorithms, can be remedied using VB. It is the non-Normal nature of the sources that renders the source posterior p(X | Y) intractable even before a Bayesian treatment. We use a Normal variational posterior q(X) = Π_n N(x_n | ρ_n, Γ_n) with instance-dependent mean and precision. The mixing matrix posterior q(H) then emerges as Normal. For simplicity, λ is optimized rather than integrated out. The resulting VB-BSS algorithm runs as follows:

Figure 2: Application of VB to blind source separation (see text). Left: log Pr(m) vs. the number of sources m; right: source reconstruction error vs. SNR (dB).

E-step.
Optimize the variational mean ρ_n by iterating to convergence, for each n, the fixed-point equation

λ H̄^T (y_n − H̄ ρ_n) − (1/2) tanh(ρ_n/2) = C^{−1} ρ_n,

where C is the source covariance conditioned on the data. The variational precision matrix turns out to be n-independent: Γ_n = λ ⟨H^T H⟩ + 1/2 + C^{−1}. M-step. Update the mean and precision of the posterior q(H) (rules omitted). This algorithm was applied to 11-dim data generated by linearly mixing 5 100-msec-long speech and music signals obtained from commercial CDs. Gaussian noise was added at different SNR levels. A uniform structure prior p(m) = 1/K for m ≤ K was used. The resulting posterior over the number of sources (Fig. 2 (left)) is peaked at the correct value m = 5. The sources were then reconstructed from test data via p(x | y, Y). The log reconstruction error is plotted vs. SNR in Fig. 2 (right, solid). The ML error (which includes no model averaging) is also shown (dashed) and is larger, reflecting overfitting. 5 Conclusion The VB framework is applicable to a large class of graphical models. In fact, it may be integrated with the junction tree algorithm to produce general inference engines with minimal overhead compared to ML ones. Dirichlet, Normal and Wishart posteriors are not special to the models treated here but emerge as a general feature. Current research efforts include applications to multinomial models and to learning the structure of complex dynamic probabilistic networks. Acknowledgements I thank Matt Beal, Peter Dayan, David Mackay, Carl Rasmussen, and especially Zoubin Ghahramani, for important discussions. References [1] Attias, H. (1999). Independent Factor Analysis. Neural Computation 11, 803-851. [2] Bishop, C.M. (1999). Variational Principal Component Analysis. Proc. 9th ICANN. [3] Chickering, D.M. & Heckerman, D. (1997). Efficient approximations for the marginal likelihood of Bayesian networks with hidden variables. Machine Learning 29, 181-212. [4] Hinton, G.E. & Van Camp, D. (1993).
Keeping neural networks simple by minimizing the description length of the weights. Proc. 6th COLT, 5-13. [5) Jaakkola, T. & Jordan, M.L (1997). Bayesian logistic regression: A variational approach. Statistics and Artificial Intelligence 6 (Smyth, P. & Madigan, D., Eds). [6) Neal, R.M. & Hinton, G.E. (1998). A view of the EM algorithm that justifies incremental, sparse, and other variants. Learning in Graphical Models, 355-368 (Jordan, M.L, Ed). Kluwer Academic Press, Norwell, MA. [7) Richardson, S. & Green, P.J. (1997). On Bayesian analysis of mixtures with an unknown number of components. Journal of the Royal Statistical Society B, 59, 731-792. [8) Saul, L.K., Jaakkola, T., & Jordan, M.I. (1996). Mean field theory of sigmoid belief networks. Journal of Artificial Intelligence Research 4, 61-76. [9) Waterhouse, S., Mackay, D., & Robinson, T. (1996). Bayesian methods for mixture of experts. NIPS-8 (Touretzky, D.S. et aI, Eds). MIT Press.
|
1999
|
63
|
1,713
|
Algebraic Analysis for Non-Regular Learning Machines Sumio Watanabe Precision and Intelligence Laboratory Tokyo Institute of Technology 4259 Nagatsuta, Midori-ku, Yokohama 223 Japan swatanab@pi.titech.ac.jp Abstract Hierarchical learning machines are non-regular and non-identifiable statistical models, whose true parameter sets are analytic sets with singularities. Using algebraic analysis, we rigorously prove that the stochastic complexity of a non-identifiable learning machine is asymptotically equal to λ₁ log n − (m₁ − 1) log log n + const., where n is the number of training samples. Moreover we show that the rational number λ₁ and the integer m₁ can be algorithmically calculated using resolution of singularities in algebraic geometry. Also we obtain inequalities 0 < λ₁ ≤ d/2 and 1 ≤ m₁ ≤ d, where d is the number of parameters. 1 Introduction Hierarchical learning machines such as multi-layer perceptrons, radial basis functions, and normal mixtures are non-regular and non-identifiable learning machines. If the true distribution is almost contained in a learning model, then the set of true parameters is not one point but an analytic variety [4][9][3][10]. This paper establishes the mathematical foundation to analyze such learning machines based on algebraic analysis and algebraic geometry. Let us consider a learning machine represented by a conditional probability density p(x|w) where x is an M dimensional vector and w is a d dimensional parameter. We assume that n training samples x^n = {X_i; i = 1, 2, ..., n} are independently taken from the true probability distribution q(x), and that the set of true parameters W₀ = {w ∈ W ; p(x|w) = q(x) (a.s. q(x))} is not empty. In Bayes statistics, the estimated distribution p(x|x^n) is defined by

p(x|x^n) = ∫ p(x|w) p_n(w) dw,   p_n(w) = (1/Z_n) Π_{i=1}^n p(X_i|w) φ(w),

where φ(w) is an a priori probability density on R^d, and Z_n is a normalizing constant.
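The Bayes predictive distribution p(x | x^n) defined above can be checked numerically in the simplest identifiable case. The sketch below (an assumed toy example, not from the paper) uses a Bernoulli likelihood with a uniform prior φ(w) on [0, 1], where the integral has the known closed form (k + 1)/(n + 2), k being the number of ones observed.

```python
import numpy as np

# Bayes predictive distribution p(x | x^n) = ∫ p(x|w) p_n(w) dw for a
# Bernoulli model with uniform prior phi(w) = 1 on [0,1] (toy example).
X = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])   # n = 10 samples
n, k = len(X), int(X.sum())                    # k = number of ones

w = np.linspace(0.0, 1.0, 100_001)
dw = w[1] - w[0]
post = w ** k * (1 - w) ** (n - k)             # p_n(w) up to the constant Z_n
post /= post.sum() * dw                        # normalize numerically

pred_one = (post * w).sum() * dw               # p(x = 1 | x^n)
assert abs(pred_one - (k + 1) / (n + 2)) < 1e-4   # Laplace-rule closed form
```

The non-regular case studied in this paper differs precisely in that W₀ is a set with singularities rather than a point, so no such simple closed form is available.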
The generalization error is defined by

K(n) = E_{x^n} { ∫ q(x) log [ q(x) / p(x|x^n) ] dx },

where E_{x^n}{·} shows the expectation value over all training samples x^n. One of the main purposes in learning theory is to clarify how fast K(n) converges to zero as n tends to infinity. Using the log-loss function h(x, w) = log q(x) − log p(x|w), we define the Kullback distance and the empirical one,

H(w) = ∫ h(x, w) q(x) dx,   H(w, x^n) = (1/n) Σ_{i=1}^n h(X_i, w).

Note that the set of true parameters is equal to the set of zeros of H(w), W₀ = {w ∈ W ; H(w) = 0}. If the true parameter set W₀ consists of only one point, the learning machine p(x|w) is called identifiable, and otherwise non-identifiable. It should be emphasized that, in non-identifiable learning machines, W₀ is not a manifold but an analytic set with singular points, in general. Let us define the stochastic complexity by

F(n) = −E_{x^n} { log ∫ exp(−n H(w, x^n)) φ(w) dw }. (1)

Then we have an important relation between the stochastic complexity F(n) and the generalization error K(n),

K(n) = F(n + 1) − F(n),

which represents that K(n) is equal to the increase of F(n) [1]. In this paper, we show the rigorous asymptotic form of the stochastic complexity F(n) for general non-identifiable learning machines. 2 Main Results We need three assumptions upon which the main results are proven. (A.1) The probability density φ(w) is infinitely continuously differentiable and its support, W ≡ supp φ, is compact. In other words, φ ∈ C₀^∞. (A.2) The log loss function, h(x, w) = log q(x) − log p(x|w), is continuous for x in the support Q ≡ supp q, and is analytic for w in an open set W' ⊃ W. (A.3) Let {r_j(x, w*); j = 1, 2, ..., d} be the associated convergence radii of h(x, w) at w*; in other words, the Taylor expansion of h(x, w) at w* = (w₁*, ..., w_d*),

h(x, w) = Σ_{k₁,...,k_d=0}^∞ a_{k₁k₂···k_d}(x) (w₁ − w₁*)^{k₁} (w₂ − w₂*)^{k₂} ··· (w_d − w_d*)^{k_d},

absolutely converges in |w_j − w_j*| < r_j(x, w*).
Assume inf_{x∈Q} inf_{w*∈W} r_j(x, w*) > 0 for j = 1, 2, ..., d. Theorem 1 Assume (A.1), (A.2), and (A.3). Then, there exist a rational number λ₁ > 0, a natural number m₁, and a constant C, such that

|F(n) − λ₁ log n + (m₁ − 1) log log n| < C

holds for an arbitrary natural number n. Remarks. (1) If q(x) is compactly supported, then the assumption (A.3) is automatically satisfied. (2) Without assumptions (A.1) and (A.3), we can prove the upper bound, F(n) ≤ λ₁ log n − (m₁ − 1) log log n + const. From Theorem 1, if the generalization error K(n) has an asymptotic expansion, then it should be

K(n) = λ₁/n − (m₁ − 1)/(n log n) + o(1/(n log n)).

As is well known, if the model is identifiable and has a positive definite Fisher information matrix, then λ₁ = d/2 (d is the dimension of the parameter space) and m₁ = 1. However, hierarchical learning models such as multi-layer perceptrons, radial basis functions, and normal mixtures have smaller λ₁ and larger m₁; in other words, hierarchical models are better learning machines than regular ones if Bayes estimation is applied. Constants λ₁ and m₁ are characterized by the following theorem. Theorem 2 Assume the same conditions as Theorem 1. Let ε > 0 be a sufficiently small constant. The holomorphic function in Re(z) > 0,

J(z) = ∫_{H(w)<ε} H(w)^z φ(w) dw,

can be analytically continued to the entire complex plane as a meromorphic function whose poles are on the negative part of the real axis, and the constants −λ₁ and m₁ in Theorem 1 are equal to the largest pole of J(z) and its multiplicity, respectively. The proofs of the above theorems are explained in the following section. Let w = g(u) be an arbitrary analytic function from a set U ⊂ R^d to W. Then J(z) is invariant under the mapping {H(w), φ(w)} → {H(g(u)), φ(g(u)) |g'(u)|}, where |g'(u)| = |det(∂w_i/∂u_j)| is the Jacobian. This fact shows that λ₁ and m₁ are invariant under a bi-rational mapping.
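Both theorems can be illustrated on the simplest regular one-parameter case H(w) = w², φ(w) = 1 on [0, 1] (an assumed toy model, not from the paper): here J(z) = ∫₀¹ w^{2z} dw = 1/(2z+1) has a single simple pole at z = −1/2, so λ₁ = 1/2 = d/2 and m₁ = 1, recovering the regular BIC rate. The sketch below checks J numerically and also checks the finite-difference relation K(n) = F(n+1) − F(n) against the asymptotic expansion, using a made-up (λ₁, m₁) pair to exercise the log log term.

```python
import numpy as np

# (i) For H(w) = w^2, phi = 1 on [0,1]: J(z) = 1/(2z + 1), pole at -1/2.
def J_numeric(z, n=200_000):
    w = (np.arange(n) + 0.5) / n          # midpoint rule on (0, 1)
    return np.mean(w ** (2 * z))

for z in [0.5, 1.0, 2.5]:
    assert abs(J_numeric(z) - 1.0 / (2 * z + 1)) < 1e-4

# (ii) With F(n) = lam1*log n - (m1 - 1)*log log n, the relation
# K(n) = F(n+1) - F(n) reproduces lam1/n - (m1 - 1)/(n log n).
lam1, m1 = 0.5, 2                          # illustrative values only
F = lambda n: lam1 * np.log(n) - (m1 - 1) * np.log(np.log(n))
for n in [10 ** 4, 10 ** 6]:
    K = F(n + 1) - F(n)
    approx = lam1 / n - (m1 - 1) / (n * np.log(n))
    assert abs(K - approx) / approx < 1e-2
```

In genuinely singular models the pole of J(z) sits to the right of −d/2, which is exactly the advantage of Bayes estimation stated above.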
In section 4, we show an algorithm to calculate Al and mi by using this invariance and resolution of singularities. 3 Mathematical Structure In this section, we present an outline of the proof and its mathematical structure. 3.1 Upper bound and b-function For a sufficiently small constant f > 0, we define F*(n) by F*(n) = -log ( exp( -nH(w)) tp(w) dw. lH(w)<l Then by using the Jensen's inequality, we obtain F(n) ~ F*(n). To evaluate F*(n), we need the b-function in algebraic analysis [6][7]. Sato, Bernstein, Bjork, and Kashiwara proved that, for an arbitrary analytic function H(w), there exist a differential operator D(w, aw , z) which is a polynomial for z, and a polynomial b(z) whose zeros are rational numbers on the negative part of the real axis, such that D(w, aw , z)H(wr+1 = b(z)H(wr (2) for any z E C and any w E Wl = {w E W; H(w) < f}. By using the relation eq.(2), the holomorphic function J(z) in Re(z) > 0, J(z) == ( H(w)ztp(w)dw = b(l) { H(W)Z+I D~tp(w)dw, 1 H(W)<E z 1 H(W)<l Algebraic Analysis for Non-regular Learning Machines 359 can be analytically continued to the entire complex plane as a meromorphic function whose poles are on the negative part of the real axis. The poles, which are rational numbers and ordered from the origin to the minus infinity, are referred to as -AI, -A2' -A3, ... , and their multiplicities are also referred to as ml, m2, m3, ... Let Ckm be the coefficient of the m-th order of Laurent expansion of J(z) at -Ak. Then, K m" JK(Z) == J(z) - L L (z :~:)-m k=1 m=1 (3) is holomorphic in Re(z) > -AK+l, and IJK(z)1 ---t 0 (izi ---t 00, Re(z) > -AK+l)' Let us define a function J(t) = J <5(t - H(w))cp(w)dw for 0 < t < € and J(t) = 0 for € ~ t ~ 1. Then I(t) connects the function F*(n) with J(z) by the relations, J(z) 11 t Z J(t) dt, 1 F*(n) = -log 10 exp( -nt) J(t) dt. 
The inverse Laplace transform gives the asymptotic expansion of J(t) as t ---t 0, resulting in the asymptotic expansion of F*(n), i n t dt F*(n) = -log exp(-t) 1(-) o n n = Adogn - (ml - 1) loglogn + 0(1), which is the upper bound of F(n). 3.2 Lower Bound We define a random variable A(xn) = sup 1 nl/2(H(w, xn) - H(w)) / H(W)I/2 I. (4) wEW Then, we prove in Appendix that there exists a constant Co which is independent of n such that Ex n {A(xn)2} < Co. (5) By using an inequality ab ~ (a2 + b2)/2, 1 nH(w,xn) ~ nH(w) - A(xn)(nH(w))1/2 ~ "2{nH(w) - A(xn)2}, which derives a lower bound, (6) 360 S. Watanabe The first term in eq.(6) is bounded. Let the second term be F*(n) , then -IOg(Zl + Z2) r exp( - nH(w)) <p(w)dw ~ const. n-)'I (log n)m 1 -1 JH(W)<€ 2 r nH(w) nE JH(W)?€ exp(2 ) <p(w)dw ~ exp(-2)' which proves the lower bound of F( n), F(n) ~ >'llogn - (m1 - 1) loglogn + canst. 4 Resolution of Singularities In this section, we construct a method to calculate >'1 and mI. First of all, we cover the compact set Wo with a finite union of open sets Wo,. In other words, Wo C Uo, Wo,. Hironaka's resolution of singularities [5J [2J ensures that, for an arbitrary analytic function H(w), we can algorithmically find an open set Uo, C Rd (Uo, contains the origin) and an analytic function go, : Uo, ~ W o, such that H(go,(u)) = a(u) U~I U~2 ... U~ d (u E Uo, ) (7) where a( u) > 0 is a positive function and ki ~ 0 (1 ~ i ~ d) are even integers (a( u) and k i depend on Uo,). Note that Jacobian Ig~(u) 1 = 0 if and only if u E g~l(WO). finite ( ()) I I ( ) I """' PI P2 P,l + R( ) <p go, u go, U = ~ CPI ,P2'''',Pd u1 u2 .. , ud u , By combining eq.(7) with eq.(8), we obtain lo,(z) r H(w)z<p(w) Jwc> 1 a(u) {U~I U~2 .. . U~d V Ufl U~2 .. 'U~d dUl dU2 .. · dUd. U'" For real z, maxo, lo,(z) ~ l(z) ~ Lo, lo,(z), >'1 = min min min 0, (PI , ... ,Pd) l::;q::;d and m1 is equal to the number of q which attains the minimum, min. l::;q::;d (8) Remark. 
In a neighborhood of Wo E Wo, the analytic function H(w) is equivalent to a polynomial H Wo ( w ), in other words, there exists constants C1, C2 > 0 such that c1Hwo(w) ~ H(w) ~ C2Hwo(W) . Hironaka's theorem constructs the resolution map go, for any polynomial H Wo (w) algorithmically in the finite procedures ( blowingups for nonsingular manifolds in singularities are recursively applied [5]). From the above discussion, we obtain an inequality, 1 ~ m ~ d. Moreover there exists 'Y > 0 such that H(w) ~ 'Ylw - wol2 in the neighborhood of Wo E Wo, we obtain >'1 ~ d/2. Example. Let us consider a model (x, y) E R2 and w = (a, b, c, d) E R4 , p(x, ylw) ?jJ(x, a, b, c, d) 1 1 2 = Po(x) (271')1/2 exp(-"2(Y ?jJ(x, w)) ), atanh(bx) + ctanh(dx), Algebraic Analysis for Non-regular Learning Machines 361 where Po(x) is a compact support probability density (not estimated). We also assume that the true regression function is y = 'I/J(x, 0, 0, 0,0). The set of true parameters is Wo = {Ex'I/J(X, a, b, c, d)2 = O} = {ab + cd = 0 and ab3 + cd3 = O}. Assumptions (A.1),(A.2), and (A.3) are satisfied. The singularity in Wo which gives the smallest Al is the origin and the average loss function in the neighborhood WO of the origin is equivalent to the polynomial Ho(a, b, c, d) = (ab+cd)2 + (ab3 +cd3)2, (see[9]). Using blowing-ups, we find a map 9 : (x, y, z, w) t-+ (a, b, c, d), a = x, b = y3w - yzw, C = zwx, d = y, by which the singularity at the origin is resolved. J(z) r Ho(a, b, c, d)z<p(a, b, c, d)da db de dd iwo J { x2y6w2[1 + (z + w2(y2 - z)3)2JYlxy3wl<p(g(x, y, z, w)) dxdydzdw, which shows that Al = 2/3 and ml = 1, resulting that F(n) = (2/3) logn + Const. If the generalization error can be asymptotically expanded, then K(n) ~ (2/3n). 5 Conclusion Mathematical foundation for non-identifiable learning machines is constructed based on algebraic analysis and algebraic geometry. We obtained both the rigorous asymptotic form of the stochastic complexity and an algorithm to calculate it. 
Appendix In the appendix, we show the inequality eq.(5). Lemma 1 Assume conditions (A.1), (A.2) and (A.3). Then 1 n Exn {sup I r.;; L [ h(Xi, w) - Ex h(X, w) J 12} < 00. wEW yn i=1 This lemma is proven by using just the same method as [10]. In order to prove (5), we divide 'SUPwEW' in eq.(4) into 'SUPH(w)2':(' and'suPH(w)«'. Finiteness of the first half is directly proven by Lemma 1. Let us prove the second half is also finite. We can assume without loss of generality that w is in the neighborhood of Wo E Wo, because W can be covered by a finite union of neighborhoods. In each neighborhood, by using Taylor expansion of an analytic function, we can find functions {fj(x,w)} and {gj(w) = TIi(Wi -WOi)a;} such that J h(x, w) = L gj(w)fj(x, w), j=1 (9) where {fj(x, wo)} are linearly independent functions of x and gj(wo) = O. Since gj(w)fj(x, w) is a part of Taylor expansion among Wo, fJ(x, w) satisfies 1 n Exn {:~X-< I Vn ~(fj(Xi' w) - ExfJ(X, W))12} < 00. (10) 362 S. Watanabe By using a definition H(w) == IH(w, xn) - H(w)l, 1 n J 2 I;:;: L {L 9j (w)(!i (Xi, w) - Ex !j(X, w))}1 i=1 j=1 J J 1 n L9j(w)2 L{;:;: L(fj(Xi, w) - Ex !j(X, w))}2 < j=1 j=1 i=1 where we used Cauchy-Schwarz's inequality. On the other hand, the inequality log x :2: (1/2)(logx)2 X + 1 (x > 0) shows that J H(w) = J q(x) log tX )) dx :2: ~ J q(x)(log tX )) )2dx :2: a2 0 L 9j(w)2 px,w 2 px,w j=l where ao > 0 is the smallest eigen value of the positive definite symmetric matrix Ex {!i(X, WO)!k(X, wo)}. Lastly, combining A 2 J n n nH(w) ao "" 1 "" 2 A(X ) = :~K,< H(w) ::; 2 :~K,< ~ {Vn f=t(fj(Xi, w) - Ex !j(X, w))} with eq.(lO), we obtain eq.(5). Acknowledgments This research was partially supported by the Ministry of Education, Science, Sports and Culture in Japan, Grant-in-Aid for Scientific Research 09680362. References [1] Amari,S., Murata, N.(1993) Statistical theory of learning curves under entropic loss. Neural Computation, 5 (4) pp.140-153. [2] Atiyah, M.F. 
(1970) Resolution of singularities and division of distributions. Comm. Pure and Appl. Math., 13 pp.145-150. [3] Fukumizu,K. (1999) Generalization error of linear neural networks in unidentifiable cases. Lecture Notes in Computer Science, 1720 Springer, pp.51-62. [4] Hagiwara,K., Toda,N., Usui,S. (1993) On the problem of applying Ale to determine the structure of a layered feed-forward neural network. Proc. of IJCNN, 3 pp.2263-2266. [5] Hironaka, H. (1964) Resolution of singularities of an algebraic variety over a field of characteristic zero, I,ll. Annals of Math., 79 pp.109-326. [6] Kashiwara, M. (1976) B-functions and holonomic systems, Invent. Math., 38 pp.33-53. [7] Oaku, T. (1997) An algorithm of computing b-funcitions. Duke Math. J., 87 pp.115132. [8] Sato, M., Shintani,T. (1974) On zeta functions associated with prehomogeneous vector space.Annals of Math., 100, pp.131-170. [9] Watanabe, S.(1998) On the generalization error by a layered statistical model with Bayesian estimation. IEICE Trans., J81-A pp.1442-1452. English version: Elect. Comm. in Japan., to appear. [10] Watanabe, S. (1999) Algebraic analysis for singular statistical estimation. Lecture Notes in Computer Science, 1720 Springer, pp.39-50.
|
1999
|
64
|
1,714
|
Model selection in clustering by uniform convergence bounds* Joachim M. Buhmann and Marcus Held Institut flir Informatik III, RomerstraBe 164, D-53117 Bonn, Germany {jb,held}@cs.uni-bonn.de Abstract Unsupervised learning algorithms are designed to extract structure from data samples. Reliable and robust inference requires a guarantee that extracted structures are typical for the data source, Le., similar structures have to be inferred from a second sample set of the same data source. The overfitting phenomenon in maximum entropy based annealing algorithms is exemplarily studied for a class of histogram clustering models. Bernstein's inequality for large deviations is used to determine the maximally achievable approximation quality parameterized by a minimal temperature. Monte Carlo simulations support the proposed model selection criterion by finite temperature annealing. 1 Introduction Learning algorithms are designed to extract structure from data. Two classes of algorithms have been widely discussed in the literature - supervised and unsupervised learning. The distinction between the two classes depends on supervision or teacher information which is either available to the learning algorithm or missing. This paper applies statistical learning theory to the problem of unsupervised learning. In particular, error bounds as a protection against overfitting are derived for the recently developed Asymmetric Clustering Model (ACM) for co-occurrence data [6]. These theoretical results show that the continuation method "deterministic annealing" yields robustness of the learning results in the sense of statistical learning theory. The computational temperature of annealing algorithms plays the role of a control parameter which regulates the complexity of the learning machine. Let us assume that a hypothesis class 1£ of loss functions h(x; a) is given. These loss functions measure the quality of structures in data. 
The complexity of 1£ is controlled by coarsening, i.e., we define a 'Y-cover of 1£. Informally, the inference principle advocated by us performs learning by two inference steps: (i) determine the optimal approximation level l' for consistent learning (in terms of large risk deviations); (ii) given the optimal approximation level 1', average over all hypotheses in an appropriate neighborhood of the empirical minimizer. The result of the inference *This work has been supported by the German Israel Foundation for Science and Research Development (GIF) under grant #1-0403-001.06/95. Model Selection in Clustering by Uniform Convergence Bounds 217 procedure is not a single hypothesis but a set of hypotheses. This set is represented either by an average of loss functions or, alternatively, by a typical member of this set. This induction approach is named Empirical Risk Approximation (ERA) [2]. The reader should note that the learning algorithm has to return an average structure which is typical in a 'Y-cover sense but it is not supposed to return the hypothesis with minimal empirical risk as in Vapnik's "Empirical Risk Minimization" (ERM) induction principle for classification and regression [9]. The loss function with minimal empirical risk is usually a structure with maximal complexity, e.g., in clustering the ERM principle will necessarily yield a solution with the maximal number of clusters. The ERM principle, therefore, is not suitable as a model selection principle to determine the number of clusters which are stable under sample fluctuations. The ERA principle with its approximation accuracy 'Y solves this problem by controlling the effective complexity of the hypothesis class. In spirit, this approach is similar to the Gibbs-algorithm presented for example in [3]. 
The Gibbs-algorithm samples a random hypothesis from the version space to predict the label of the 1 + lth data point Xl+!o The version space is defined as the set of hypotheses which are consistent with the first 1 given data points. In our approach we use an alternative definition of consistency, where all hypothesis in an appropriate neighborhood of the empirical minimizer define the version space (see also [4]). Averaging over this neighborhood yields a structure with risk equivalent to the expected risk obtained by random sampling from this set of hypotheses. There exists also a tight methodological relationship to [7] and [4] where learning curves for the learning of two class classifiers are derived using techniques from statistical mechanics. 2 The Empirical Risk Approximation Principle The data samples Z = {zr E 0, 1 ~ r ~ l} which have to be analyzed by the unsupervised learning algorithm are elements of a suitable object (resp. feature) space O. The samples are distributed according to a measure J.L which is not assumed to be known for the analysis.l A mathematically precise statement of the ERA principle requires several definitions which formalize the notion of searching for structure in the data. The quality of structures extracted from the data set Z is evaluated by the empirical risk R(a; Z) := t 2:~=1 h(zr; a) of a structure a given the training set Z. The function h(z; a) is known as loss function in statistics. It measures the costs for processing a generic datum z with model a. Each value a E A parameterizes an individual loss function with A denoting the set of possible parameters. The loss function which minimizes the empirical risk is denoted by &1. := arg minaEA R( a; Z). The relevant quality measure for learning is the expected risk R(a) .In h(z; a) dJ.L(z). The optimal structure to be inferred from the data is a1. .argminaEA R(a). 
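The objects just introduced can be made concrete with a toy instance (all specifics below are illustrative, not from the paper): Gaussian data, squared loss h(z; α) = (z − α)², a parameter interval covered by a finite grid playing the role of Λ_γ. The sketch checks that the empirical risk converges uniformly over the cover to the expected risk R(α) = 1 + α², and that the empirical minimizer α̂⊥ approaches the true minimizer α⊥ = 0.

```python
import numpy as np

# Toy ERA setup: z ~ N(0,1), h(z; a) = (z - a)^2, A = [-1, 1],
# gamma-cover = a finite grid of 21 prototypes (illustrative values).
rng = np.random.default_rng(3)
A_gamma = np.linspace(-1, 1, 21)

def sup_deviation(l):
    """max_a |R_hat(a; Z) - R(a)| over the cover, for a sample of size l."""
    z = rng.normal(size=l)
    emp = ((z[:, None] - A_gamma) ** 2).mean(axis=0)   # R_hat(a; Z)
    exp_risk = 1.0 + A_gamma ** 2                      # R(a) in closed form
    return np.abs(emp - exp_risk).max()

devs = [np.mean([sup_deviation(l) for _ in range(20)]) for l in (100, 10_000)]
assert devs[1] < devs[0]      # uniform deviation shrinks with sample size

# the empirical minimizer over the cover approaches the true minimizer a = 0
z = rng.normal(size=100_000)
emp = ((z[:, None] - A_gamma) ** 2).mean(axis=0)
assert abs(A_gamma[np.argmin(emp)]) <= 0.2
```

The next paragraphs quantify exactly this trade-off: how fine the cover may be before the uniform deviation is no longer controlled.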
The distribution μ is assumed to decay sufficiently fast, with bounded rth moments E_μ{|h(z; α) − R(α)|^r} ≤ r! τ^{r−2} V_μ{h(z; α)}, ∀α ∈ Λ (r > 2). E_μ{·} and V_μ{·} denote the expectation and variance of a random variable, respectively; τ is a distribution-dependent constant. ERA requires the learning algorithm to determine a set of hypotheses on the basis of the finest consistently learnable cover of the hypothesis class. Given a learning accuracy γ, a subset of parameters Λ_γ = {α_1, ..., α_{|Λ_γ|−1}} ∪ {α̂⊥} can be defined such that the hypothesis class H is covered by the function balls with index sets B_γ(α) := {α' : ∫_Ω |h(z; α') − h(z; α)| dμ(z) ≤ γ}, i.e. Λ ⊆ ∪_{α∈Λ_γ} B_γ(α).

¹Knowledge of covering numbers is required in the following analysis, which is a weaker type of information than complete knowledge of the probability measure μ (see also [5]).

J. M. Buhmann and M. Held

The empirical minimizer α̂⊥ has been added to the cover to simplify bounding arguments. Large deviation theory is used to determine the approximation accuracy γ for learning a hypothesis from the hypothesis class H. The expected risk of the empirical minimizer exceeds the global minimum of the expected risk R(α⊥) by εσ_τ with a probability bounded by Bernstein's inequality [8]:

P { sup_{α∈Λ_γ} |R(α; Z) − R(α)| ≥ (1/2)(εσ_τ − γ) } ≤ 2|Λ_γ| exp( −l (ε − γ/σ_τ)² / (8 + 4τ(ε − γ/σ_τ)) ) = δ.   (1)

The complexity |Λ_γ| of the coarsened hypothesis class has to be small enough to guarantee small ε-deviations with high confidence.² This large deviation inequality weighs two competing effects in the learning problem: the probability of a large deviation decreases exponentially with growing sample size l, whereas a large deviation becomes increasingly likely with growing cardinality of the γ-cover of the hypothesis class. According to (1) the sample complexity l₀(γ, ε, δ) is defined by

log |Λ_γ| − l₀ (ε − γ/σ_τ)² / (8 + 4τ(ε − γ/σ_τ)) + log(2/δ) = 0.
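Reading the sample-complexity condition as log|Λ_γ| − l₀u²/(8 + 4τu) + log(2/δ) = 0 with u = ε − γ/σ_τ (our reconstruction of the garbled equation), l₀ can be solved for in closed form; the following sketch uses hypothetical parameter values purely for illustration.

```python
import math

def sample_complexity(card_cover, eps, gamma, sigma_tau, tau, delta):
    """Solve log|A_gamma| - l0*u^2/(8 + 4*tau*u) + log(2/delta) = 0 for l0,
    with u = eps - gamma/sigma_tau. All parameter names are ours."""
    u = eps - gamma / sigma_tau
    assert u > 0, "precision eps must exceed gamma/sigma_tau"
    return (8.0 + 4.0 * tau * u) * (math.log(card_cover) + math.log(2.0 / delta)) / u**2

# A larger cover cardinality demands more samples at the same precision:
l_a = sample_complexity(card_cover=1e3, eps=0.2, gamma=0.01, sigma_tau=1.0, tau=1.0, delta=0.05)
l_b = sample_complexity(card_cover=1e6, eps=0.2, gamma=0.01, sigma_tau=1.0, tau=1.0, delta=0.05)
```

The two competing effects of inequality (1) are visible here: l₀ grows logarithmically with the cover cardinality |Λ_γ| and like 1/u² with the demanded precision.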
(2) With probability 1 - 0 the deviation of the empirical risk from the expected risk is bounded by ~ (foPta T - '1) =: 'Yapp • Averaging over a set of functions which exceed the empirical minimizer by no more than 2'Yapp in empirical risk yields an average hypothesis corresponding to the statistically significant structure in the data, i.e., R( 01.) - R( &1.) ~ R( 01. ) + 'Yapp - (R( &1.) - 'Yapp ) ~ 2'Yapp since R( 01.) ~ R( &1.) by definition. The key task in the following remains to calculate the minimal precision f( '1) as a function of the approximation '1 and to bound from above the cardinality I A')' I of the 'Y-cover for specific learning problems. 3 Asymmetric clustering model The asymmetric clustering model was developed for the analysis resp. grouping of objects characterized by co-occurrence of objects and certain feature values [6]. Application domains for this explorative data analysis approach are for example texture segmentation, statistical language modeling or document retrieval. Denote by n = X x y the product space of objects Xi EX, 1 ~ i ~ nand features Y j E y, 1 ~ j ~ j. The Xi E X are characterized by observations Z = {zr} = {(Xi(r),Yj(r)) ,T = 1, ... ,l}. The sufficient statistics of how often the object-feature pair (Xi, Y j) occurs in the data set Z is measured by the set of frequencies {'f]ij : number of observations (Xi, Yj) /total number of observations}. Derived measurements are the frequency of observi~g object Xi, i. e. 'f]i = 2:;=1 'f]ij and the frequency of observing feature Yj given object Xi, i. e. 'f]jli = 'f]ij/'f]i. The asymmetric clustering model defines a generative model of a finite mixture of component probability distributions in feature space with cluster-conditional distributions q = (qjlv) ' 1 ~ j ~ j, 1 ~ v ~ k (see [6]). We introduce indicator variables M iv E {O, 1} for the membership of object Xi in cluster v E {I, ... ,k}. 2::=1 M iv = 1 Vi : 1 ~ i ~ n enforces the uniqueness constraint for assignments. 
2The maximal standard deviation (1 T := sUPaEA-y y'V {h(z; a)} defines the scale to measure deviations of the empirical risk from the expected risk (see [2]). Model Selection in Clustering by Uniform Convergence Bounds 219 Using these variables the observed data Z are distributed according to the generative model over X x y: 1 k P {xi,YjIM,q} = - ~ Mivqjlv' (3) n L--v=1 For the analysis of the unknown data source characterized (at least approximatively) by the empirical data Z a structure 0: = (M, q) with M E {O, I} n x k has to be inferred. The aim of an ACM analysis is to group the objects Xi as coded by the unknown indicator variables M iv and to estimate for each cluster v a prototypical feature distribution qjlv' Using the loss function h(Xi' Yj; 0:) = logn 2:~=1 M iv logqjlv the maximization of the likelihood can be formulated as minimization of the empirical risk: R(o:; Z) = 2:~=1 2:;=11}ijh(xi, Yj; 0:), where the essential quantity to be minimized is the expected risk: R(o:) = 2:~=1 2:;=1 ptrue {Xi, Yj} h(Xi' Yj; 0:). Using the maximum entropy principle the following annealing equations are derived [6]: A 2:~1 (Miv)1}ij _ ~n (Miv)1}i (4) qjlv "n (M ) - L--. "n (M )1}j1i, wi=1 iv t=1 wh=1 hv exp [.8 2:;=1 1}jli log Q]lv ] The critical temperature: Due to the limited precision of the observed data it is natural to study histogram clustering as a learning problem with the hypothesis class 1£ = {-2:vMivlogqjlv :Miv E {0,1} /\ 2:vMiv = 1/\ Qjlv E H,t, .. · ,1}/\ 2:j qjlv = I}. The limited number of observations results in a limited precision of the frequencies 1}jli' The value Q;lv = 0 has been excluded since it causes infinite expected risk for ptrue {Yj IXi} > O. The size of the regularized hypothesis class A-y can be upper bounded by the cardinality of the complete hypothesis class divided by the minimal cardinality of a 'Y-function ball centered at a function of the 'Y-cover A-y, i. e. IA-yl ~ 11£1/!llin IB-y(&)I. 
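A minimal implementation sketch of the annealing equations (4), alternating the expected assignments ⟨M_iv⟩ at inverse temperature β with the prototype update; the implementation details, variable names, and toy data are ours.

```python
import numpy as np

def acm_anneal(eta, k, beta, n_iter=100, seed=0):
    """Annealed EM sketch for the asymmetric clustering model, iterating the
    fixed-point equations (4).

    eta : (n, f) matrix of observed co-occurrence frequencies eta_ij (sums to 1)
    """
    rng = np.random.default_rng(seed)
    n, f = eta.shape
    eta_i = eta.sum(axis=1, keepdims=True)       # object marginals eta_i
    eta_cond = eta / eta_i                       # conditional frequencies eta_{j|i}
    q = rng.dirichlet(np.ones(f), size=k).T      # (f, k) cluster prototypes q_{j|v}
    for _ in range(n_iter):
        # E-step: expected assignments <M_iv> at inverse temperature beta
        logits = beta * eta_cond @ np.log(q)
        logits -= logits.max(axis=1, keepdims=True)
        m = np.exp(logits)
        m /= m.sum(axis=1, keepdims=True)
        # M-step: q_{j|v} = sum_i <M_iv> eta_ij / sum_i <M_iv> eta_i
        num = eta.T @ m
        q = num / num.sum(axis=0, keepdims=True)
    return q, m

# Two groups of objects with clearly different feature histograms
counts = np.array([[8., 1., 1.], [7., 2., 1.], [1., 1., 8.], [1., 2., 7.]])
eta = counts / counts.sum()
q_hat, m_hat = acm_anneal(eta, k=2, beta=5.0)
```

Lowering the temperature (raising β) hardens the assignments ⟨M_iv⟩ toward 0/1, which is where the phase transitions discussed below occur.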
oEA'T The cardinality of a function ball with radius 'Y can be approximated by adopting techniques from asymptotic analysis [1] (8 (x) = g for x ~ 0): IB-y(5)1 = L L 8 ('Y - L ~ptrue {Yj IXi} IIOg ~~I~(i) I) (6) M { . } i J' %Im(t) q,lo • and the entropy S is given by S(q,Q,x) = 'Yx - Lv Qv (L j qjlv -1) + .!. ~ ,log ~ exp (-x ~ , ptrue {Yj IXi} IIOg _ Qjlp I). (7) n L--, L--p L--J %Im(i) The auxiliary variables Q = {Q v } ~=1 are Lagrange parameters to enforce the normalizations 2:j qjlv = 1. Choosing %10 = qjlm(i) Vm(i) = 0:, we obtain an approximation of the integral. The reader should note that a saddlepoint approximation in 220 J. M Buhmann and M Held the usual sense is only applicable for the parameter x but will fail for the q, Q parameters since the integrand is maximal at the non-differentiability point of the absolute value function. We, therefore, expand S (q, Q,x) up to linear terms 0 (q - q) and integrate piece-wise. Using the abbreviation Kill := Lj ptrue {Yj Ixd IIog qj~:~i) I the following saddle point approximation for the integral over x is obtained: 1 I:n I:k • exp ( -XKia) , = -. Pij.£Kjlj.£ wIth Pia = L (~)" n t=1 j.£=1 j.£ exp -XKij.£ (8) The entropy S evaluated at q = q yields in combination with the Laplace approximation [1] an estimate for the cardinality of the ,-cover log I A')' I = n (log k - S) + -21 I:. KipP ip (I: P illKill KiP) x2 t,p II (9) where the second term results from the second order term of the Taylor expansion around the saddle point. Inserting this complexity in equation (2) yields an equation which determines the required number of samples 10 for a fixed precision f and confidence o. This equation defines a functional relationship between the precision f and the approximation quality, for fixed sample size 10 and confidence o. Under this assumption the precision f depends on , in a non-monotone fashion, i. e. (10) using the abbreviation C = log I A')' I + log~. 
The minimum of the function €(,) defines a compromise between uncertainty originating from empirical fluctuations and the loss of precision due to the approximation by a ,-cover. Differentiating with respect to , and setting the result to zero (df(T)/d, = 0) yields as upper bound for the inverse temperature: ~ 1 10 ( 10+C7"2 )-1 x < 7" + -;:;~:;;;==~iiT (1T 2n V210C + 7"2C2 (11) Analogous to estimates of k-means, phase-transitions occur in ACM while lowering the temperature. The mixture model for the data at hand can be partitioned into more and more components, revealing finer and finer details of the generation process. The critical xopt defines the resolution limit below which details can not be resolved in a reliable fashion on the basis of the sample size 10 . Given the inverse temperature x the effective cardinality of the hypothesis class can be upper bounded via the solution of the fix point equation (8). On the other hand this cardinality defines with (11) and the sample size lo an upper bound on x. Iterating these two steps we finally obtain an upper bound for the critical inverse temperature given a sample size 10. Empirical Results: For the evaluation of the derived theoretical result a series of Monte-Carlo experiments on artificial data has been performed for the asymmetric clustering model. Given the number of objects n = 30, the number of groups k = 5 and the size of the histograms f = 15 the generative model for this experiments was created randomly and is summarized in fig. 1. From this generative model sample sets of arbitrary size can be generated and the true distributions ptrue {Yj IXi} can be calculated. In figure 2a,b the predicted temperatures are compared to the empirically observed critical temperatures, which have been estimated on the basis of 2000 different samples of randomly generated co-occurrence data for each 10. 
The expected risk (solid) and empirical risk (dashed) of these 2000 inferred models are averaged.

Figure 1: Generative ACM model for the Monte-Carlo experiments.
v : q_{j|v}
1 : {0.11, 0.01, 0.11, 0.07, 0.08, 0.04, 0.06, 0, 0.13, 0.07, 0.08, 0.1, 0, 0.11, 0.03}
2 : {0.18, 0.1, 0.09, 0.02, 0.05, 0.09, 0.08, 0.03, 0.06, 0.07, 0.03, 0.02, 0.07, 0.06, 0.05}
3 : {0.17, 0.05, 0.05, 0.06, 0.06, 0.05, 0.03, 0.11, 0.09, 0, 0.02, 0.1, 0.03, 0.07, 0.11}
4 : {0.15, 0.07, 0.1, 0.03, 0.09, 0.03, 0.04, 0.05, 0.06, 0.05, 0.08, 0.04, 0.08, 0.09, 0.04}
5 : {0.09, 0.09, 0.07, 0.1, 0.07, 0.06, 0.06, 0.11, 0.07, 0.07, 0.1, 0.02, 0.07, 0.02, 0}
m(i) = (5,3,2,5,2,2,5,4,2,2,2,4,1,5,3,5,3,4,1,2,2,3,1,1,2,5,5,2,2,1)

Overfitting sets in when the expected risk rises as a function of the inverse temperature x. Figure 2c indicates that on average the minimal expected risk is assumed when the effective number of clusters is smaller than or equal to 5, i.e. the number of clusters of the true generative model. Predicting the right computational temperature, therefore, also enables the data analyst to solve the cluster validation problem for the asymmetric clustering model. Especially for l₀ = 800 the sample fluctuations do not permit the estimate of five clusters, and the minimal computational temperature prevents such an inference result. On the other hand, for l₀ = 1600 and l₀ = 2000 the minimal temperature prevents the algorithm from inferring too many clusters, which would be an instance of overfitting. As an interesting point one should note that for an infinite number of observations the critical inverse temperature reaches a finite positive value and not more than the five effective clusters are extracted. At this point we conclude that, for the case of histogram clustering, the Empirical Risk Approximation solves for realizable rules the problem of model validation, i.e. choosing the right number of clusters.
Figure 2d summarizes predictions of the critical temperature on the basis of the empirical distribution η_ij rather than the true distribution p^true{x_i, y_j}. The empirical distribution has been generated by a training sample set, with x of eq. (11) being used as a plug-in estimator. The histogram depicts the predicted inverse temperature for l₀ = 1200. The average of these plug-in estimators is equal to the predicted temperature for the true distribution. The estimates of x are biased towards too small inverse temperatures due to correlations between the parameter estimates and the stopping criterion. It is still an open question and focus of ongoing work to rigorously bound the variance of this plug-in estimator. Empirically we observe a reduction of the variance of the expected risk occurring at the predicted temperature for higher sample sizes l₀.

4 Conclusions

The two conditions, that the empirical risk has to converge uniformly towards the expected risk and that all loss functions within a 2γ_app-range of the global empirical risk minimum have to be considered in the inference process, limit the complexity of the underlying hypothesis class for a given number of samples. The maximum entropy method which has been widely employed in deterministic annealing procedures for optimization problems is substantiated by our analysis. Solutions with too many clusters clearly overfit the data and do not generalize. The condition that the hypothesis class should only be divided into function balls of size γ forces us to stop the stochastic search at the lower bound of the computational temperature. Another important result of this investigation is the fact that choosing the right stopping temperature for the annealing process not only avoids overfitting but also solves the cluster validation problem in the realizable case of ACM. A possible inference of too many clusters using the empirical risk functional is suppressed.
Figure 2: Comparison between the theoretically derived upper bound on x and the observed critical temperatures (minimum of the expected risk vs. x curve). Depicted are the plots for l₀ = 800, 1200, 1600, 2000. Vertical lines indicate the predicted critical temperatures. The average effective number of clusters is drawn in part c. In part d the distribution of the plug-in estimates is shown for l₀ = 1200.

References
[1] N. G. de Bruijn. Asymptotic Methods in Analysis. North-Holland Publishing Co. (repr. Dover), Amsterdam, 1958 (1981).
[2] J. M. Buhmann. Empirical risk approximation. Technical Report IAI-TR 98-3, Institut für Informatik III, Universität Bonn, 1998.
[3] D. Haussler, M. Kearns, and R. Schapire. Bounds on the sample complexity of Bayesian learning using information theory and the VC dimension. Machine Learning, 14(1):83-113, 1994.
[4] D. Haussler, M. Kearns, H. S. Seung, and N. Tishby. Rigorous learning curve bounds from statistical mechanics. Machine Learning, 25:195-236, 1997.
[5] D. Haussler and M. Opper. Mutual information, metric entropy and cumulative relative entropy risk. Annals of Statistics, December 1996.
[6] T. Hofmann, J. Puzicha, and M. I. Jordan. Learning from dyadic data. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11. MIT Press, 1999. To appear.
[7] H. S. Seung, H. Sompolinsky, and N. Tishby. Statistical mechanics of learning from examples. Physical Review A, 45(8):6056-6091, April 1992.
[8] A. W. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes. Springer-Verlag, New York, 1996.
[9] V. N. Vapnik. Statistical Learning Theory. Wiley-Interscience, New York, 1998.
Unmixing Hyperspectral Data

Lucas Parra, Clay Spence, Paul Sajda
Sarnoff Corporation, CN-5300, Princeton, NJ 08543, USA
{lparra,cspence,psajda}@sarnoff.com

Andreas Ziehe, Klaus-Robert Müller
GMD FIRST.IDA, Kekuléstr. 7, 12489 Berlin, Germany
{ziehe,klaus}@first.gmd.de

Abstract

In hyperspectral imagery one pixel typically consists of a mixture of the reflectance spectra of several materials, where the mixture coefficients correspond to the abundances of the constituting materials. We assume linear combinations of reflectance spectra with some additive normal sensor noise and derive a probabilistic MAP framework for analyzing hyperspectral data. As the material reflectance characteristics are not known a priori, we face the problem of unsupervised linear unmixing. The incorporation of different prior information (e.g. positivity and normalization of the abundances) naturally leads to a family of interesting algorithms, for example in the noise-free case yielding an algorithm that can be understood as constrained independent component analysis (ICA). Simulations underline the usefulness of our theory.

1 Introduction

Current hyperspectral remote sensing technology can form images of ground surface reflectance at a few hundred wavelengths simultaneously, with wavelengths ranging from 0.4 to 2.5 μm and spatial resolutions of 10-30 m. The applications of this technology include environmental monitoring and mineral exploration and mining. The benefit of hyperspectral imagery is that many different objects and terrain types can be characterized by their spectral signature. The first step in most hyperspectral image analysis systems is to perform a spectral unmixing to determine the original spectral signals of some set of prime materials. The basic difficulty is that for a given image pixel the spectral reflectance patterns of the surface materials are in general not known a priori.
However, there are general physical and statistical priors which can be exploited to potentially improve spectral unmixing. In this paper we address the problem of unmixing hyperspectral imagery through incorporation of physical and statistical priors within an unsupervised Bayesian framework. We begin by first presenting the linear superposition model for the measured reflectances. We then discuss the advantages of unsupervised over supervised systems. We derive a general maximum a posteriori (MAP) framework to find the material spectra and infer the abundances. Interestingly, depending on how the priors are incorporated, the zero noise case yields (i) a simplex approach or (ii) a constrained ICA algorithm. Assuming non-zero noise, our MAP estimate utilizes a constrained least squares algorithm. The two latter approaches are new algorithms, whereas the simplex algorithm has been previously suggested for the analysis of hyperspectral data.

Linear Modeling

To a first approximation, the intensities X (x_{iλ}) measured in each spectral band λ = 1, ..., L for a given pixel i = 1, ..., N are linear combinations of the reflectance characteristics S (s_{mλ}) of the materials m = 1, ..., M present in that area. Possible errors of this approximation and sensor noise are taken into account by adding a noise term N (n_{iλ}). In matrix form this can be summarized as

X = AS + N, subject to: A 1_M = 1, A ≥ 0,   (1)

where matrix A (a_{im}) represents the abundance of material m in the area corresponding to pixel i, with positivity and normalization constraints. Note that ground inclination or a changing viewing angle may cause an overall scale factor for all bands that varies with the pixels. This can be incorporated in the model by simply replacing the constraint A 1_M = 1 with A 1_M ≤ 1, which does not affect the discussion in the remainder of the paper. This is clearly a simplified model of the physical phenomena.
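As a toy illustration of the linear mixing model of eq. (1), synthetic data obeying the positivity and normalization constraints can be generated as follows (sizes and spectra are arbitrary stand-ins, not the data used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_mat, n_bands = 100, 3, 50            # N pixels, M materials, L bands (toy sizes)

# Endmember spectra S: smooth, positive toy curves standing in for real reflectances
S = np.abs(rng.normal(size=(n_mat, n_bands))).cumsum(axis=1)
S /= S.max(axis=1, keepdims=True)

# Abundances A: every pixel's row lies on the simplex (A >= 0, rows sum to one)
A = rng.dirichlet(np.ones(n_mat), size=n_pix)

# Observations: linear mixing plus additive normal sensor noise, X = A S + N
X = A @ S + 0.01 * rng.normal(size=(n_pix, n_bands))
```

Data of exactly this form is what the supervised and unsupervised techniques discussed next attempt to invert.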
For example, with spatially fine grained mixtures, called intimate mixtures, multiple reflectance may causes departures from this first order model. Additionally there are a number of inherent spatial variations in real data, such as inhomogeneous vapor and dust particles in the atmosphere, that will cause a departure from the linear model in equation (1). Nevertheless, in practical applications a linear model has produced reasonable results for areal mixtures. Supervised vs. Unsupervised techniques Supervised spectral un mixing relies on the prior knowledge about the reflectance patterns S of candidate surface materials, sometimes called endmembers, or expert knowledge and a series of semiautomatic steps to find the constituting materials in a particular scene. Once the user identifies a pixel i containing a single material, i.e. aim = 1 for a given m and i, the corresponding spectral characteristics of that material can be taken directly from the observations, i.e., 8 m >. = Xi>. [4]. Given knowledge about the endmembers one can simply find the abundances by solving a constrained least squares problem. The problem with such supervised techniques is that finding the correct S may require substantial user interaction and the result may be error prone, as a pixel that actually contains a mixture can be misinterpreted as a pure endmember. Another approach obtains endmembers directly from a database. This is also problematic because the actual surface material on the ground may not match the database entries, due to atmospheric absorption or other noise sources. Finding close matches is an ambiguous process as some endmembers have very similar reflectance characteristics and may match several entries in the database. Unsupervised unmixing, in contrast, tries to identify the endmembers and mixtures directly from the observed data X without any user interaction. There are a variety of such approaches. In one approach a simplex is fit to the data distribution [7, 6, 2]. 
The resulting vertex points of the simplex represent the desired endmembers, but this technique is very sensitive to noise as a few boundary points can potentially change the location of the simplex vertex points considerably. Another approach by Szu [9] tries to find abundances that have the highest entropy subject to constraints that the amount of materials is as evenly distributed as possible - an assumption 944 L. Parra, C. D. Spence, P Sajda, A. Ziehe and K.-R. Muller which is clearly not valid in many actual surface material distributions. A relatively new approach considers modeling the statistical information across wavelength as statistically independent AR processes [1]. This leads directly to the contextual linear leA algorithm [5]. However, the approach in [1] does not take into account constraints on the abundances, noise, or prior information. Most importantly, the method [1] can only integrate information from a small number of pixels at a time (same as the number of endmembers). Typically however we will have only a few endmembers but many thousand pixels. 2 The Maximum A Posterior Framework 2.1 A probabilistic model of unsupervised spectral unmixing Our model has observations or data X and hidden variables A, S, and N that are explained by the noisy linear model (1). We estimate the values of the hidden variables by using MAP (A SIX) = p(XIA, S)p(A, S) = Pn(XIA, S)Pa(A)ps(S) p , p(X) p(X) (2) with Pa(A), Ps(S), Pn(N) as the a priori assumptions of the distributions. With MAP we estimate the most probable values for given priors after observing the data, A MAP, SMAP = argmaxp(A, SIX) (3) A,S Note that for maximization the constant factor p(X) can be ignored. Our first assumption, which is indicated in equation (2) is that the abundances are independent of the reflectance spectra as their origins are completely unrelated: (AO) A and S are independent. 
The MAP algorithm is entirely defined by the choices of priors that are guided by the problem of hyperspectral unmixing: (AI) A represent probabilities for each pixel i. (A2) S are independent for different material m. (A3) N are normal i.i.d. for all i, A. In summary, our MAP framework includes the assumptions AO-A3. 2.2 Including Priors Priors on the abundances Positivity and normalization of the abundances can be represented as, (4) where 60 represent the Kronecker delta function and eo the step function. With this choice a point not satisfying the constraint will have zero a posteriori probability. This prior introduces no particular bias of the solutions other then abundance constraints. It does however assume the abundances of different pixels to be independent. Prior on spectra Usually we find systematic trends in the spectra that cause significant correlation. However such an overall trend can be subtracted and/or filtered from the data leaving only independent signals that encode the variation from that overall trend. For example one can capture the conditional dependency structure with a linear auto-regressive (AR) model and analyze the resulting "innovations" or prediction errors [3]. In our model we assume that the spectra represent independent instances of an AR process having a white innovation process em.>. distributed according to Pe(e). With a Toeplitz matrix T of the AR coefficients we Unmixing Hyperspectral Data 945 can write, em = Sm T. The AR coefficients can be found in a preprocessing step on the observations X. If S now represents the innovation process itself, our prior can be represented as, M L L Pe (S) <X Pe(ST) = II II Pe( L sm>.d>.>.,) , (5) m=1 >.=1 >.'=1 Additionally Pe (e) is parameterized by a mean and scale parameter and potentially parameters determining the higher moments of the distributions. For brevity we ignore the details of the parameterization in this paper. 
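The AR detrending step behind the spectral prior can be sketched as follows (our implementation and variable names; as described above, the AR coefficients would be estimated from the observations X in a preprocessing step):

```python
import numpy as np

def ar_innovations(s, p):
    """Least-squares AR(p) fit of a spectrum s and its prediction errors
    ("innovations"), i.e. the part of s not explained by the systematic trend."""
    T = len(s)
    # Lagged design matrix: predict s[t] from s[t-1], ..., s[t-p]
    Z = np.column_stack([s[p - j - 1:T - j - 1] for j in range(p)])
    a, *_ = np.linalg.lstsq(Z, s[p:], rcond=None)
    return a, s[p:] - Z @ a

# Toy spectrum with a strong AR(1) trend: the innovations are much whiter
rng = np.random.default_rng(0)
e_true = rng.normal(size=400)
s = np.empty(400)
s[0] = e_true[0]
for t in range(1, 400):
    s[t] = 0.9 * s[t - 1] + e_true[t]
a_hat, innov = ar_innovations(s, p=1)
```

The recovered coefficient is close to the true 0.9, and the innovation sequence has a much smaller variance than the trended spectrum, which is exactly why the prior is placed on the innovations rather than on the raw spectra.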
Prior on the noise As outlined in the introduction there are a number of problems that can cause the linear model X = AS to be inaccurate (e.g. multiple reflections, inhomogeneous atmospheric absorption, and detector noise.) As it is hard to treat all these phenomena explicitly, we suggest to pool them into one noise variable that we assume for simplicity to be normal distributed with a wavelength dependent noise variance a>., L p(XIA, S) = Pn(N) = N(X - AS,~) = II N(x>. - As>., a>.l) , (6) >.=1 where N (', .) represents a zero mean Gaussian distribution, and 1 the identity matrix indicating the independent noise at each pixel. 2.3 MAP Solution for Zero Noise Case Let us consider the noise-free case. Although this simplification may be inaccurate it will allow us to greatly reduce the number of free hidden variables - from N M + M L to M2 . In the noise-free case the variables A, S are then deterministically dependent on each other through a N L-dimensional 8-distribution, Pn(XIAS) = 8(X - AS). We can remove one of these variables from our discussion by integrating (2). It is instructive to first consider removing A p(SIX) <X I dA 8(X - AS)Pa(A)ps(S) = IS-1IPa(XS- 1 )Ps(S). (7) We omit tedious details and assume L = M and invertible S so that we can perform the variable substitution that introduces the Jacobian determinant IS-II . Let us consider the influence of the different terms. The Jacobian determinant measures the volume spanned by the endmembers S. Maximizing its inverse will therefore try to shrink the simplex spanned by S. The term Pa(XS- 1 ) should guarantee that all data points map into the inside of the simplex, since the term should contribute zero or low probability for points that violate the constraint. Note that these two terms, in principle, define the same objective as the simplex envelope fitting algorithms previously mentioned [2]. 
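The noise-free objective of eq. (7) can be sketched numerically (our hard-constraint reading of the abundance prior, with the source prior p_s(S) dropped for brevity): the posterior favors the smallest simplex spanned by S for which all inferred abundances X S^{-1} remain feasible.

```python
import numpy as np

def neg_log_post_S(S, X, slack=1e-9):
    """Sketch of the noise-free objective from eq. (7): minimize the simplex
    volume log|det S| subject to feasibility of the abundances X S^{-1}."""
    A = X @ np.linalg.inv(S)
    if not ((A >= -slack).all() and np.allclose(A.sum(axis=1), 1.0)):
        return np.inf                       # zero posterior outside the constraints
    return np.linalg.slogdet(S)[1]          # -log|det S^{-1}| = log|det S|

# The true simplex beats an inflated one; a shrunken one is infeasible
rng = np.random.default_rng(3)
S_true = np.eye(3)                          # toy endmembers (M = L = 3)
X = rng.dirichlet(np.ones(3), size=50) @ S_true
center = S_true.mean(axis=0)
S_big = center + 2.0 * (S_true - center)    # enlarged simplex, still contains the data
S_small = center + 0.5 * (S_true - center)  # shrunken simplex, leaves points outside
```

Evaluating the objective at the three candidate simplices shows the intended ordering: the true endmembers score best, the inflated simplex is feasible but penalized by its larger volume, and the shrunken simplex is ruled out entirely.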
In the present work we are more interested in the algorithm that results from removing S and finding the MAP estimate of A. We obtain (d. Eq.(7)) p(AIX) oc I dS 8(X - AS)Pa(A)ps(S) = IA -llps(A- 1X)Pa(A). (8) For now we assumed N = M. 1 If Ps (S) factors over m, i.e. endmembers are independent, maximizing the first two terms represents the leA algorithm. However, lIn practice more frequently we have N > M. In that case the observations X can be mapped into a M dimensional subspace using the singular value decomposition (SVD), X = UDVT , The discussion applies then to the reduced observations X = u1x with U M being the first M columns of U . 946 L. Parra. C. D. Spence. P Sajda. A. Ziehe and K.-R. Muller the prior on A will restrict the solutions to satisfy the abundance constraints and bias the result depending on the detailed choice of Pa(A), so we are led to constrained ICA. In summary, depending on which variable we integrate out we obtain two methods for solving the spectral unmixing problem: the known technique of simplex fitting and a new constrained ICA algorithm. 2.4 MAP Solution for the Noisy Case Combining the choices for the priors made in section 2.2 (Eqs.(4), (5) and (6)) with (2) and (3) we obtain (9) AMAP, SMAP = "''i~ax ft {g N(x", - a,s" a,) ll. P,(t. 'm,d",) } , subject to AIM = lL, A 2: O. The logarithm of the cost function in (9) is denoted by L = L(A, S). Its gradient with respect to the hidden variables is 88L = _AT nm diag(O')-l - fs(sm) (10) Sm where N = X - AS, nm are the M column vectors of N, fs(s) = - olnc;(s). In (10) fs is applied to each element of Sm. The optimization with respect to A for given S can be implemented as a standard weighted least squares (L8) problem with a linear constraint and positivity bounds. Since the constraints apply for every pixel independently one can solve N separate constrained LS problems of M unknowns each. We alternate between gradient steps for S and explicit solutions for A until convergence. 
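A minimal sketch of this alternating scheme (our implementation: the per-pixel constrained least squares is approximated by nonnegative least squares with a heavily weighted sum-to-one row, and the source-prior term in the S-gradient is switched off):

```python
import numpy as np
from scipy.optimize import nnls

def abundance_step(X, S, rho=1e3):
    """Per-pixel constrained LS for A: nonnegativity via NNLS, and the sum-to-one
    constraint enforced softly by an extra row weighted by rho (our choice)."""
    M = S.shape[0]
    Saug = np.vstack([S.T, rho * np.ones(M)])     # (L+1, M) augmented system
    A = np.empty((X.shape[0], M))
    for i, x in enumerate(X):
        A[i], _ = nnls(Saug, np.append(x, rho))
    return A

def spectra_step(X, A, S, lr=1e-2, lam=0.0):
    """One gradient step on S for the Gaussian noise model (unit noise variance
    assumed for brevity); lam would weight the source-prior derivative."""
    resid = X - A @ S
    grad = -A.T @ resid + lam * np.sign(S)        # cf. eq. (10), our simplification
    return S - lr * grad

# Alternate until convergence (fixed iteration count here) on toy data
rng = np.random.default_rng(2)
A_true = rng.dirichlet(np.ones(3), size=200)
S_true = np.abs(rng.normal(size=(3, 40)))
X = A_true @ S_true + 0.01 * rng.normal(size=(200, 40))

S = np.abs(rng.normal(size=(3, 40)))              # random initialization
for _ in range(100):
    A = abundance_step(X, S)
    S = spectra_step(X, A, S)
```

The N per-pixel problems are independent, as noted above, so the A-step parallelizes trivially; the weighted-row trick is a standard stand-in for an exact equality-constrained solver.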
Any additional parameters of p_e(e), such as scale and mean, may be obtained in a maximum likelihood (ML) sense by maximizing L. Note that the nonlinear optimization is not subject to constraints; the constraints apply only in the quadratic optimization.

3 Experiments

3.1 Zero Noise Case: Artificial Mixtures

In our first experiment we use mineral data from the United States Geological Survey (USGS)² to build artificial mixtures for evaluating our unsupervised unmixing framework. Three target endmembers were chosen (Almandine WS479, Montmorillonite+Illi CM42 and Dickite NMNH106242). A spectral scene of 100 samples was constructed by creating a random mixture of the three minerals. Of the 100 samples, there were no pure samples (i.e. no mineral had more than an 80% abundance in any sample). Figure 1A shows the spectra of the endmembers recovered by the constrained ICA technique of section 2.3, where the constraints were implemented with penalty terms added to the conventional maximum likelihood ICA algorithm. These are nearly identical to the spectra of the true endmembers, shown in figure 1B, which were used for mixing. Interesting to note is the scatter-plot of the 100 samples across two bands. The open circles are the absorption values at these two bands for endmembers found by the MAP technique. Given that each mixed sample consists of no more than 80% of any endmember, the endmember points on the scatter-plot are quite distant from the cluster. A simplex fitting technique would have significant difficulty recovering the endmembers from this clustering.

²See http://speclab.cr.usgs.gov/spectral.lib.456.descript/decript04.html

Figure 1: Results for noise-free artificial mixture. A recovered endmembers using MAP technique.
B "true" target endmembers. C scatter plot of samples across 2 bands showing the absorption of the three endmembers computed by MAP (open circles).

3.2 Noisy Case: Real Mixtures

To validate the noise model MAP framework of section 2.4 we conducted an experiment using ground-truthed USGS data representing real mixtures. We selected 10x10 blocks of pixels from three different regions³ in the AVIRIS data of the Cuprite, Nevada mining district. We separate these 300 mixed spectra assuming two endmembers and an AR detrending with 5 AR coefficients and the MAP techniques of section 2.4. Overall brightness was accounted for as explained in the linear modeling of section 1. The endmembers are shown in figure 2A and B in comparison to laboratory spectra from the USGS spectral library for these minerals [8]. Figure 2C shows the corresponding abundances, which match the ground truth; region (III) mainly consists of Muscovite while regions (I)+(II) contain (areal) mixtures of Kaolinite and Muscovite.

4 Discussion

Hyperspectral unmixing is a challenging practical problem for unsupervised learning. Our probabilistic approach leads to several interesting algorithms: (1) simplex fitting, (2) constrained ICA and (3) constrained least squares that can efficiently use multi-channel information. An important element of our approach is the explicit use of prior information. Our simulation examples show that we can recover the endmembers, even in the presence of noise and model uncertainty. The approach described in this paper does not yet exploit local correlations between neighboring pixels that are well known to exist. Future work will therefore exploit not only spectral but also spatial prior information for detecting objects and materials.

Acknowledgments

We would like to thank Gregg Swayze at the USGS for assistance in obtaining the data.
3The regions were taken from the image plate2.cuprite95.alpha.2um.image.wlocals.gif in ftp://speclab.cr.usgs.gov/pub/cuprite/gregg.thesis.images/, at the coordinates (265,710) and (275,697), which contained Kaolinite and Muscovite 2, and (143,661), which only contained Muscovite 2.

L. Parra, C.D. Spence, P. Sajda, A. Ziehe and K.-R. Muller

[Figure 2 appears here; the plots themselves are not reproduced. Panels: A, Muscovite (absorption vs. wavelength); B, Kaolinite (absorption vs. wavelength); C, abundance maps.]

Figure 2: A: Spectrum of the computed endmember (solid line) vs. a Muscovite sample spectrum from the USGS spectral library. Note we show only part of the spectrum, since the discriminating features are located only between bands 172 and 220. B: Computed endmember (solid line) vs. a Kaolinite sample spectrum from the USGS spectral library. C: Abundances of Kaolinite and Muscovite for the three regions (lighter pixels represent higher abundance). Regions 1 and 2 have similar abundances of Kaolinite and Muscovite, while region 3 contains more Muscovite.

References

[1] J. Bayliss, J.A. Gualtieri, and R. Cromp. Analyzing hyperspectral data with independent component analysis. In J.M. Selander, editor, Proc. SPIE Applied Image and Pattern Recognition Workshop, volume 9, Bellingham, WA, 1997. SPIE.

[2] J.W. Boardman and F.A. Kruse. Automated spectral analysis: a geologic example using AVIRIS data, north Grapevine Mountains, Nevada. In Tenth Thematic Conference on Geologic Remote Sensing, pages 407-418, Ann Arbor, MI, 1994. Environmental Research Institute of Michigan.

[3] S. Haykin. Adaptive Filter Theory. Prentice Hall, 1991.

[4] F. Maselli, M. Pieri, and C. Conese. Automatic identification of end-members for the spectral decomposition of remotely sensed scenes. Remote Sensing for Geography, Geology, Land Planning, and Cultural Heritage (SPIE), 2960:104-109, 1996.

[5] B. Pearlmutter and L. Parra.
Maximum likelihood blind source separation: A context-sensitive generalization of ICA. In M. Mozer, M. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems 9, pages 613-619, Cambridge, MA, 1997. MIT Press.

[6] J.J. Settle. Linear mixing and the estimation of ground cover proportions. International Journal of Remote Sensing, 14:1159-1177, 1993.

[7] M.O. Smith, J.B. Adams, and A.R. Gillespie. Reference endmembers for spectral mixture analysis. In Fifth Australian Remote Sensing Conference, volume 1, pages 331-340, 1990.

[8] U.S. Geological Survey. USGS digital spectral library. Open File Report 93-592, 1993.

[9] H. Szu and C. Hsu. Landsat spectral demixing a la superresolution of blind matrix inversion by constraint MaxEnt neural nets. In Wavelet Applications IV, volume 3078, pages 147-160. SPIE, 1997.
| 1999 | 66 | 1,716 |
Some Theoretical Results Concerning the Convergence of Compositions of Regularized Linear Functions

Tong Zhang
Mathematical Sciences Department, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598
tzhang@watson.ibm.com

Abstract

Recently, sample complexity bounds have been derived for problems involving linear functions such as neural networks and support vector machines. In this paper, we extend some theoretical results in this area by deriving dimension-independent covering number bounds for regularized linear functions under certain regularization conditions. We show that such bounds lead to a class of new methods for training linear classifiers with theoretical advantages similar to those of the support vector machine. Furthermore, we also present a theoretical analysis of these new methods from the asymptotic statistical point of view. This technique provides a better description of the large-sample behavior of these algorithms.

1 Introduction

In this paper, we are interested in the generalization performance of linear classifiers obtained from certain algorithms. From the computational learning theory point of view, such performance measurements, or sample complexity bounds, can be described by a quantity called the covering number [11, 15, 17], which measures the size of a parametric function family. For the two-class classification problem, the covering number can be bounded by a combinatorial quantity called the VC-dimension [12, 17]. Following this work, researchers have found other combinatorial quantities (dimensions) useful for bounding covering numbers. Consequently, the concept of VC-dimension has been generalized to deal with more general problems, for example in [15, 11]. Recently, Vapnik introduced the concept of the support vector machine [16], which has been successfully applied to many real problems. This method achieves good generalization by restricting the 2-norm of the weights of a separating hyperplane.
A similar technique has been investigated by Bartlett [3], where the author studied the performance of neural networks when the 1-norm of the weights is bounded. The same idea has also been applied in [13] to explain the effectiveness of the boosting algorithm. In this paper, we will extend their results and emphasize the importance of dimension independence. Specifically, we consider the following form of regularization method (with an emphasis on classification problems), which has been widely studied for regression problems both in statistics and in numerical mathematics:

inf_w E_{x,y} L(w, x, y) = inf_w E_{x,y} f(w^T x y) + lambda g(w),   (1)

where E_{x,y} is the expectation over a distribution of (x, y), and y in {-1, 1} is the binary label of data vector x. To apply this formulation for the purpose of training linear classifiers, we can choose f as a decreasing function such that f(.) >= 0, and choose g(w) >= 0 as a function that penalizes large w (g(w) -> infinity as ||w|| -> infinity). lambda is an appropriately chosen positive parameter to balance the two terms.

The paper is organized as follows. In Section 2, we briefly review the concept of covering numbers as well as the main results related to analyzing the performance of learning algorithms. In Section 3, we introduce the regularization idea. Our main goal is to construct regularization conditions so that dimension-independent bounds on covering numbers can be obtained. Section 4 extends results from the previous section to nonlinear compositions of linear functions. In Section 5, we give an asymptotic formula for the generalization performance of a learning algorithm, which is then used to analyze an instance of SVM. Due to the space limitation, we only present the main results and discuss their implications. The detailed derivations can be found in [18].
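The regularization scheme (1) can be made concrete with a tiny numerical sketch. The choices below are illustrative assumptions, not the paper's: f(z) = log(1 + e^{-z}) as the decreasing loss and g(w) = ||w||_2^2 as the penalty, minimized by plain gradient descent on a handful of synthetic labeled points.

```python
import math

def train(data, lam=0.01, lr=0.2, epochs=500):
    """Gradient descent on avg_i f(y_i * w.x_i) + lam * ||w||^2,
    with f(z) = log(1 + exp(-z)) (a decreasing, non-negative loss)."""
    d = len(data[0][0])
    w = [0.0] * d
    n = len(data)
    for _ in range(epochs):
        grad = [2.0 * lam * wj for wj in w]          # gradient of the penalty
        for x, y in data:
            z = y * sum(wi * xi for wi, xi in zip(w, x))
            c = -y / (1.0 + math.exp(z))             # d/dw of f(y * w.x), chain rule
            for j in range(d):
                grad[j] += c * x[j] / n
        w = [wj - lr * gj for wj, gj in zip(w, grad)]
    return w

# toy separable data: label is the sign of the first coordinate
data = [([1.0, 0.0], 1), ([2.0, 1.0], 1), ([-1.0, 0.5], -1), ([-2.0, -1.0], -1)]
w = train(data)
```

The learned hyperplane classifies the four points correctly; shrinking lambda trades a larger weight norm for a smaller empirical loss, which is exactly the balance the text describes.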
2 Covering numbers

We formulate the learning problem as finding a parameter from random observations to minimize risk: given a loss function L(alpha, x) and n observations X_1^n = {x_1, ..., x_n} independently drawn from a fixed but unknown distribution D, we want to find alpha that minimizes the expected loss over x (the risk):

R(alpha) = E_x L(alpha, x) = Integral L(alpha, x) dP(x).   (2)

The most natural method for solving (2) using a limited number of observations is the empirical risk minimization (ERM) method (cf. [15, 16]). We simply choose a parameter alpha that minimizes the observed risk:

R(alpha, X_1^n) = (1/n) Sum_{i=1}^n L(alpha, x_i).   (3)

We denote the parameter obtained in this way by alpha_erm(X_1^n). The convergence behavior of this method can be analyzed from the VC-theoretical point of view, which relies on the uniform convergence of the empirical risk (the uniform law of large numbers): sup_alpha |R(alpha, X_1^n) - R(alpha)|. Such a bound can be obtained from quantities that measure the size of a Glivenko-Cantelli class. For a finite number of indices, the family size can be measured simply by its cardinality. For general function families, a well-known quantity measuring the degree of uniform convergence is the covering number, which can be dated back to Kolmogorov [8, 9]. The idea is to discretize (possibly depending on the data X_1^n) the parameter space into N values alpha_1, ..., alpha_N so that each L(alpha, .) can be approximated by L(alpha_i, .) for some i. We shall only describe a simplified version relevant for our purposes.

Definition 2.1 Let B be a metric space with metric rho. Given observations X_1^n = [x_1, ..., x_n] and vectors f(alpha, X_1^n) = [f(alpha, x_1), ..., f(alpha, x_n)] in B^n parameterized by alpha, the covering number in p-norm, denoted N_p(f, epsilon, X_1^n), is the minimum number m of a collection of vectors v_1, ..., v_m in B^n such that for every alpha there exists v_i with ||rho(f(alpha, X_1^n), v_i)||_p <= n^{1/p} epsilon. We also denote N_p(f, epsilon, n) = max_{X_1^n} N_p(f, epsilon, X_1^n).

Note that from the definition and Jensen's inequality, we have N_p <= N_q for p <= q.
We will always assume the metric on R to be |x_1 - x_2| unless explicitly specified otherwise. The following theorem is due to Pollard [11]:

Theorem 2.1 ([11]) For all n, epsilon > 0 and distribution D,

P(sup_alpha |R(alpha, X_1^n) - R(alpha)| > epsilon) <= 8 E[N_1(L, epsilon/8, X_1^n)] exp(-n epsilon^2 / (128 M^2)),

where M = sup_{alpha,x} L(alpha, x) - inf_{alpha,x} L(alpha, x), and X_1^n = {x_1, ..., x_n} are independently drawn from D.

The constants in the above theorem can be improved for certain problems; see [4, 6, 15, 16] for related results. However, they yield very similar bounds. The result most relevant for this paper is a lemma in [3], where the 1-norm covering number is replaced by the infinity-norm covering number. The latter can be bounded by a scale-sensitive combinatorial dimension [1], which can in turn be bounded from the 1-norm covering number if that covering number does not depend on n. These results can replace Theorem 2.1 to yield better estimates under certain circumstances. Since Bartlett's lemma in [3] is only for binary loss functions, we give a generalization so that it is comparable to Theorem 2.1:

Theorem 2.2 Let f_1 and f_2 be two functions R -> [0, 1] such that |y_1 - y_2| <= gamma implies f_1(y_1) <= f_h(y_2) <= f_2(y_1), where f_h : R -> [0, 1] is a reference separating function. Then

P[sup_alpha (E_x f_1(L(alpha, x)) - E_{X_1^n} f_2(L(alpha, x))) > epsilon] <= 4 E[N_infinity(L, gamma, X_1^n)] exp(-n epsilon^2 / 32).

Note that in the extreme case where some choice of alpha achieves perfect generalization, E_x f_h(L(alpha, x)) = 0, and assuming that our choices of alpha(X_1^n) always satisfy the condition E_{X_1^n} f_h(L(alpha, x)) = 0, better bounds can be obtained by using a refined version of the Chernoff bound.

3 Covering number bounds for linear systems

In this section, we present a few new bounds on covering numbers for the following form of real-valued loss functions:

L(w, x) = x^T w = Sum_{i=1}^d x_i w_i.   (4)

As we shall see later, these bounds are relevant to the convergence properties of (1).
Note that in order to apply Theorem 2.1, since N_1 <= N_2, it is sufficient to estimate N_2(L, epsilon, n) for epsilon > 0. It is clear that N_2(L, epsilon, n) is not finite if no restrictions on x and w are imposed. Therefore in the following we assume that each ||x_i||_p is bounded, and study conditions on ||w||_q so that log N(f, epsilon, n) is independent of, or only weakly dependent on, d. Our first result generalizes a theorem of Bartlett [3]. The original result is for p = infinity and q = 1, and the related technique has also appeared in [10, 13]. The proof uses a lemma attributed to Maurey (cf. [2, 7]).

Theorem 3.1 If ||x_i||_p <= b and ||w||_q <= a, where 1/p + 1/q = 1 and 2 <= p <= infinity, then

log_2 N_2(L, epsilon, n) <= ceil(a^2 b^2 / epsilon^2) log_2(2d + 1).

The above bound on the covering number depends logarithmically on d, which is already quite weak (as compared to the linear dependency on d in the standard situation). However, the bound in Theorem 3.1 is not tight for p < infinity. For example, the following theorem improves the above bound for p = 2. Our technique of proof relies on the SVD decomposition [5] for matrices, and improves a similar result in [14] by a logarithmic factor.

The next theorem shows that if 1/p + 1/q > 1, then the 2-norm covering number is also independent of the dimension.

Theorem 3.3 Let L(w, x) = x^T w. If ||x_i||_p <= b and ||w||_q <= a, where 1 <= q <= 2 and delta = 1/p + 1/q - 1 > 0, then the 2-norm covering number N_2(L, epsilon, n) can be bounded independently of d.

One consequence of this theorem is a potentially refined explanation of the boosting algorithm. In [13], the boosting algorithm has been analyzed using a technique related to results in [3], which essentially rely on Theorem 3.1 with p = infinity. Unfortunately, the bound contains a logarithmic dependency on d (in the most general case), which does not seem to fully explain the fact that in many cases the performance of the boosting algorithm keeps improving as d increases.
However, this seemingly mysterious behavior might be better understood from Theorem 3.3 under the assumption that the data is more restricted than simply being infinity-norm bounded. For example, when the contribution of the wrong predictions is bounded by a constant (or grows very slowly as d increases), we can regard its p-th norm as bounded for some p < infinity. In this case, Theorem 3.3 implies dimension-independent generalization. If we want to apply Theorem 2.2, then it is necessary to obtain bounds for infinity-norm covering numbers. The following theorem gives such bounds by using a result from online learning.

Theorem 3.4 If ||x_i||_p <= b and ||w||_q <= a, where 2 <= p < infinity and 1/p + 1/q = 1, then an infinity-norm covering number bound independent of d holds for all epsilon > 0.

In the case of p = infinity, an entropy condition can be used to obtain dimension-independent covering number bounds.

Definition 3.1 Let mu = [mu_i] be a vector with positive entries such that ||mu||_1 = 1 (in this case, we call mu a distribution vector). Let x = [x_i] != 0 be a vector of the same length. We define the weighted relative entropy of x with respect to mu as:

entro_mu(x) = Sum_i |x_i| ln( |x_i| / (mu_i ||x||_1) ).

Theorem 3.5 Given a distribution vector mu, if ||x_i||_infinity <= b, ||w||_1 <= a and entro_mu(w) <= c, where we assume that w has non-negative entries, then for all epsilon > 0,

log_2 N_infinity(L, epsilon, n) <= (36 b^2 (a^2 + a c) / epsilon^2) log_2( 2 ceil(4ab/epsilon + 2) n + 1 ).

Theorems in this section can be combined with Theorem 4.1 to form more complex covering number bounds for nonlinear compositions of linear functions.

4 Nonlinear extensions

Consider the following system:

L([alpha, w], x) = f(g(alpha, x) + w^T h(alpha, x)),   (5)

where x is the observation and [alpha, w] is the parameter. We assume that f is a nonlinear function with bounded total variation.
Definition 4.1 A function f : R -> R is said to satisfy the Lipschitz condition with parameter gamma if for all x, y: |f(x) - f(y)| <= gamma |x - y|.

Definition 4.2 The total variation of a function f : R -> R is defined as

TV(f, x) = sup_{x_0 < x_1 < ... < x_L <= x} Sum_{l=1}^L |f(x_l) - f(x_{l-1})|.

We also write TV(f) for TV(f, infinity).

Theorem 4.1 Let L([alpha, w], x) = f(g(alpha, x) + w^T h(alpha, x)), where TV(f) < infinity and f is Lipschitz with parameter gamma. Assume also that w is a d-dimensional vector and ||w||_q <= c. Then for all epsilon_1, epsilon_2 > 0 and n > 2(d + 1):

log_2 N_1(L, epsilon_1 + epsilon_2, n) <= (d + 1) log_2[ (d e n / (2 epsilon_1)) max(floor(TV(f)), 1) ] + log_2 N_1([g, h], epsilon_2 / gamma, n),

where the metric of [g, h] is defined as |g_1 - g_2| + c ||h_1 - h_2||_p (1/p + 1/q = 1).

Example 4.1 Consider classification by a hyperplane: L(w, x) = I(w^T x < 0), where I is the set indicator function. Let L'(w, x) = f_0(w^T x) be another loss function, where f_0(z) = 1 for z < 0, f_0(z) = 1 - z for z in [0, 1], and f_0(z) = 0 for z > 1. Instead of using ERM to estimate the parameter that minimizes the risk of L, consider the scheme that minimizes the empirical risk associated with L', under the assumption that ||x||_2 <= b and the constraint ||w||_2 <= a. Denote the estimated parameter by w_n. It follows from the covering number bounds and Theorem 2.1 that, with probability at least 1 - eta, the risk of w_n exceeds the optimal risk by a term of order n^{-1/2} a b sqrt( ln(nab + 2) + ln(1/eta) ). If we apply a slight generalization of Theorem 2.2 and the covering number bound of Theorem 3.4, then with probability at least 1 - eta:

E_x I(w_n^T x <= 0) <= E_{X_1^n} I(w_n^T x <= 2 gamma) + O( (1/n) ( (a^2 b^2 / gamma^2) ln(ab/gamma + 2) + ln n + ln(1/eta) ) )

for all gamma in (0, 1].

Bounds given in this paper can be applied to show that, under appropriate regularization conditions and assumptions on the data, methods based on (1) lead to generalization performance of the form O(1/sqrt(n)), where the O symbol (which is independent of d) indicates that the hidden constant may include a polynomial dependency on log(n).
It is also important to note that in certain cases lambda will not appear in the constant of O (or has only a small influence on the convergence), as demonstrated by the example in the next section.

5 Asymptotic analysis

The convergence results in the previous sections are in the form of VC-style convergence in probability, which has a combinatorial flavor. However, for problems with differentiable function families involving vector parameters, it is often convenient to derive precise asymptotic results using the differential structure. Assume that the parameter alpha in R^m in (2) is a vector and L is a smooth function. Let alpha* denote the optimal parameter; let grad_alpha denote the derivative with respect to alpha; and let Psi(alpha, x) denote grad_alpha L(alpha, x). Assume that

V = Integral grad_alpha Psi(alpha*, x) dP(x),   U = Integral Psi(alpha*, x) Psi(alpha*, x)^T dP(x).

Then under certain regularity conditions, the asymptotic expected generalization error is given by

E R(alpha_erm) = R(alpha*) + (1/(2n)) tr(V^{-1} U).   (6)

More generally, for any evaluation function h(alpha) such that grad h(alpha*) = 0:

E h(alpha_erm) ~ h(alpha*) + (1/(2n)) tr(V^{-1} grad^2 h V^{-1} U),   (7)

where grad^2 h is the Hessian matrix of h at alpha*. Note that this approach assumes that the optimal solution is unique. These results are exact asymptotically and provide better bounds than those from the standard PAC analysis.

Example 5.1 We would like to study a form of the support vector machine. Consider

L(alpha, x) = f(alpha^T x) + (lambda/2) ||alpha||^2,   f(z) = 1 - z for z <= 1, f(z) = 0 for z > 1.

Because of the discontinuity in the derivative of f, the asymptotic formula may not hold. However, if we make an assumption on the smoothness of the distribution of x, then the expectation of the derivative over x can still be smooth. In this case, the smoothness of f itself is not crucial. Furthermore, in a separate report,
we shall illustrate that similar small-sample bounds, without any assumption on the smoothness of the distribution, can be obtained using techniques related to asymptotic analysis. Consider the optimal parameter alpha* and let S = {x : alpha*^T x <= 1}. Note that lambda alpha* = E_{x in S} x, and U = E_{x in S} (x - E_{x in S} x)(x - E_{x in S} x)^T. Assume that there exists gamma > 0 such that P(alpha*^T x <= gamma) = 0; then V = lambda I + B, where B is a positive semi-definite matrix. It follows that

tr(V^{-1} U) <= tr(U)/lambda <= E_{x in S} ||x||_2^2 ||alpha*||_2^2 / E_{x in S}[alpha*^T x] <= sup ||x||_2^2 ||alpha*||_2^2 / gamma.

Now consider an estimator obtained from observations X_1^n = [x_1, ..., x_n] by minimizing the empirical risk associated with the loss function L(alpha, x). Then, asymptotically,

E_x L(alpha_emp, x) <= inf_alpha E_x L(alpha, x) + (1/(2 gamma n)) sup ||x||_2^2 ||alpha*||_2^2.

Letting lambda -> 0, this scheme becomes the optimal separating hyperplane [16]. This asymptotic bound is better than typical PAC bounds with fixed lambda. Note that although the bound obtained in the above example is very similar to the mistake bound for the perceptron online update algorithm, we may in practice obtain much better estimates from (6) by plugging in the empirical data.

References

[1] N. Alon, S. Ben-David, N. Cesa-Bianchi, and D. Haussler. Scale-sensitive dimensions, uniform convergence, and learnability. Journal of the ACM, 44(4):615-631, 1997.

[2] A.R. Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39(3):930-945, 1993.

[3] P.L. Bartlett. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Transactions on Information Theory, 44(2):525-536, 1998.

[4] R.M. Dudley. A Course on Empirical Processes, volume 1097 of Lecture Notes in Mathematics. 1984.

[5] G.H. Golub and C.F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, MD, third edition, 1996.

[6] D. Haussler.
Generalizing the PAC model: sample size bounds from metric dimension-based uniform convergence results. In Proc. 30th IEEE Symposium on Foundations of Computer Science, pages 40-45, 1989.

[7] Lee K. Jones. A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training. Ann. Statist., 20(1):608-613, 1992.

[8] A.N. Kolmogorov. Asymptotic characteristics of some completely bounded metric spaces. Dokl. Akad. Nauk. SSSR, 108:585-589, 1956.

[9] A.N. Kolmogorov and V.M. Tihomirov. epsilon-entropy and epsilon-capacity of sets in functional spaces. Amer. Math. Soc. Transl., 17(2):277-364, 1961.

[10] Wee Sun Lee, P.L. Bartlett, and R.C. Williamson. Efficient agnostic learning of neural networks with bounded fan-in. IEEE Transactions on Information Theory, 42(6):2118-2132, 1996.

[11] D. Pollard. Convergence of Stochastic Processes. Springer-Verlag, New York, 1984.

[12] N. Sauer. On the density of families of sets. Journal of Combinatorial Theory (Series A), 13:145-147, 1972.

[13] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: a new explanation for the effectiveness of voting methods. Ann. Statist., 26(5):1651-1686, 1998.

[14] J. Shawe-Taylor, P.L. Bartlett, R.C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Trans. Inf. Theory, 44(5):1926-1940, 1998.

[15] V.N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, New York, 1982. Translated from the Russian by Samuel Kotz.

[16] V.N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, New York, 1995.

[17] V.N. Vapnik and A.J. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16:264-280, 1971.

[18] Tong Zhang. Analysis of regularized linear functions for classification problems. Technical Report RC-21572, IBM, 1999.

PART IV: ALGORITHMS AND ARCHITECTURE
| 1999 | 67 | 1,717 |
Inference for the Generalization Error

Claude Nadeau
CIRANO, 2020 University, Montreal, Qc, Canada, H3A 2A5
jcnadeau@altavista.net

Yoshua Bengio
CIRANO and Dept. IRO, Universite de Montreal, Montreal, Qc, Canada, H3C 3J7
bengioy@iro.umontreal.ca

Abstract

In order to compare learning algorithms, experimental results reported in the machine learning literature often use statistical tests of significance. Unfortunately, most of these tests do not take into account the variability due to the choice of training set. We perform a theoretical investigation of the variance of the cross-validation estimate of the generalization error that takes into account the variability due to the choice of training sets. This allows us to propose two new ways to estimate this variance. We show, via simulations, that these new statistics perform well relative to the statistics considered by Dietterich (Dietterich, 1998).

1 Introduction

When applying a learning algorithm (or comparing several algorithms), one is typically interested in estimating its generalization error. Its point estimation is rather trivial through cross-validation. Providing a variance estimate of that estimation, so that hypothesis testing and/or confidence intervals are possible, is more difficult, especially, as pointed out in (Hinton et al., 1995), if one wants to take into account the variability due to the choice of the training sets (Breiman, 1996). A notable effort in that direction is Dietterich's work (Dietterich, 1998). Careful investigation of the variance to be estimated allows us to provide new variance estimates, which turn out to perform well.

Let us first lay out the framework in which we shall work. We assume that data are available in the form Z_1^n = {Z_1, ..., Z_n}. For example, in the case of supervised learning, Z_i = (X_i, Y_i) in Z, a subset of R^{p+q}, where p and q denote the dimensions of the X_i's (inputs) and the Y_i's (outputs). We also assume that the Z_i's are independent, with Z_i ~ P(Z).
Let L(D; Z), where D represents a subset of size n_1 <= n taken from Z_1^n, be a function from Z^{n_1} x Z to R. For instance, this function could be the loss incurred by the decision that a learning algorithm trained on D makes on a new example Z. We are interested in estimating n_mu == E[L(Z_1^n; Z_{n+1})], where Z_{n+1} ~ P(Z) is independent of Z_1^n. The subscript n stands for the size of the training set (Z_1^n here). The above expectation is taken over Z_1^n and Z_{n+1}, meaning that we are interested in the performance of an algorithm rather than in the performance of the specific decision function it yields on the data at hand. According to Dietterich's taxonomy (Dietterich, 1998), we deal with problems of type 5 through 8 (evaluating learning algorithms) rather than type 1 through 4 (evaluating decision functions). We call n_mu the generalization error even though it can also represent an error difference:

• Generalization error. We may take

L(D; Z) = L(D; (X, Y)) = Q(F(D)(X), Y),   (1)

where F(D) (with F(D) : R^p -> R^q) is the decision function obtained when training an algorithm on D, and Q is a loss function measuring the inaccuracy of a decision. For instance, we could have Q(yhat, y) = I[yhat != y], where I[.] is the indicator function, for classification problems, and Q(yhat, y) = ||yhat - y||^2, where ||.|| is the Euclidean norm, for "regression" problems. In that case n_mu is what most people call the generalization error.

• Comparison of generalization errors. Sometimes we are not interested in the performance of algorithms per se, but instead in how two algorithms compare with each other. In that case we may want to consider

L(D; Z) = L(D; (X, Y)) = Q(F_A(D)(X), Y) - Q(F_B(D)(X), Y),   (2)

where F_A(D) and F_B(D) are the decision functions obtained when training two algorithms (A and B) on D, and Q is a loss function. In this case n_mu would be a difference of generalization errors, as outlined in the previous example.
The generalization error is often estimated via some form of cross-validation. Since there are various versions of the latter, we lay out the specific form we use in this paper.

• Let S_j be a random set of n_1 distinct integers from {1, ..., n} (n_1 < n). Here n_1 represents the size of the training set, and we let n_2 = n - n_1 be the size of the test set.
• Let S_1, ..., S_J be independent such random sets, and let S_j^c = {1, ..., n} \ S_j denote the complement of S_j.
• Let Z_{S_j} = {Z_i | i in S_j} be the training set obtained by subsampling Z_1^n according to the random index set S_j. The corresponding test set is Z_{S_j^c} = {Z_i | i in S_j^c}.
• Let L(j, i) = L(Z_{S_j}; Z_i). According to (1), this could be the error an algorithm trained on the training set Z_{S_j} makes on example Z_i. According to (2), this could be the difference of such errors for two different algorithms.
• Let mu_hat_j = (1/K) Sum_{k=1}^K L(j, i_k^j), where i_1^j, ..., i_K^j are randomly and independently drawn from S_j^c. Here we draw K examples from the test set Z_{S_j^c} with replacement and compute the average error committed. The notation does not convey the fact that mu_hat_j depends on K, n_1 and n_2.
• Let mu_hat_j^inf = lim_{K -> inf} mu_hat_j = (1/n_2) Sum_{i in S_j^c} L(j, i) denote what mu_hat_j becomes as K increases without bound. Indeed, when sampling infinitely often from Z_{S_j^c}, each Z_i (i in S_j^c) is chosen with relative frequency 1/n_2, yielding the usual "average test error". The use of K is just a mathematical device to make the test examples sampled independently from S_j^c.

Then the cross-validation estimate of the generalization error considered in this paper is

^{n2}_{n1}mu_hat_J^K = (1/J) Sum_{j=1}^J mu_hat_j.

We note that this is an unbiased estimator of n1_mu = E[L(Z_1^{n_1}; Z_{n_1+1})] (not the same as n_mu). This paper is about the estimation of the variance of ^{n2}_{n1}mu_hat_J^K. We first study this variance theoretically in section 2, leading to two new variance estimators developed in section 3.
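The cross-validation estimate just described can be sketched directly. The following minimal example (not the paper's simulation code) draws J random train/test splits and averages the test error; the "learning algorithm" is a deliberately trivial stand-in (predict the training set's majority label) so the sketch stays self-contained.

```python
import random

def crossval_estimate(Z, n1, J, loss, train, rng):
    """The estimator mu_hat_J: average over J random splits (train size n1,
    test size n - n1) of the average test error mu_hat_j^inf."""
    n = len(Z)
    mus = []
    for _ in range(J):
        idx = list(range(n))
        rng.shuffle(idx)
        S, Sc = idx[:n1], idx[n1:]            # S_j and its complement S_j^c
        f = train([Z[i] for i in S])          # decision function F(Z_{S_j})
        mus.append(sum(loss(f, Z[i]) for i in Sc) / len(Sc))
    return sum(mus) / J, mus                  # overall estimate and per-split errors

def train_majority(D):
    """Toy 'algorithm': return the majority label of the training set."""
    ys = [y for _, y in D]
    return max(set(ys), key=ys.count)

def zero_one(f, z):
    """Classification loss Q of eq. (1): 1 on a mistake, 0 otherwise."""
    return 0.0 if f == z[1] else 1.0
```

On a dataset with an 80/20 label split, the majority rule's estimated error hovers around 0.2; the per-split values mu_hat_j are the raw material for the variance estimators of section 3.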
Section 4 shows part of a simulation study we performed to see how the proposed statistics behave compared to statistics already in use.

2 Analysis of Var[^{n2}_{n1}mu_hat_J^K]

Here we study Var[^{n2}_{n1}mu_hat_J^K]. This is important to understand why some inference procedures about n1_mu presently in use are inadequate, as we shall underline in section 4. This investigation also enables us to develop estimators of Var[^{n2}_{n1}mu_hat_J^K] in section 3. Before we proceed, we state the following useful lemma, proved in (Nadeau and Bengio, 1999).

Lemma 1 Let U_1, ..., U_k be random variables with common mean beta, common variance delta and Cov[U_i, U_j] = gamma for all i != j. Let pi = gamma/delta be the correlation between U_i and U_j (i != j). Let U_bar = (1/k) Sum_{i=1}^k U_i and S_U^2 = (1/(k-1)) Sum_{i=1}^k (U_i - U_bar)^2 be the sample mean and sample variance respectively. Then E[S_U^2] = delta - gamma and

Var[U_bar] = gamma + (delta - gamma)/k = delta (pi + (1 - pi)/k).

To study Var[^{n2}_{n1}mu_hat_J^K] we need to define the following covariances.

• Let sigma_0 = sigma_0(n_1) = Var[L(j, i)] when i is randomly drawn from S_j^c.
• Let sigma_1 = sigma_1(n_1, n_2) = Cov[L(j, i), L(j, i')] for i and i' randomly and independently drawn from S_j^c.
• Let sigma_2 = sigma_2(n_1, n_2) = Cov[L(j, i), L(j', i')], with j != j', and i and i' randomly and independently drawn from S_j^c and S_{j'}^c respectively.
• Let sigma_3 = sigma_3(n_1) = Cov[L(j, i), L(j, i')] for i, i' in S_j^c and i != i'. This is not the same as sigma_1. In fact, it may be shown that

sigma_1 = (sigma_0 + (n_2 - 1) sigma_3) / n_2 = sigma_3 + (sigma_0 - sigma_3)/n_2.   (3)

Let us look at the mean and variance of mu_hat_j and ^{n2}_{n1}mu_hat_J^K. Concerning expectations, we obviously have E[mu_hat_j] = n1_mu and thus E[^{n2}_{n1}mu_hat_J^K] = n1_mu. From Lemma 1, we have Var[mu_hat_j] = sigma_1 + (sigma_0 - sigma_1)/K, which implies

Var[mu_hat_j^inf] = Var[lim_{K -> inf} mu_hat_j] = lim_{K -> inf} Var[mu_hat_j] = sigma_1.

It can also be shown that Cov[mu_hat_j, mu_hat_j'] = sigma_2 for j != j', and therefore (using Lemma 1)

Var[^{n2}_{n1}mu_hat_J^K] = sigma_2 + (Var[mu_hat_j] - sigma_2)/J = sigma_2 + (sigma_1 + (sigma_0 - sigma_1)/K - sigma_2)/J.   (4)
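Lemma 1's identity Var[U_bar] = delta (pi + (1 - pi)/k) is easy to check numerically. The sketch below (illustrative, with assumed parameters delta = 1 and a shared-component construction for equicorrelated Gaussians) estimates the variance of the mean by Monte Carlo and compares it to the formula.

```python
import random, math

def mean_of_equicorrelated(k, pi, rng):
    """One draw of the mean of k N(0,1) variables with pairwise correlation pi,
    built as sqrt(pi)*common + sqrt(1-pi)*individual noise (delta = 1)."""
    c = rng.gauss(0.0, 1.0)
    us = [math.sqrt(pi) * c + math.sqrt(1.0 - pi) * rng.gauss(0.0, 1.0)
          for _ in range(k)]
    return sum(us) / k

def var_of_mean(k, pi, reps=20000, seed=0):
    """Monte Carlo estimate of Var[U_bar]; Lemma 1 predicts pi + (1 - pi)/k."""
    rng = random.Random(seed)
    xs = [mean_of_equicorrelated(k, pi, rng) for _ in range(reps)]
    m = sum(xs) / reps
    return sum((x - m) ** 2 for x in xs) / reps
```

With k = 10 and pi = 0.3 the formula gives 0.3 + 0.7/10 = 0.37, and the simulation agrees; note the variance does not vanish as k grows, which is the key point behind equation (4): positive correlation between splits puts a floor under the variance.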
We shall often encounter sigma_0, sigma_1, sigma_2 and sigma_3 in what follows, so some knowledge about these quantities is valuable. Here is what we can say about them.

Proposition 1 For given n_1 and n_2, we have 0 <= sigma_2 <= sigma_1 <= sigma_0 and 0 <= sigma_3 <= sigma_1.

Proof See (Nadeau and Bengio, 1999).

A natural question about the estimator ^{n2}_{n1}mu_hat_J^K is how n_1, n_2, K and J affect its variance.

Proposition 2 The variance of ^{n2}_{n1}mu_hat_J^K is non-increasing in J, K and n_2.

Proof See (Nadeau and Bengio, 1999).

Clearly, increasing K leads to smaller variance because the noise introduced by sampling with replacement from the test set disappears when this is done over and over again. Also, averaging over many train/test splits (increasing J) improves the estimation of n1_mu. Finally, all things being equal elsewhere (n_1 fixed among other things), the larger the size of the test sets, the better the estimation of n1_mu. The behavior of Var[^{n2}_{n1}mu_hat_J^K] with respect to n_1 is unclear, but we conjecture that in most situations it should decrease in n_1. Our argument goes like this. The variability in ^{n2}_{n1}mu_hat_J^K comes from two sources: sampling decision rules (the training process) and sampling testing examples. Holding n_2, J and K fixed freezes the second source of variation, as it depends solely on those three quantities, not on n_1. The problem to solve becomes: how does n_1 affect the first source of variation? It is not unreasonable to say that the decision function yielded by a learning algorithm is less variable when the training set is large. We conclude that the first source of variation, and thus the total variation (that is, Var[^{n2}_{n1}mu_hat_J^K]), is decreasing in n_1.

We advocate the use of the estimator

^{n2}_{n1}mu_hat_J^inf = (1/J) Sum_{j=1}^J mu_hat_j^inf,   (5)
as it is easier to compute and has smaller variance than ^{n2}_{n1}mu_hat_J^K (with J, n_1, n_2 held constant). We have

Var[^{n2}_{n1}mu_hat_J^inf] = lim_{K -> inf} Var[^{n2}_{n1}mu_hat_J^K] = sigma_2 + (sigma_1 - sigma_2)/J = sigma_1 (rho + (1 - rho)/J),   (6)

where rho = sigma_2/sigma_1 = Corr[mu_hat_j^inf, mu_hat_j'^inf].

3 Estimation of Var[^{n2}_{n1}mu_hat_J^inf]

We are interested in estimating ^{n2}_{n1}sigma_J^2 == Var[^{n2}_{n1}mu_hat_J^inf], where ^{n2}_{n1}mu_hat_J^inf is as defined in (5). We provide two different estimators of this variance. The first is simple but may have a positive or negative bias for the actual variance. The second is meant to be conservative: if our conjecture of the previous section is correct, its expected value exceeds the actual variance.

1st Method: Corrected Resampled t-Test. Let us recall that ^{n2}_{n1}mu_hat_J^inf = (1/J) Sum_{j=1}^J mu_hat_j^inf. Let sigma_hat^2 be the sample variance of the mu_hat_j^inf's. According to Lemma 1,

E[sigma_hat^2] = sigma_1 (1 - rho) = Var[^{n2}_{n1}mu_hat_J^inf] / (1/J + rho/(1 - rho)),   (7)

so that (1/J + rho/(1 - rho)) sigma_hat^2 is an unbiased estimator of Var[^{n2}_{n1}mu_hat_J^inf]. The only problem is that rho = rho(n_1, n_2) = sigma_2/sigma_1, the correlation between the mu_hat_j^inf's, is unknown and difficult to estimate. We use a naive surrogate for rho as follows. Recall that mu_hat_j^inf = (1/n_2) Sum_{i in S_j^c} L(Z_{S_j}; Z_i). For the purpose of building our estimator, let us make the approximation that L(Z_{S_j}; Z_i) depends only on Z_i and n_1. Then it is not hard to show (see (Nadeau and Bengio, 1999)) that the correlation between the mu_hat_j^inf's becomes n_2/(n_1 + n_2). Therefore our first estimator of Var[^{n2}_{n1}mu_hat_J^inf] is (1/J + rho_0/(1 - rho_0)) sigma_hat^2 with rho_0 = rho_0(n_1, n_2) = n_2/(n_1 + n_2), that is, (1/J + n_2/n_1) sigma_hat^2. This will tend to overestimate or underestimate Var[^{n2}_{n1}mu_hat_J^inf] according to whether rho_0 > rho or rho_0 < rho. Note that this first method requires basically no more computation than that already performed to estimate the generalization error by cross-validation.

2nd Method: Conservative Z. Our second method aims at overestimating Var[^{n2}_{n1}mu_hat_J^inf], which leads to conservative inference, that is, tests of hypothesis with actual size less than the nominal size. This is important because techniques currently in use have the opposite defect: they tend to be liberal (tests with actual size exceeding the nominal size), which is typically regarded as less desirable than being conservative.

Estimating ^{n2}_{n1}sigma_J^2 without bias is not trivial, as hinted above. However, we may estimate unbiasedly ^{n2}_{n1'}sigma_J^2 = Var[^{n2}_{n1'}mu_hat_J^inf], where n_1' = floor(n/2) - n_2 < n_1. Let ^{n2}_{n1'}sigma_hat_J^2 be the unbiased estimator, developed below, of the above variance. We argued in the previous section that Var[^{n2}_{n1'}mu_hat_J^inf] >= Var[^{n2}_{n1}mu_hat_J^inf]. Therefore ^{n2}_{n1'}sigma_hat_J^2 will tend to overestimate ^{n2}_{n1}sigma_J^2, that is,

E[^{n2}_{n1'}sigma_hat_J^2] = ^{n2}_{n1'}sigma_J^2 >= ^{n2}_{n1}sigma_J^2.

Here is how we may estimate ^{n2}_{n1'}sigma_J^2 without bias. For simplicity, assume that n is even. We randomly split our data Z_1^n into two disjoint data sets, D_1 and D_1^c, of size n/2 each. Let mu_hat_(1) be the statistic of interest (^{n2}_{n1'}mu_hat_J^inf) computed on D_1. This involves, among other things, drawing J train/test subsets from D_1. Let mu_hat_(1)^c be the statistic computed on D_1^c. Then mu_hat_(1) and mu_hat_(1)^c are independent, since D_1 and D_1^c are independent data sets, so that (1/2)(mu_hat_(1) - mu_hat_(1)^c)^2 is an unbiased estimate of ^{n2}_{n1'}sigma_J^2. This splitting process may be repeated M times, yielding D_m and D_m^c, with
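The first method reduces to a one-line correction of the usual resampled t statistic: the naive variance of the mean, sigma_hat^2 / J, is replaced by (1/J + n_2/n_1) sigma_hat^2. A minimal sketch (the numeric inputs below are made-up per-split errors, not data from the paper):

```python
import math

def corrected_resampled_t(mus, n1, n2, mu0=0.0):
    """t-type statistic for H0: n1_mu = mu0, using the corrected variance
    (1/J + n2/n1) * sample variance of the J per-split errors mus."""
    J = len(mus)
    mbar = sum(mus) / J
    s2 = sum((m - mbar) ** 2 for m in mus) / (J - 1)   # sigma_hat^2
    var = (1.0 / J + n2 / n1) * s2                     # corrected variance of mbar
    return (mbar - mu0) / math.sqrt(var)

def naive_resampled_t(mus, mu0=0.0):
    """The uncorrected statistic, which ignores between-split correlation."""
    J = len(mus)
    mbar = sum(mus) / J
    s2 = sum((m - mbar) ** 2 for m in mus) / (J - 1)
    return (mbar - mu0) / math.sqrt(s2 / J)
```

Because the correction inflates the variance estimate, the corrected statistic is always smaller in magnitude than the naive one, which is how it avoids the liberal behavior described in the text.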
Estimating ${}^{n_2}_{n_1}\sigma_J^2$ unbiasedly is not trivial, as hinted above. However, we may estimate unbiasedly ${}^{n_2}_{n_1'}\sigma_J^2 = \mathrm{Var}[{}^{n_2}_{n_1'}\hat\mu_J^\infty]$, where $n_1' = \lfloor\frac{n}{2}\rfloor - n_2 < n_1$. Let ${}^{n_2}_{n_1'}\hat\sigma_J^2$ be the unbiased estimator, developed below, of the above variance. We argued in the previous section that $\mathrm{Var}[{}^{n_2}_{n_1'}\hat\mu_J^\infty] \ge \mathrm{Var}[{}^{n_2}_{n_1}\hat\mu_J^\infty]$. Therefore ${}^{n_2}_{n_1'}\hat\sigma_J^2$ will tend to overestimate ${}^{n_2}_{n_1}\sigma_J^2$, that is, $E[{}^{n_2}_{n_1'}\hat\sigma_J^2] = {}^{n_2}_{n_1'}\sigma_J^2 \ge {}^{n_2}_{n_1}\sigma_J^2$. Here is how we may estimate ${}^{n_2}_{n_1'}\sigma_J^2$ without bias. For simplicity, assume that $n$ is even. We have to randomly split our data $Z_1^n$ into two distinct data sets, $D_1$ and $D_1^c$, of size $\frac{n}{2}$ each. Let $\hat\mu_{(1)}$ be the statistic of interest (${}^{n_2}_{n_1'}\hat\mu_J^\infty$) computed on $D_1$. This involves, among other things, drawing $J$ train/test subsets from $D_1$. Let $\hat\mu_{(1)}^c$ be the statistic computed on $D_1^c$. Then $\hat\mu_{(1)}$ and $\hat\mu_{(1)}^c$ are independent, since $D_1$ and $D_1^c$ are independent data sets, so that $\left(\hat\mu_{(1)} - \frac{\hat\mu_{(1)}+\hat\mu_{(1)}^c}{2}\right)^2 + \left(\hat\mu_{(1)}^c - \frac{\hat\mu_{(1)}+\hat\mu_{(1)}^c}{2}\right)^2 = \frac{1}{2}\left(\hat\mu_{(1)} - \hat\mu_{(1)}^c\right)^2$ is an unbiased estimate of ${}^{n_2}_{n_1'}\sigma_J^2$. This splitting process may be repeated $M$ times. Inference for the Generalization Error 311. This yields $D_m$ and $D_m^c$, with $D_m \cup D_m^c = Z_1^n$ and $D_m \cap D_m^c = \emptyset$, for $m = 1,\ldots,M$. Each split yields a pair $(\hat\mu_{(m)}, \hat\mu_{(m)}^c)$ such that $\frac{1}{2}(\hat\mu_{(m)} - \hat\mu_{(m)}^c)^2$ is unbiased for ${}^{n_2}_{n_1'}\sigma_J^2$. This allows us to use the following unbiased estimator of ${}^{n_2}_{n_1'}\sigma_J^2$: $${}^{n_2}_{n_1'}\hat\sigma_J^2 = \frac{1}{2M}\sum_{m=1}^{M}\left(\hat\mu_{(m)} - \hat\mu_{(m)}^c\right)^2. \qquad (8)$$ Note that, according to Lemma 1, $\mathrm{Var}[{}^{n_2}_{n_1'}\hat\sigma_J^2] = \frac{1}{4}\,\mathrm{Var}[(\hat\mu_{(m)} - \hat\mu_{(m)}^c)^2]\left(r + \frac{1-r}{M}\right)$, with $r = \mathrm{Corr}[(\hat\mu_{(i)} - \hat\mu_{(i)}^c)^2, (\hat\mu_{(j)} - \hat\mu_{(j)}^c)^2]$ for $i \ne j$. Simulations suggest that $r$ is usually close to 0, so that the above variance decreases roughly like $\frac{1}{M}$ for $M$ up to 20, say. The second method is therefore a bit more computation-intensive, since it requires performing cross-validation $M$ times, but it is expected to be conservative. 4 Simulation study We consider five different test statistics for the hypothesis $H_0: {}^{n_1}\mu = \mu_0$.
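The half-split variance estimator of Eq. (8) above can be sketched as follows. The `cv_estimate` callback is a hypothetical placeholder (not an API from the paper) standing for whatever cross-validation routine produces the error estimate on a data subset.

```python
import random

def conservative_variance(data, cv_estimate, J, n2, M=10, seed=0):
    """Half-split variance estimator of Eq. (8) (sketch).

    cv_estimate is a hypothetical callback: cv_estimate(subset, J, n2)
    returns the cross-validation error estimate computed on that subset.
    n = len(data) is assumed even.
    """
    rng = random.Random(seed)
    n = len(data)
    total = 0.0
    for _ in range(M):
        idx = list(range(n))
        rng.shuffle(idx)                      # random split into D_m, D_m^c
        half = n // 2
        d_m = [data[i] for i in idx[:half]]
        d_mc = [data[i] for i in idx[half:]]  # disjoint, independent of d_m
        mu, mu_c = cv_estimate(d_m, J, n2), cv_estimate(d_mc, J, n2)
        total += (mu - mu_c) ** 2             # (mu_(m) - mu_(m)^c)^2
    return total / (2.0 * M)                  # Eq. (8)
```

Each of the M terms is an independent-halves squared difference, so the estimator is unbiased for the variance at the reduced training size, and hence conservative for the variance of interest.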
The first three are methods already in use in the machine learning community; the last two are the new methods we put forward. They all have the following form: reject $H_0$ if $$\left|\frac{\hat\mu - \mu_0}{\sqrt{\widehat{\mathrm{Var}}}}\right| > c. \qquad (9)$$ Table 1 describes what they are.¹ We performed a simulation study to investigate the size (probability of rejecting the null hypothesis when it is true) and the power (probability of rejecting the null hypothesis when it is false) of the five test statistics shown in Table 1. We consider the problem of estimating generalization errors in the Letter Recognition classification problem (available from www.ics.uci.edu/pub/machine-learning-databases). The learning algorithms are: 1. Classification tree. We used the function tree in Splus version 4.5 for Windows. The default arguments were used and no pruning was performed. The function predict with option type="class" was used to retrieve the decision function of the tree: $F_A(Z_S)(X)$. Here the classification loss function $L_A(j,i) = I[F_A(Z_{S_j})(X_i) \ne Y_i]$ is equal to 1 whenever this algorithm misclassifies example $i$ when the training set is $S_j$; otherwise it is 0. 2. First nearest neighbor. We apply the first nearest neighbor rule with a distorted distance metric to pull down the performance of this algorithm to the level of the classification tree (as in (Dietterich, 1998)). We have $L_B(j,i)$ equal to 1 whenever this algorithm misclassifies example $i$ when the training set is $S_j$; otherwise it is 0. In addition to inference about the generalization errors ${}^{n_1}\mu_A$ and ${}^{n_1}\mu_B$ associated with those two algorithms, we also consider inference about ${}^{n_1}\mu_{A-B} = {}^{n_1}\mu_A - {}^{n_1}\mu_B = E[L_{A-B}(j,i)]$, where $L_{A-B}(j,i) = L_A(j,i) - L_B(j,i)$. We sample, without replacement, 300 examples from the 20000 examples available in the Letter Recognition data base. Repeating this 500 times, we obtain 500 sets of data of the form $\{Z_1, \ldots, Z_{300}\}$.
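The size/power estimation protocol just described (repeatedly draw a data set, compute the statistic, count rejections) can be sketched generically. `make_dataset` and `test_stat` are hypothetical placeholders for the data-generating and statistic-computing routines; they are not names from the paper.

```python
import random

def estimate_rejection_rate(make_dataset, test_stat, critical,
                            trials=500, seed=0):
    """Estimate a test's size or power by Monte Carlo (sketch).

    make_dataset(rng) -> one simulated data set (e.g. 300 sampled examples).
    test_stat(dataset) -> the standardized statistic of form (9).
    critical -> the threshold c from Table 1.
    Returns the fraction of trials in which |statistic| > c.
    """
    rng = random.Random(seed)
    rejections = sum(abs(test_stat(make_dataset(rng))) > critical
                     for _ in range(trials))
    return rejections / trials
```

Run with data generated under $H_0$ this estimates size; run under an alternative it estimates power.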
Once a data set $Z_1^{300} = \{Z_1, \ldots, Z_{300}\}$ has been generated, we may perform hypothesis testing based on the statistics shown in Table 1.

¹ When comparing two classifiers, (Nadeau and Bengio, 1999) show that the t-test is closely related to McNemar's test described in (Dietterich, 1998). The 5 x 2 cv procedure was developed in (Dietterich, 1998) with solely the comparison of classifiers in mind, but may trivially be extended to other problems as shown in (Nadeau and Bengio, 1999).

Name | statistic | variance estimate | c | Var/E[variance] ratio
t-test (McNemar) | ${}^{n_2}_{n_1}\hat\mu_1^\infty$ | $\frac{1}{n_2} SV(L(1,i))$ | $t_{n_2-1,\,1-\alpha/2}$ | $\frac{n_2\sigma_3 + (\sigma_0-\sigma_3)}{\sigma_0-\sigma_3} \ge 1$
resampled t | ${}^{n_2}_{n_1}\hat\mu_J^\infty$ | $\hat\sigma^2/J$ | $t_{J-1,\,1-\alpha/2}$ | $1 + J\frac{\rho}{1-\rho} \ge 1$
Dietterich's 5 x 2 cv | ${}^{n/2}_{n/2}\hat\mu_1^\infty$ | see (Dietterich, 1998) | $t_{5,\,1-\alpha/2}$ | ?
conservative Z | ${}^{n_2}_{n_1}\hat\mu_J^\infty$ | ${}^{n_2}_{n_1'}\hat\sigma_J^2$ | $z_{1-\alpha/2}$ | ${}^{n_2}_{n_1}\sigma_J^2 / {}^{n_2}_{n_1'}\sigma_J^2 \le 1$
corrected resampled t | ${}^{n_2}_{n_1}\hat\mu_J^\infty$ | $\left(\frac{1}{J} + \frac{n_2}{n_1}\right)\hat\sigma^2$ | $t_{J-1,\,1-\alpha/2}$ | $\frac{1+J\frac{\rho}{1-\rho}}{1+J\frac{n_2}{n_1}}$

Table 1: Description of five test statistics in relation to the rejection criterion shown in (9). $z_p$ and $t_{k,p}$ refer to the quantile $p$ of the $N(0,1)$ and Student $t_k$ distributions, respectively. $\hat\sigma^2$ is as defined above (7), and $SV(L(1,i))$ is the sample variance of the $L(1,i)$'s involved in ${}^{n_2}_{n_1}\hat\mu_1^\infty$. The ratio of the actual variance to the expected variance estimate (which comes from proper application of Lemma 1, except for Dietterich's 5 x 2 cv and the conservative Z) indicates whether a test will tend to be conservative (ratio less than 1) or liberal (ratio greater than 1).

A difficulty arises, however. For a given $n$ ($n = 300$ here), those methods do not aim at inference for the same generalization error. For instance, Dietterich's 5 x 2 cv test aims at ${}^{n/2}\mu$, while the others aim at ${}^{n_1}\mu$, where $n_1$ would usually be different for different methods (e.g., $n_1 = \frac{2n}{3}$ for the t-test statistic, and a larger $n_1$ for the resampled t-test statistic). In order to compare the different techniques, for a given $n$, we shall always aim at ${}^{n/2}\mu$, i.e. use $n_1 = \frac{n}{2}$.
However, for statistics involving ${}^{n_2}_{n_1}\hat\mu_J^\infty$ with $J > 1$, normal usage would call for $n_1$ to be 5 or 10 times larger than $n_2$, not $n_1 = n_2 = \frac{n}{2}$. Therefore, for those statistics, we also use $n_1 = \frac{n}{2}$ and $n_2 = \frac{n}{10}$, so that $\frac{n_1}{n_2} = 5$. To obtain ${}^{n/10}_{n/2}\hat\mu_J^\infty$ we simply throw out 40% of the data. For the conservative Z, we do the variance calculation as we would normally do ($n_2 = \frac{n}{10}$, for instance) to obtain ${}^{n_2}_{n/2-n_2}\hat\sigma_J^2 = {}^{n/10}_{2n/5}\hat\sigma_J^2$. However, in the numerator we compute both ${}^{n/2}_{n/2}\hat\mu_J^\infty$ and ${}^{n/10}_{n/2}\hat\mu_J^\infty$ instead of ${}^{n_2}_{n-n_2}\hat\mu_J^\infty$, as explained above. Note that the rationale that led to the conservative Z statistic is maintained, that is, ${}^{n/10}_{2n/5}\hat\sigma_J^2$ overestimates both $\mathrm{Var}[{}^{n/10}_{n/2}\hat\mu_J^\infty]$ and $\mathrm{Var}[{}^{n/2}_{n/2}\hat\mu_J^\infty]$: $E[{}^{n/10}_{2n/5}\hat\sigma_J^2] \ge \mathrm{Var}[{}^{n/10}_{n/2}\hat\mu_J^\infty] \ge \mathrm{Var}[{}^{n/2}_{n/2}\hat\mu_J^\infty]$. Figure 1 shows the estimated power of the different statistics when we are interested in $\mu_A$ and $\mu_{A-B}$. We estimate powers by computing the proportion of rejections of $H_0$. We see that tests based on the t-test or resampled t-test are liberal: they reject the null hypothesis with probability greater than the prescribed $\alpha = 0.1$ when the null hypothesis is true. The other tests appear to have sizes that are either not significantly larger than 10%, or barely so. Note that Dietterich's 5 x 2 cv is not very powerful (its curve has the lowest power at the extreme values of $\mu_0$). To make a fair comparison of power between two curves, one should mentally align the sizes (the bottoms) of the two curves. Indeed, even the resampled t-test and the conservative Z that throw out 40% of the data are more powerful. That is of course due to the fact that the 5 x 2 cv method uses $J = 1$ instead of $J = 15$. This is just a glimpse of a much larger simulation study.
When studying the corrected resampled t-test and the conservative Z in their natural habitat ($n_1 = \frac{9n}{10}$ and $n_2 = \frac{n}{10}$), we see that they are usually either right on the money in terms of size, or slightly conservative. Their powers appear equivalent. The simulations were performed with $J$ up to 25 and $M$ up to 20. We found that taking $J$ greater than 15 did not improve much the power of the statistics. Taking $M = 20$ instead of $M = 10$ does not lead to any noticeable difference in the distribution of the conservative Z. Taking $M = 5$ makes the statistic slightly less conservative. See (Nadeau and Bengio, 1999) for further details.

Figure 1: Powers of the tests about $H_0: \mu_A = \mu_0$ (left panel) and $H_0: \mu_{A-B} = \mu_0$ (right panel) at level $\alpha = 0.1$ for varying $\mu_0$. The dotted vertical lines correspond to the 95% confidence interval for the actual $\mu_A$ or $\mu_{A-B}$; therefore, that is where the actual size of the tests may be read. The solid horizontal line displays the nominal size of the tests, i.e. 10%. Estimated probabilities of rejection lying above the dotted horizontal line are significantly greater than 10% (at significance level 5%). Solid curves correspond to either the resampled t-test or the corrected resampled t-test; the resampled t-test is the one with the conspicuously inflated size. Curves with circled points are the versions of the ordinary and corrected resampled t-tests and the conservative Z with 40% of the data thrown away. Where it matters, $J = 15$ and $M = 10$ were used.

5 Conclusion

This paper addresses a very important practical issue in the empirical validation of new machine learning algorithms: how to decide whether one algorithm is significantly better than another. We argue that it is important to take into account the variability due to the choice of training set. (Dietterich, 1998) had already proposed a statistic for this purpose.
We have constructed two new variance estimates of the cross-validation estimator of the generalization error. These enable one to construct tests of hypothesis and confidence intervals that are seldom liberal. Furthermore, tests based on these have powers that are unmatched by any known techniques with comparable size. One of them (the corrected resampled t-test) can be computed without any additional cost over the usual K-fold cross-validation estimates. The other one (the conservative Z) requires M times more computation, where we found sufficiently good values of M to be between 5 and 10.

References

Breiman, L. (1996). Heuristics of instability and stabilization in model selection. Annals of Statistics, 24(6):2350-2383.

Dietterich, T. (1998). Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation, 10(7):1895-1924.

Hinton, G., Neal, R., Tibshirani, R., and DELVE team members (1995). Assessing learning procedures using DELVE. Technical report, University of Toronto, Department of Computer Science.

Nadeau, C. and Bengio, Y. (1999). Inference for the generalisation error. Technical Report in preparation, CIRANO.
|
1999
|
68
|
1,718
|
LTD Facilitates Learning In a Noisy Environment

Paul Munro, School of Information Sciences, University of Pittsburgh, Pittsburgh PA 15260, pwm+@pitt.edu
Gerardina Hernandez, Intelligent Systems Program, University of Pittsburgh, Pittsburgh PA 15260, gehst5+@pitt.edu

Abstract

Long-term potentiation (LTP) has long been held as a biological substrate for associative learning. Recently, evidence has emerged that long-term depression (LTD) results when the presynaptic cell fires after the postsynaptic cell. The computational utility of LTD is explored here. Synaptic modification kernels for both LTP and LTD have been proposed by other laboratories based on studies of one postsynaptic unit. Here, the interaction between time-dependent LTP and LTD is studied in small networks.

1 Introduction

Long-term potentiation (LTP) is a neurophysiological phenomenon observed under laboratory conditions in which two neurons or neural populations are stimulated at a high frequency, with a resulting measurable increase in synaptic efficacy between them that lasts for several hours or days [1]-[2]. LTP thus provides direct evidence supporting the neurophysiological hypothesis articulated by Hebb [3]. This increase in synaptic strength must be countered by a mechanism for weakening the synapse [4]. The biological correlate, long-term depression (LTD), has also been observed in the laboratory; that is, synapses are observed to weaken when low presynaptic activity coincides with high postsynaptic activity [5]-[6]. Mathematical formulations of Hebbian learning produce weights $w_{ij}$ (where $i$ is the presynaptic unit and $j$ is the postsynaptic unit) that capture the covariance [Eq. 1] between the instantaneous activities of pairs of units, $a_i$ and $a_j$ [7]: $$\dot w_{ij}(t) = (a_i(t) - \bar a_i)(a_j(t) - \bar a_j) \qquad [1]$$ This idea has been generalized to capture covariance between activities that are shifted in time [8]-[9], resulting in a framework that can model systems with temporal delays and dependencies [Eq. 2].
$$\dot w_{ij}(t) = \iint K(t'' - t')\, a_i(t'')\, a_j(t')\, dt''\, dt' \qquad [2]$$ As will be shown in the following sections, depending on the choice of the function $K(\Delta t)$, this formulation encompasses a broad range of learning rules [10]-[12] and can support a comparably broad range of biological evidence.

Figure 1. Synaptic change as a function of the time difference between spikes from the presynaptic neuron and the postsynaptic neuron. Note that for $t_{pre} < t_{post}$, LTP results ($\Delta w > 0$), and for $t_{pre} > t_{post}$, the result is LTD.

Recent biological data from [13]-[15] indicate an increase in synaptic strength (LTP) when presynaptic activity precedes postsynaptic activity, and LTD in the reverse case (postsynaptic precedes presynaptic). These ideas have started to appear in some theoretical models of neural computation [10]-[12], [16]-[18]. Thus, Figure 1 shows the form of the dependence of synaptic change $\Delta w$ on the difference in spike arrival times.

2 A General Framework

Given specific assumptions, the integral in Eq. 2 can be separated into two integrals, one representing LTP and one representing LTD [Eq. 3]: $$\dot w_{ij}(t) = \underbrace{\int_{t'=-\infty}^{t} K_P(t-t')\, a_i(t')\, a_j(t)\, dt'}_{\text{LTP}} + \underbrace{\int_{t'=-\infty}^{t} K_D(t-t')\, a_i(t)\, a_j(t')\, dt'}_{\text{LTD}} \qquad [3]$$ The activities that do not depend on $t'$ can be factored out of the integrals, giving two Hebb-like products between the instantaneous activity in one cell and a weighted time-average of the activity in the other [Eq. 4]: $$\dot w_{ij}(t) = \langle a_i(t)\rangle_P\, a_j(t) - a_i(t)\, \langle a_j(t)\rangle_D, \quad \text{where } \langle f(t)\rangle_X \equiv \int_{t'=-\infty}^{t} K_X(t-t')\, f(t')\, dt' \text{ for } X \in \{P, D\} \qquad [4]$$ The kernel functions $K_P$ and $K_D$ can be chosen to select precise times out of the convolved function $f(t)$, or to average across the function over an arbitrary range. The alpha function is useful here [Eq. 5]. A high value of $\alpha$ selects an immediate time, while a small value approximates a longer time-average: $$K_X(\tau) = \beta_X\, e^{-\alpha_X \tau} \quad \text{for } X \in \{P, D\}, \qquad \text{with } \alpha_P > 0,\ \alpha_D > 0,\ \beta_P > 0,\ \beta_D < 0 \qquad [5]$$
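Assuming the exponential kernel form of Eq. [5], the time-average $\langle f \rangle_X$ of Eq. [4] reduces, in discrete time, to a leaky trace. The following is a sketch with illustrative parameter values, not the authors' implementation.

```python
import math

def kernel_trace(activity, alpha=0.5, beta=1.0, dt=1.0):
    """Discrete-time <f(t)>_X with K_X(tau) = beta * exp(-alpha * tau).

    alpha and beta are illustrative values: a large alpha weights only the
    most recent activity, a small alpha gives a long time-average.
    """
    decay = math.exp(-alpha * dt)
    trace, out = 0.0, []
    for a in activity:
        trace = decay * trace + beta * a * dt  # decayed history + new input
        out.append(trace)
    return out
```

For an impulse input the trace decays geometrically, which is exactly the exponentially weighted history the kernel integral prescribes.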
For high values of $\alpha_P$ and $\alpha_D$, only pre- and postsynaptic activities that are very close temporally will interact to modify the synapse. In a simulation with discrete step sizes, this can be reasonably approximated by considering just a single time step [Eq. 6]: $$\Delta w_{ij}(t) = a_i(t-1)\, a_j(t) - a_i(t)\, a_j(t-1) \qquad [6]$$ Summing $\Delta w_{ij}(t)$ and $\Delta w_{ij}(t+1)$ gives a net change in the weights, $\Delta^{(2)} w_{ij} = w_{ij}(t+1) - w_{ij}(t-1)$, over the two time steps: $$\Delta^{(2)} w_{ij} = a_i(t)\,[\,a_j(t+1) - a_j(t-1)\,] - a_j(t)\,[\,a_i(t+1) - a_i(t-1)\,] \qquad [7]$$ The first term is predictive in that it has the form of the delta rule, where $a_j(t+1)$ acts as a training signal for $a_j(t-1)$, as in a temporal Hopfield network [9].

3 Temporal Contrast Enhancement

The computational role of the LTP term in Eq. 3 is well established, but how does the second term contribute? One possibility is that the term is analogous to lateral inhibition in the temporal domain; that is, by suppressing associations in the "wrong" temporal direction, the system may be more robust against noise in the input. The resulting system may be able to detect the onset and offset of a signal more reliably than a system not using an anti-Hebbian LTD term. The extent to which the LTD term is able to enhance temporal contrast is likely to depend idiosyncratically on the statistical qualities of a particular system. If so, the parameters of the system might only be valid for signals with specific statistical properties, or the parameters might be adaptive. Either of these possibilities lies beyond the scope of analysis for this paper.

4 Simulations

Two preliminary simulation studies illustrate the use of the learning rule for predictive behavior and for temporal contrast enhancement. For every simulation, kernel functions were specified by the parameters $\alpha$ and $\beta$, and the number of time steps, $n_P$ and $n_D$, that were sampled for the approximation of each integral.

4.1 Task 1. A Sequential Shifter

The first task is a simple shifter over a set of 7 to 20 units.
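Before turning to the tasks, the single-time-step rule of Eq. [6] is easy to state in code; `lr` is an illustrative learning-rate factor not specified in the text. Note that the update is antisymmetric ($\Delta w_{ij} = -\Delta w_{ji}$), which is why a network trained with it cannot converge to symmetric weights.

```python
def ltp_ltd_step(a_prev, a_now, lr=0.1):
    """One step of the discrete asymmetric Hebbian rule, Eq. [6] (sketch).

    delta_w[i][j] = lr * (a_i(t-1) * a_j(t) - a_i(t) * a_j(t-1)):
    LTP when unit i fired before unit j, LTD in the reverse order.
    """
    n = len(a_now)
    return [[lr * (a_prev[i] * a_now[j] - a_now[i] * a_prev[j])
             for j in range(n)] for i in range(n)]
```

For example, if unit 0 is active at $t-1$ and unit 1 at $t$, the synapse 0 -> 1 is potentiated while 1 -> 0 is depressed by the same amount.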
The system is trained on these stimuli and then tested to see if it can reconstruct the sequence given the initial input. The task is given with no noise and with temporal noise (see Figure 2). Task 1 is designed to examine the utility of LTD as an approach to learning a sequence with temporal noise. The ability of the network to reconstruct the noise-free sequence after training on the noisy sequence was tested for different LTD kernel functions. Note that the same patterns are presented (for each time slice, just one of the n units is active), but the shifts either skip or repeat in time. Experiments were run with k = 1, 2, or 3 of the units active.

4.2 Task 2. Time Series Reconstruction

In this task, a set of units was trained on external sinusoidal signals that varied according to frequency and phase. The purpose of this task is to examine the role of LTD in providing temporal context. The network was then tested under a condition in which the external signals were provided to all but one of the units. The activity of the deprived unit was then compared with its training signal.

Figure 2. Reconstruction of the clean shifter sequence using the noisy shifter sequence as the input stimulus. [Panels show the clean sequence, the noisy sequence, and reconstructions with LTP alone and with LTP + LTD.] For each time slice, just one of the 7 units is active. In the clean sequence, activity shifts cyclically around the 7 units.
The noisy sequence has a random jitter of ±1.

5 Results

Sequential Shifter Results

All networks trained on the clean sequence can learn the task with LTP alone, but no network could learn the shifter task from a noisy training sequence unless there was also an LTD term. Without an LTD term, most units would saturate to maximum values. For a range of LTD parameters, the network would converge without saturating. Reconstruction performance was found to be sensitive to the LTD parameters. The parameters $\alpha$ and $\beta$ shown in Table 1 needed to be chosen very specifically to get perfect reconstruction (this was done by trial and error). For a narrow range of parameters near the optimal values, the reconstructed sequence was close to the noise-free target. The parameters $\alpha$ and $\beta$ shown in Table 2, however, are estimated from the experimental results of Zhang et al. [15].

Table 1. Results of the sequential shifter task.

k | n  | n_r | a_P | b_P  | n_P | a_D | b_D | n_D | Time
1 | 7  | 1   | 1   | 2.72 | 1   | 0.1 | 0.4 | 5   | 208
2 | 7  | 1   | 1   | 2.72 | 1   | 0.1 | 0.4 | 4   | 40
2 | 7  | 1   | 0.5 | 0.4  | 3   | 0.2 | 0.1 | 7   | 192
3 | 7  | 1   | 0.5 | 0.4  | 1   | 0.2 | 0.1 | 6   | 168
1 | 10 | 1   | 1   | 2.72 | 1   | 0.1 | 0.4 | 8   | 682
2 | 10 | 1   | 1   | 2.72 | 1   | 0.1 | 0.4 | 7   | 99
1 | 15 | 1   | 1   | 2.72 | 1   | 0.1 | 0.4 | 13  | 1136
1 | 20 | 1   | 1   | 2.72 | 1   | 0.1 | 0.4 | 18  | 4000

The task was to shift a pattern 1 unit with each time step. A block of k of the n units was active. The parameters of the kernel functions ($\alpha$ and $\beta$), the number of values sampled from the kernel (the number of time slices used to estimate the integral), $n_P$ and $n_D$, and the number of steps used to begin the reconstruction, $n_r$ (usually $n_r = 1$), are given in the table. The last column (Time) reports the number of iterations required for perfect reconstruction.

Table 2. Results of the sequential shifter task using the parameters $n_r = 1$; $n_P = 1$; $\alpha_P = 0.5$; $\beta_D = -\alpha_D \cdot e \cdot 0.35$; $\alpha_D = 0.125$; $\beta_P = \alpha_P \cdot e \cdot 0.8$.

k | n | n_D | Time
1 | 7 | 6   | 288
2 | 7 | 5   | 96
3 | 7 | 4   | 64

For the above results, the k active units were always adjacent with respect to the shifting direction.
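A sketch of how the Task 1 stimulus might be generated: a cyclic one-hot sequence (the k = 1 case) with optional ±1 temporal jitter producing the skips and repeats of Figure 2. The uniform choice over {-1, 0, +1} is an assumption for illustration; the paper does not specify the noise distribution.

```python
import random

def shifter_sequence(n=7, steps=14, jitter=False, seed=0):
    """Cyclic one-hot shifter stimulus (k = 1 case), optionally jittered.

    With jitter, the active position may repeat (step 0) or skip (step 2)
    instead of always advancing by 1, as in the noisy sequence of Figure 2.
    """
    rng = random.Random(seed)
    pos, frames = 0, []
    for _ in range(steps):
        frame = [0] * n
        frame[pos % n] = 1        # exactly one of the n units is active
        frames.append(frame)
        step = 1 + (rng.choice([-1, 0, 1]) if jitter else 0)
        pos += step
    return frames
```

`shifter_sequence(7, 14)` yields the clean cyclic sequence; passing `jitter=True` yields a noisy training sequence of the kind the LTD term helps the network denoise.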
For cases with noncontiguous active units, reconstruction was never exact. Networks trained with LTP alone would saturate, but would converge to a sequence "close" to the target (Fig. 3) if an LTD term was added.

Figure 3. [Panels show the clean sequence, the noisy sequence, and reconstructions with LTP alone and with LTP + LTD.] This base pattern (k = 2, n = 7) with noncontiguous active units was presented as a shifted sequence with noise. The target sequence is partially reconstructed only when LTP and LTD are used together.

5.1 Time Series Reconstruction Results

A network of just four units was trained for hundreds of iterations; the units were each externally driven by a sinusoidally varying input. Networks trained with LTP alone fail to reconstruct the time series on units deprived of external input during testing. In these simulations there is no noise in the patterns, but LTD is shown to be necessary for reconstruction of the patterns (Fig. 4).

Figure 4. Reconstruction of sinusoids. Target signals from training (dashed) plotted with reconstructed signals (solid). Left: the best reconstruction using LTP alone. Right: a typical result with LTP and LTD together.

For high values of $\alpha_P$ and $\alpha_D$, the reconstruction of sinusoids is very sensitive to the values of $\beta_D$ and $\beta_P$. Figure 5 shows the results when $|\beta_D|$ and $\beta_P$ are close in value.
In the first case (top), when $|\beta_D|$ is slightly smaller than $\beta_P$, the first two neurons (from left to right) saturate. In the contrary case (bottom), the first two neurons show almost null activation. However, the third and fourth neurons (from left to right) in both cases (top and bottom) show predictive behavior.

Figure 5. Reconstruction of sinusoids. Examples of target signals from training (dashed) plotted with reconstructed signals (solid). Top: when $|\beta_D| < \beta_P$. Bottom: when $|\beta_D| > \beta_P$.
6 Discussion

In the half century that has elapsed since Hebb articulated his neurophysiological postulate, the neuroscience community has come to recognize its fundamental role in plasticity. Hebb's hypothesis clearly transcends its original motivation to give a neurophysiologically based account of associative memory. The phenomenon of LTP provides direct biological support for Hebb's postulate, and hence has clear cognitive implications. Initially after its discovery in the laboratory, the computational role of LTD was thought to be the flip side of LTP. This interpretation would have synapses strengthen when activities are correlated and weaken when they are anti-correlated. Such a theory is appealing for its elegance, and has formed the basis of many network models [19]-[20]. However, the dependence of synaptic change on the relative timing of pre- and postsynaptic activity that has recently been shown in the laboratory is inconsistent with this story, and calls for a computational interpretation. A network trained with such a learning rule cannot converge to a state where the weights are symmetric, for example, since $\Delta w_{ij} \ne \Delta w_{ji}$. While the simulations reported here are simple and preliminary, they illustrate two tasks that benefit from the inclusion of time-dependent LTD. In the case of the sequential shifter, an examination of more complex predictive tasks is planned for the near future. It is expected that this will require architectures with unclamped (hidden) units. The role of LTD here is to temporally enhance contrast, in a way analogous to the role of lateral inhibition in computing spatial contrast enhancement in the retina. The time-series example illustrates the possible role of LTD in providing temporal context.
7 References

[1] Bliss TVP & Lømo T (1973) Long-lasting potentiation of synaptic transmission in the dentate area of the unanaesthetized rabbit following stimulation of the perforant path. J Physiol 232:331-356.
[2] Malenka RC (1995) LTP and LTD: dynamic and interactive processes of synaptic plasticity. The Neuroscientist 1:35-42.
[3] Hebb DO (1949) The Organization of Behavior. Wiley: NY.
[4] Stent G (1973) A physiological mechanism for Hebb's postulate of learning. Proc. Natl. Acad. Sci. USA 70:997-1001.
[5] Barrionuevo G, Schottler F & Lynch G (1980) The effects of repetitive low frequency stimulation on control and "potentiated" synaptic responses in the hippocampus. Life Sci 27:2385-2391.
[6] Thiels E, Xie X, Yeckel MF, Barrionuevo G & Berger TW (1996) NMDA receptor-dependent LTD in different subfields of hippocampus in vivo and in vitro. Hippocampus 6:43-51.
[7] Sejnowski TJ (1977) Storing covariance with nonlinearly interacting neurons. J. Math. Biol. 4:303-321.
[8] Sutton RS (1988) Learning to predict by the methods of temporal differences. Machine Learning 3:9-44.
[9] Sompolinsky H and Kanter I (1986) Temporal association in asymmetric neural networks. Phys. Rev. Lett. 57:2861-2864.
[10] Gerstner W, Kempter R, van Hemmen JL & Wagner H (1996) A neuronal learning rule for sub-millisecond temporal coding. Nature 383:76-78.
[11] Kempter R, Gerstner W & van Hemmen JL (1999) Spike-based compared to rate-based Hebbian learning. Kearns MS, Solla SA and Cohn DA, Eds. Advances in Neural Information Processing Systems 11. MIT Press, Cambridge MA.
[12] Kempter R, Gerstner W, van Hemmen JL & Wagner H (1996) Temporal coding in the sub-millisecond range: Model of barn owl auditory pathway. Touretzky DS, Mozer MC, Hasselmo ME, Eds. Advances in Neural Information Processing Systems 8. MIT Press, Cambridge MA, pp. 124-130.
[13] Markram H, Lubke J, Frotscher M & Sakmann B (1997) Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs.
Science 275:213-215.
[14] Markram H & Tsodyks MV (1996) Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature 382:807-810.
[15] Zhang L, Tao HW, Holt CE & Poo M (1998) A critical window for cooperation and competition among developing retinotectal synapses. Nature 395:37-44.
[16] Abbott LF & Blum KI (1996) Functional significance of long-term potentiation for sequence learning and prediction. Cerebral Cortex 6:406-416.
[17] Abbott LF & Song S (1999) Temporally asymmetric Hebbian learning, spike timing and neuronal response variability. Kearns MS, Solla SA and Cohn DA, Eds. Advances in Neural Information Processing Systems 11. MIT Press, Cambridge MA.
[18] Goldman MS, Nelson SB & Abbott LF (1998) Decorrelation of spike trains by synaptic depression. Neurocomputing (in press).
[19] Hopfield J (1982) Neural networks and physical systems with emergent collective computational properties. Proc. Natl. Acad. Sci. USA 79:2554-2558.
[20] Ackley DH, Hinton GE, Sejnowski TJ (1985) A learning algorithm for Boltzmann machines. Cognitive Science 9:147-169.
|
1999
|
69
|
1,719
|
Broadband Direction-Of-Arrival Estimation Based On Second Order Statistics

Justinian Rosca, Joseph Ó Ruanaidh, Alexander Jourjine, Scott Rickard
{rosca,oruanaidh,jourjine,rickard}@scr.siemens.com
Siemens Corporate Research, Inc., 755 College Rd E, Princeton, NJ 08540

Abstract

N wideband sources recorded using N closely spaced receivers can feasibly be separated based only on second order statistics when using a physical model of the mixing process. In this case we show that the parameter estimation problem can be essentially reduced to considering the directions of arrival and attenuations of each signal. The paper presents two demixing methods operating in the time and frequency domains, and shows experimentally that it is always possible to demix signals arriving at different angles. Moreover, one can use spatial cues to solve the channel selection problem, and a post-processing Wiener filter to ameliorate the artifacts caused by demixing.

1 Introduction

Blind source separation (BSS) is capable of dramatic results when used to separate mixtures of independent signals. The method relies on simultaneous recordings of signals from two or more input sensors and separates the original sources purely on the basis of statistical independence between them. Unfortunately, the BSS literature is primarily concerned with the idealistic instantaneous mixing model. In this paper, we formulate a low-dimensional and fast solution to the problem of separating two signals from a mixture recorded using two closely spaced receivers. Using a physical model of the mixing process reduces the complexity of the model and allows one to identify and to invert the mixing process using second order statistics only. We describe the theoretical basis of the new approach, and then focus on two algorithms, which were implemented and successfully applied to extensive sets of real-world data. In essence, our separation architecture is a system of adaptive directional receivers designed using the principles of BSS.
The method bears resemblance to methods in beamforming [8] in that it works by spatial filtering. Array processing techniques [2] reduce noise by separating the signal space from the noise space, which necessitates more receivers than emitters. The main differences are that standard beamforming and array processing techniques [8, 2] are generally strictly concerned with processing directional narrowband signals. The difference from BSS [7, 6] is that our approach is model-based and therefore the elements of the mixing matrix are highly constrained: a feature that aids in the robust and reliable identification of the mixing process. The layout of the paper is as follows. Sections 2 and 3 describe the theoretical foundation of the separation method that was pursued. Section 4 presents the algorithms that were developed and experimental results. Finally, we summarize and conclude this work.

2 Theoretical foundation for the BSS solution

As a first approximation to the general multi-path model, we use the delay-mixing model. In this model, only direct-path signal components are considered. Signal components from one source arrive with a fractional delay between the times of arrival at the two receivers. By fractional delays, we mean that delays between receivers are not generally integer multiples of the sampling period. The delay depends on the position of the source with respect to the receiver axis and the distance between receivers. Our BSS algorithms demix by compensating for the fractional delays. This, in effect, is a form of adaptive beamforming, with directional notches being placed in the direction of sources of interference [8]. A more detailed account of the analytical structure of the solutions can be found in [1]. Below we address the case of two inputs and two outputs, but there is no reason why the discussion cannot be generalized to multiple inputs and multiple outputs.
Assume a linear mixture of two sources, where source amplitude drops off in proportion to distance:

x_i(t) = (1/R_{i1}) s_1(t - R_{i1}/c) + (1/R_{i2}) s_2(t - R_{i2}/c),  i = 1, 2   (1)

where c is the speed of wave propagation, and R_{ij} indicates the distance from receiver i to source j. This describes signal propagation through a uniform non-dispersive medium. In the Fourier domain, Equation 1 results in a mixing matrix A(ω) given by:

A(ω) = [ (1/R_{11}) e^{-jω R_{11}/c}   (1/R_{12}) e^{-jω R_{12}/c}
         (1/R_{21}) e^{-jω R_{21}/c}   (1/R_{22}) e^{-jω R_{22}/c} ]   (2)

It is important to note that the columns can be scaled arbitrarily without affecting separation of sources because rescaling is absorbed into the sources. This implies that row scaling in the demixing matrix (the inverse of A(ω)) is arbitrary. Using the Cosine Rule, R_{ij} can be expressed in terms of the distance R_j of source j to the midpoint between the two receivers, the direction of arrival θ_j of source j, and the distance between receivers, d, as follows:

R_{ij} = [ R_j^2 + (d/2)^2 + 2 (-1)^i (d/2) R_j cos θ_j ]^{1/2}   (3)

Expanding the right term above using the binomial expansion and preserving only zeroth and first order terms, we can express the distance from the receivers to the sources as:

R_{ij} = ( R_j + d^2/(8 R_j) ) + (-1)^i (d/2) cos θ_j   (4)

This approximation is valid within a 5% relative error when d is small compared with R_j. With the substitution for R_{ij}, and with the redefinition of source j to include the delay due to the term within brackets in Equation 4 divided by c, Equation 1 becomes:

x_i(t) = Σ_j (1/R_{ij}) s_j( t + (-1)^i (d/(2c)) cos θ_j ),  i = 1, 2   (5)

In the Fourier domain, Equation 5 results in the simplification of the mixing matrix A(ω):

A(ω) = [ (1/R_{11}) e^{-jω δ_1}   (1/R_{12}) e^{-jω δ_2}
         (1/R_{21}) e^{ jω δ_1}   (1/R_{22}) e^{ jω δ_2} ]   (6)

Here the phases are functions of the directions of arrival θ_i (defined with respect to the midpoint between the receivers), the distance between receivers d, and the speed of propagation c: δ_i = (d/(2c)) cos θ_i, i = 1, 2.
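As a numerical sanity check, the first-order expansion of Equation 4 can be compared against the exact cosine-rule distance of Equation 3. The geometry values below (spacing, range, angle) are assumed for illustration, not taken from the paper's experiments:

```python
import numpy as np

# Compare the exact cosine-rule distance (Equation 3) with its first-order
# expansion (Equation 4). d and R are assumed, illustrative values, chosen
# so that d is small compared with R.
d = 0.05                      # receiver spacing [m]
R = 1.0                       # distance from source to array midpoint [m]
theta = np.deg2rad(60.0)      # direction of arrival

for i in (1, 2):              # the two receivers
    exact = np.sqrt(R**2 + (d/2)**2 + 2*(-1)**i * (d/2) * R * np.cos(theta))
    approx = (R + d**2/(8*R)) + (-1)**i * (d/2) * np.cos(theta)
    print(i, abs(exact - approx) / exact)   # relative error well below 5%
```

The residual error is dominated by the dropped second-order term, so it shrinks rapidly as d/R decreases.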
R_{ij} are unknown, but we can again redefine the sources so that the diagonal elements are unity:

A(ω) = [ e^{-jω δ_1}       c_1 e^{-jω δ_2}
         c_2 e^{ jω δ_1}   e^{ jω δ_2} ]   (7)

where c_1, c_2 are two positive real numbers. In wireless communications sources are typically distant compared to the antenna separation. For distant sources and a well matched pair of receivers c_1 ≈ c_2 ≈ 1. Equation 7 describes the mixing matrix for the delay model in the frequency domain in terms of four parameters, δ_1, δ_2, c_1, c_2. The corresponding ideal demixing matrix W(ω), for each frequency ω, is given by:

W(ω) = [A(ω)]^{-1} = (1/det A(ω)) [ e^{ jω δ_2}        -c_1 e^{-jω δ_2}
                                    -c_2 e^{ jω δ_1}    e^{-jω δ_1} ]   (8)

The outputs, estimating the sources, are:

[ z_1(ω) ; z_2(ω) ] = W(ω) [ x_1(ω) ; x_2(ω) ]
                    = (1/det A(ω)) [ e^{ jω δ_2}        -c_1 e^{-jω δ_2}
                                     -c_2 e^{ jω δ_1}    e^{-jω δ_1} ] [ x_1(ω) ; x_2(ω) ]   (9)

Making the transition back to the time domain results in the following estimate of the outputs:

z_1(t) = h(t, D_1, D_2, c_1, c_2) ⊗ ( x_1(t + D_2) - c_1 x_2(t) )
z_2(t) = h(t, D_1, D_2, c_1, c_2) ⊗ ( c_2 x_1(t + D_1) - x_2(t) )   (10)

where ⊗ is convolution, D_i = 2 δ_i is the relative inter-receiver delay of source i, and

h(t, D_1, D_2, c_1, c_2) is the impulse response of H(ω, δ_1, δ_2, c_1, c_2) = 1/det A(ω),
with det A(ω) = e^{jω(δ_2 - δ_1)} - c_1 c_2 e^{-jω(δ_2 - δ_1)}.   (11)

Formulae 9 and 10 form the basis for two algorithms to be described next, in the time and frequency domains. The algorithms have the role of determining the four unknown parameters. Note that the filter corresponding to H(ω, δ_1, δ_2, c_1, c_2) should be applied to the output estimates in order to map back to the original inputs.

3 Delay and attenuation compensation algorithms

The estimation of the four unknown parameters δ_1, δ_2, c_1, c_2 can be carried out based on second order criteria that impose the constraint that the outputs are decorrelated ([9, 4, 6, 5]).

3.1 Time and frequency domain approaches

The time domain algorithm is based on the idea of imposing the decorrelation constraint ⟨z_1(t), z_2(t)⟩ = 0 between the estimates of the outputs, as a function of the delays D_1 and D_2 and scalar coefficients c_1 and c_2. This is equivalent to the following criterion:

(D̂_1, D̂_2, ĉ_1, ĉ_2) = argmin |F(D_1, D_2, c_1, c_2)|   (12)

where F(·) measures the cross-correlation between the signals given below, representing filtered versions of the differences of fractionally delayed measurements:

z_1(t) = h(t, D_1, D_2, c_1, c_2) ⊗ ( x_1(t + D_2) - c_1 x_2(t) )
z_2(t) = h(t, D_1, D_2, c_1, c_2) ⊗ ( c_2 x_1(t + D_1) - x_2(t) )
F(D_1, D_2, c_1, c_2) = ⟨z_1(t), z_2(t)⟩   (13)

In the frequency domain, the cross-correlation of the inputs is expressed as follows:

R^X(ω) = A(ω) R^S(ω) A^H(ω)   (14)

The mixing matrix in the frequency domain has the form given in Equation 7. Inverting this cross-correlation equation yields four equations that are written in matrix form as:

R^S(ω) = W(ω) R^X(ω) W^H(ω)   (15)

Source orthogonality implies that the off-diagonal terms in the covariance matrix must be zero:

R^S_{12}(ω) = 0,  R^S_{21}(ω) = 0   (16)

For far field conditions (i.e. when the distance between the receivers is much less than the distance to the sources, so that c_1 ≈ c_2 ≈ 1) one obtains the following equations:

(a/b) R^X_{11}(ω) + (b/a) R^X_{22}(ω) = ab R^X_{21}(ω) + (1/(ab)) R^X_{12}(ω)   (17)

together with the counterpart obtained from R^S_{21}(ω) = 0. The terms a = e^{-jω δ_1} and b = e^{-jω δ_2} are functions of the time delays. Note that there is a pair of equations of this kind for each frequency. In practice, the unknowns should be estimated from data at all available frequencies to obtain a robust estimate.

3.2 Channel selection

Up to this point, there was no guarantee that the estimated parameters would ensure source separation in some specific order. We could not decide a priori whether the estimated parameters for the first output channel correspond to the first or second source. However, the dependence of the phase delays on the angles of arrival suggests a way to break the permutation symmetry in source estimation, that is, to decide precisely which estimate to present on the first channel (and henceforth on the second channel as well). The core idea is that directionality and spatial cues provide the information required to break the symmetry. The criterion we use is to sort sources in order of increasing delay. Note that the correspondence between delays and sources is unique when sources are not symmetrical with respect to the receiver axis.
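The algebra above can be checked numerically. The sketch below (all parameter values are assumed for illustration) builds the mixing matrix of Equation 7 at one frequency, inverts it per Equation 8, and verifies that for uncorrelated sources the demixed cross-spectrum has vanishing off-diagonal terms:

```python
import numpy as np

# Numerical check of the mixing matrix of Equation 7 and the demixing
# matrix of Equation 8 at a single frequency. All values are assumed.
w = 2*np.pi*1000.0                        # angular frequency [rad/s]
d, c = 0.05, 343.0                        # spacing [m], propagation speed [m/s]
d1 = (d/(2*c))*np.cos(np.deg2rad(30.0))   # delta_1
d2 = (d/(2*c))*np.cos(np.deg2rad(110.0))  # delta_2
c1, c2 = 0.95, 0.97                       # attenuations, near 1 in the far field

A = np.array([[np.exp(-1j*w*d1), c1*np.exp(-1j*w*d2)],
              [c2*np.exp(1j*w*d1), np.exp(1j*w*d2)]])
detA = A[0, 0]*A[1, 1] - A[0, 1]*A[1, 0]
W = (1.0/detA)*np.array([[A[1, 1], -A[0, 1]],
                         [-A[1, 0], A[0, 0]]])      # inverse of A

# For uncorrelated sources the source cross-spectrum is diagonal, so the
# demixed cross-spectrum W Rx W^H must be diagonal as well.
Rs = np.diag([2.0, 0.5])
Rx = A @ Rs @ A.conj().T                  # input cross-spectrum
Rs_hat = W @ Rx @ W.conj().T
print(np.allclose(W @ A, np.eye(2)), abs(Rs_hat[0, 1]) < 1e-9)
```

This is the identity the estimation criteria exploit: only when the demixing parameters match the true ones do the output cross-terms vanish.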
When sources are symmetric there is no way of distinguishing between their positions because the cosine of the angles of arrival, and hence the delay, is invariant to the sign of the angle.

4 Experimental results

A robust implementation of criterion 12 averages cross-correlations over a number of windows of a given size. More precisely, F is defined as follows:

F(δ_1, δ_2) = Σ_{Blocks} |⟨z_1(t), z_2(t)⟩|^q   (18)

Normally q = 1 to obtain a robust estimate. Ngo and Bhadkamkar [5] suggest a similar criterion using q = 2 without making use of the determinant of the mixing matrix. After taking into account all terms from Equation 18, including the determinant of the mixing matrix A, we obtain the function to be used for parameter estimation in the frequency domain:

F̂(δ_1, δ_2) = Σ_ω [ 1/( |det A|^2 + η ) ] · | (a/b) R^X_{11}(ω) + (b/a) R^X_{22}(ω) - ab R^X_{21}(ω) - (1/(ab)) R^X_{12}(ω) |^q   (19)

where η is a (Wiener filter-like) constant that helps prevent singularities and q is normally set to one.

Computing the separated sources using only time differences leads to high-pass filtered outputs. In order to implement exactly the theoretical demixing procedure presented, one has to divide by the determinant of the mixing matrix. Obviously one could filter using the inverse of the determinant to obtain optimal results. This can be implemented in the form of a Wiener filter. The Wiener filter requires knowledge of both the signal and noise power spectral densities. This information is not available to us, but a reasonable approximation is to assume that the (wideband) sources have a flat spectral density and the noise corrupting the mixtures is white. In this case, the Wiener filter becomes:

H_W(ω) = [ {det A(ω)}^2 / ( {det A(ω)}^2 + η ) ] · [ 1 / det A(ω) ]   (20)

where the parameter η has been empirically set to the variance of the mixture. Applying this choice of filter usually dramatically improves the quality of the separated outputs.
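A minimal sketch of the regularised inverse of Equation 20 is shown below, with assumed delays, attenuations and η (the paper sets η to the mixture variance). The factor |det A|²/(|det A|² + η) tapers the gain of 1/det A exactly at the frequencies where det A(ω) approaches singularity:

```python
import numpy as np

# Wiener-like postfilter of Equation 20, sketched over a frequency grid.
# Delay difference, attenuation product and eta are assumed values.
D = 1.0e-4                             # delta_2 - delta_1 [s]
c1c2 = 0.9                             # product of attenuation ratios
eta = 0.05
w = 2*np.pi*np.linspace(50.0, 8000.0, 1000)

detA = np.exp(1j*w*D) - c1c2*np.exp(-1j*w*D)   # det of the matrix in Eq. 7
H_inv = 1.0/detA                               # plain inverse filter
H_w = (np.abs(detA)**2/(np.abs(detA)**2 + eta))*H_inv

# The regularised filter never exceeds the plain inverse in magnitude and
# is strongly attenuated wherever detA is nearly singular.
print(np.all(np.abs(H_w) <= np.abs(H_inv) + 1e-12))
```

The design choice is the usual Wiener trade-off: accept a small bias at well-conditioned frequencies in exchange for bounded gain near the zeros of det A.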
The technique of postprocessing using the determinant of the mixing matrix is perfectly general and applies equally well to demixtures computed using matrices of FIR filters. The quality of the result depends primarily on the care with which the inverse filter is implemented. It also depends on the accuracy of the estimate for the mixing parameters. One should avoid using the Wiener filter for near-degenerate mixtures.

The proof of concept for the theory outlined above was obtained using speech signals, which if anything pose a greater challenge to separation algorithms because of the correlation structure of speech. Two kinds of data are considered in this paper: synthetic direct propagation delay data and synthetic multi-path data. Data can be characterized along two dimensions of difficulty: synthetic vs. real-world, and direct path vs. multi-path. Combinations along these dimensions represented the main types of data we used. The value of the distance between receivers dictates the order of delays that can appear due to direct path propagation, which is used by the demixing algorithms. Data was generated synthetically employing fractional delays corresponding to the various positions of the sources [3]. We modeled multi-path by taking into account the decay in signal amplitude due to propagation distance as well as the absorption of waves. Only the direct path and one additional path were considered.

The algorithms developed proved successful for separation of two voices from direct path mixtures, even where the sources had very similar spectral power characteristics, and for separation of one source for multi-path mixtures. Moreover, outputs were free from artifacts and were obtained with modest computational requirements. Figure 1 presents mean separation results of the first and second channels, which correspond to the first and second sources, for various synthetic data sets. Separation depends on the angles of arrival.
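The fractional delays used to generate the synthetic mixtures can be realised with an interpolation filter in the spirit of [3]. The sketch below uses a Hamming-windowed sinc filter; the filter length and window are illustrative choices, not the paper's exact design:

```python
import numpy as np

# Delay a signal by a non-integer number of samples with a windowed-sinc
# interpolation filter (cf. Laakso et al. [3]). Filter length and window
# are assumed, illustrative choices.
def fractional_delay(x, delay, taps=41):
    n = np.arange(taps) - (taps - 1)/2
    h = np.sinc(n - delay)*np.hamming(taps)   # shifted, windowed sinc
    h /= h.sum()                              # normalise DC gain to 1
    return np.convolve(x, h, mode="same")

fs = 8000.0
t = np.arange(400)/fs
x = np.sin(2*np.pi*440.0*t)
y = fractional_delay(x, 0.3)                  # delay by 0.3 samples
# Away from the edges, y closely matches the ideally delayed sinusoid:
err = np.max(np.abs(y[50:350] - np.sin(2*np.pi*440.0*(t[50:350] - 0.3/fs))))
print(err)
```

Applying two such filters with delays derived from the source angles yields a synthetic two-receiver delay mixture of the kind evaluated in Figure 1.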
Plots show no separation in the degenerate case of equal or nearby angles of arrival, but more than 10 dB mean separation in the anechoic case and 5 dB in the multi-path case.

Figure 1: Two sources were positioned at a relatively large distance from a pair of closely spaced receivers. The first source was always placed at zero degrees whilst the second source was moved uniformly from 30 to 330 degrees in steps of 30 degrees. The plots show mean separation and standard deviation error bars of the first and second sources for six synthetic delay mixtures or synthetic multi-path data mixtures using the time and frequency domain algorithms.

5 Conclusions

The present source separation approach is based on minimization of cross-correlations of the estimated sources, in the time or frequency domains, when using a delay model and explicitly employing direction of arrival. The great advantage of this approach is that it reduces source separation to a decorrelation problem, which is theoretically solved by a system of equations. Although the delay model used generates essentially anechoic time delay algorithms, the results of this work show systematic improvements even when the algorithms are applied to real multi-path data. In all cases separation improvement is robust with respect to the power ratios of sources.

Acknowledgments

We thank Radu Balan and Frans Coetzee for useful discussions and proofreading various versions of this document, and our collaborators within Siemens for providing extensive data for testing.
References

[1] A. Jourjine, S. Rickard, J. Ó Ruanaidh, and J. Rosca. Demixing of anechoic time delay mixtures using second order statistics. Technical Report SCR-99-TR-657, Siemens Corporate Research, 755 College Road East, Princeton, New Jersey, 1999.
[2] Hamid Krim and Mats Viberg. Two decades of array signal processing research. IEEE Signal Processing Magazine, 13(4), 1996.
[3] Tim Laakso, Vesa Valimaki, Matti Karjalainen, and Unto Laine. Splitting the unit delay. IEEE Signal Processing Magazine, pages 30-60, 1996.
[4] L. Molgedey and H.G. Schuster. Separation of a mixture of independent signals using time delayed correlations. Phys. Rev. Lett., 72(23):3634-3637, July 1994.
[5] T.J. Ngo and N.A. Bhadkamkar. Adaptive blind separation of audio sources by a physically compact device using second order statistics. In First International Workshop on ICA and BSS, pages 257-260, Aussois, France, January 1999.
[6] Lucas Parra, Clay Spence, and Bert De Vries. Convolutive blind source separation based on multiple decorrelation. In NNSP98, 1998.
[7] K. Torkkola. Blind separation for audio signals: Are we there yet? In First International Workshop on Independent Component Analysis and Blind Source Separation, pages 239-244, Aussois, France, January 1999.
[8] B. Van Veen and Kevin M. Buckley. Beamforming: A versatile approach to spatial filtering. IEEE ASSP Magazine, 5(2), 1988.
[9] E. Weinstein, M. Feder, and A. Oppenheim. Multi-channel signal separation by decorrelation. IEEE Trans. on Speech and Audio Processing, 1(4):405-413, 1993.
An Oculo-Motor System with Multi-Chip Neuromorphic Analog VLSI Control

Oliver Landolt*
CSEM SA, 2007 Neuchatel, Switzerland
E-mail: landolt@caltech.edu

Steve Gyger
CSEM SA, 2007 Neuchatel, Switzerland
E-mail: steve.gyger@csem.ch

Abstract

A system emulating the functionality of a moving eye, hence the name oculo-motor system, has been built and successfully tested. It is made of an optical device for shifting the field of view of an image sensor by up to 45° in any direction, four neuromorphic analog VLSI circuits implementing an oculo-motor control loop, and some off-the-shelf electronics. The custom integrated circuits communicate with each other primarily by non-arbitrated address-event buses. The system implements the behaviors of saliency-based saccadic exploration and smooth pursuit of light spots. The duration of saccades ranges from 45 ms to 100 ms, which is comparable to human eye performance. Smooth pursuit operates on light sources moving at up to 50°/s in the visual field.

1 INTRODUCTION

Inspiration from biology has been recognized as a seminal approach to address some engineering challenges, particularly in the computational domain [1]. Researchers have borrowed architectures, operating principles and even micro-circuits from various biological neural structures and turned them into analog VLSI circuits [2]. Neuromorphic approaches are often considered to be particularly suited for machine vision, because even simple animals are fitted with neural systems that can easily outperform most sequential digital computers in visual processing tasks. It has long been recognized that the level of visual processing capability needed for practical applications would require more circuit area than can be fitted on a single chip.
This observation has triggered the development of inter-chip communication schemes suitable for neuromorphic analog VLSI circuits [3]-[4], enabling the combination of several chips into a system capable of addressing tasks of higher complexity. Despite the availability of these communication protocols, only few successful implementations of multi-chip neuromorphic systems have been reported so far (see [5] for a review). The present contribution reports the completion of a fully functional multi-chip system emulating the functionality of a moving eye, hence the denomination oculo-motor system. It is made of two 2D VLSI retina chips, two custom analog VLSI control chips, dedicated optical and mechanical devices and off-the-shelf electronic components. The four neuromorphic chips communicate mostly by pulse streams mediated by non-arbitrated address-event buses [4]. In its current version, the system can generate saccades (quick eye movements) toward salient points of the visual scene, and track moving light spots. The purpose of the saccadic operating mode is to explore the visual scene efficiently by allocating processing time proportionally to significance. The purpose of tracking (also called smooth pursuit) is to slow down or suppress the retinal image slip of moving objects in order to leave the visual circuitry more time for processing. The two modes, saccadic exploration and smooth pursuit, operate concurrently and interact with each other. The development of this oculo-motor system was meant as a framework in which some general issues pertinent to neuromorphic engineering could be addressed. In this respect, it complements Horiuchi's pioneering work [6]-[7], which consisted of developing a 1D model of the primate oculo-motor system with a focus on automatic on-chip learning of the correct control function.

* Now with Koch Lab, Division of Biology 139-74, Caltech, Pasadena, CA 91125, USA
The new system addresses different issues, notably 2D operation and the problem of strongly non-linear mapping between 2D visual and motor spaces.

2 SYSTEM DESCRIPTION

The oculo-motor system is made of three modules (Fig. 1). The moving eye module contains a 35 by 35 pixel electronic retina [8] fitted with a light deflection device driven by two motors. This device can shift the field of view of the retina by up to 45° in any direction. The optics are designed to cover only a narrow field of view of about 12°. Thereby, the retina serves as a high-resolution "spotlight" gathering details of interesting areas of the visual scene, similarly to the fovea of animals. Two position control loops implemented by off-the-shelf components keep the optical elements in the position specified by input signals applied to this module. The other modules control the moving eye in two types of behavior, namely saccadic exploration and smooth pursuit. They are implemented as physically distinct printed circuit boards which can be enabled or disabled independently.

Figure 1: Oculo-motor system architecture

The light deflection device is made of two transparent and flat disks with a micro-prism grating on one side, mounted perpendicularly to the optical axis of a lens. Each disk can rotate without restriction around this axis, independently from the other. As a whole, each micro-prism grating acts on light essentially like a single large prism, except that it takes much less space (Fig. 2).
Although a single fixed prism cannot have an adjustable deflection angle, with two mobile prisms any magnitude and direction of deflection within some boundary can be selected, because the two contributions may combine either constructively or destructively depending on the relative prism orientations. The relationship between prism orientations and deflection angle has been derived in [9]. The advantage of this system over many other designs is that only two small passive optical elements have to move whereas most of the components are fixed, which enables fast movements and avoids electrical connections to moving parts. The drawback of this principle is that optical aberrations introduced by the prisms degrade image quality. However, when the device is used in conjunction with a typical electronic retina, this degradation is not limiting because these image sensors are characterized by a modest resolution due to focal-plane electronic processing.

Figure 2: A. Light deflection device principle. B. Replacement of conventional prisms by micro-prism gratings. C. Photograph of the prototype with motors and orientation sensors.

The saccadic exploration module (Fig. 1) consists of an additional retina fitted with a fixed wide-angle lens, and a neuromorphic saccadic control chip. The retina gathers low-resolution information from the whole visual scene accessible to the moving eye, determines the degree of interest, or saliency [10], of every region, and transmits the resulting saliency distribution to the saccadic control chip. In the current version of the system, the distribution of saliency is just the raw output image of the retina, whereby saliency is determined by the brightness of visual scene locations.
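How two rotating prisms reach any deflection inside a disc can be illustrated with a toy vector model (a simplification of the exact relationship derived in [9], which accounts for refraction effects): each grating contributes a fixed deflection magnitude ρ along its own orientation, and the two contributions add as 2-D vectors:

```python
import numpy as np

# Toy vector model of the two-prism deflector (simplified; see [9]).
# rho is an assumed per-prism deflection chosen so the maximum combined
# deflection is 45 degrees.
rho = 22.5

def orientations_for(target):
    """Prism orientations phi1, phi2 producing the 2-D deflection `target`."""
    m = np.linalg.norm(target)               # requested magnitude, <= 2*rho
    base = np.arctan2(target[1], target[0])  # requested direction
    half = np.arccos(np.clip(m/(2.0*rho), -1.0, 1.0))
    return base + half, base - half          # symmetric about the target

target = np.array([30.0, 10.0])              # desired deflection [deg, deg]
phi1, phi2 = orientations_for(target)
combined = rho*np.array([np.cos(phi1) + np.cos(phi2),
                         np.sin(phi1) + np.sin(phi2)])
print(np.allclose(combined, target))         # True: the target is reached
```

Aligned orientations give the maximum deflection 2ρ; opposed orientations cancel to zero, which is why every intermediate magnitude and direction is reachable.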
By inserting additional visual processing hardware between the retina and the saccadic control chip, it would be possible to generate interest for more sophisticated cues like edges, motion or specific shapes or patterns. The saccadic control chip (Fig. 3) determines the sequence and timing of an endless succession of quick jumps, or saccades, to be executed by the moving eye, in such a way that salient locations are attended longer and more frequently than less significant locations. The chip contains a 2D array of about 900 cells, which is called the visual map because its organization matches the topology of the visual field accessible by the moving eye. The chip also contains two 1D arrays of 64 cells called motor maps, which encode micro-prism orientations in the light deflection device. Each cell of the visual map is externally stimulated by a stream of brief pulses, the frequency of which encodes saliency. The cells integrate incoming pulses over time on a capacitor, thereby building up an internal voltage at a rate proportional to pulse frequency. A global comparison circuit, called a winner-take-all, selects the cell with the highest internal voltage. In the winning cell, a leakage mechanism slowly decreases the internal voltage over time, thereby eventually leading another cell to win. With this principle, any cell stimulated to some degree wins from time to time. The frequency of winning and the time elapsed until another cell wins increase with saliency. The visual map and the two motor maps are interconnected by a so-called network of links [9], which embodies the mapping between visual and motor spaces. This network consists of a pair of wires running from each visual cell to one cell in each of the two motor maps. Thereby, the winning cell in the visual map stimulates exactly one cell in each motor map.
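The cell dynamics described above can be mimicked in a few lines. This is a behavioural sketch with assumed charge and leak constants, not a circuit model: cells charge in proportion to saliency, the winner-take-all selects the largest voltage, and only the winner leaks, so brighter locations win more often:

```python
import numpy as np

# Behavioural sketch of the saccadic control chip's visual-map dynamics.
# The charge step and leak amount are assumed constants, not chip values.
saliency = np.array([1.0, 3.0, 0.5])      # pulse rates of three cells
v = np.zeros(3)                           # internal voltages
wins = np.zeros(3, dtype=int)
for _ in range(3000):
    v += 0.001*saliency                   # integration of incoming pulses
    w = int(np.argmax(v))                 # winner-take-all
    wins[w] += 1
    v[w] -= 0.01                          # leakage in the winning cell only
print(wins/wins.sum())  # every stimulated cell wins; shares grow with saliency
```

Because inflow is proportional to saliency while the leak per win is fixed, the fraction of time each cell spends winning increases monotonically with its stimulation, which is the attention-sharing behaviour measured in Section 3.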
The location of the active cell in a motor map encodes the orientation of a micro-prism grating; this representation convention is therefore called place coding [9]. The addresses of the active cells on the motor maps are transmitted to the moving eye, which triggers micro-prism displacements toward the specified orientations.

Figure 3: Schematic of the saccadic control chip

The smooth pursuit module consists of an EPROM chip and a neuromorphic incremental control chip (Fig. 1). The address-event stream delivered by the narrow-field retina is applied to the EPROM. The field of view of this retina has been divided up into eight angular sectors and a center region (Fig. 4A). The EPROM maps the addresses of pixels located in the same sector onto a common output address, thereby summing their spiking frequencies. The resulting address-event stream is applied to a topological map of eight cells constituting one of the inputs of the neuromorphic incremental control chip. If a single bright spot is focused on the retina away from the center, a large sum is produced in one or two neighboring cells of this map, whereas the other cells receive only background stimulation levels close to zero. Thereby, the angular position of the light spot is encoded by the location of the spot of activity on the map, in other words place coding. Objects other than light spots could be processed similarly after insertion of relevant detection hardware between the retina and the EPROM. The incremental control chip has two additional input maps representing the current orientations of the two prisms (Fig. 4B). These maps are connected to position sensors incorporated into the moving eye module (Fig. 1).
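The EPROM's many-to-one address mapping can be sketched as a pure lookup from pixel coordinates to one of nine output addresses (eight 45° sectors plus the center). The 35-by-35 geometry follows the retina; the center-region radius below is an assumed value, not taken from the paper:

```python
import numpy as np

# Sketch of the EPROM lookup: map a retina pixel address to one of eight
# angular sectors (0..7) or the center region (8). The center radius is
# an assumed, illustrative value.
def sector_of(row, col, size=35, r_center=4.0):
    y = row - (size - 1)/2.0              # offset from the retina center
    x = col - (size - 1)/2.0
    if np.hypot(x, y) < r_center:
        return 8                          # center region
    ang = np.arctan2(y, x) % (2*np.pi)
    return int(ang // (np.pi/4))          # eight 45-degree sectors

# Summing events per output address place-codes the spot direction:
print(sector_of(17, 17), sector_of(10, 27))
```

Since the mapping is fixed, it can be burned into a lookup table indexed by the pixel address, which is exactly the role the EPROM plays in the event stream.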
These additional inputs are necessary because the control actions depend not only on the location of the target on the retina, but also on the current prism orientations [9]. The control actions are computed by three networks of links relating the primary input maps to the final output map via an intermediate layer. The purpose of this intermediate stage is to break down the control function of three variables into three functions of only two variables, which can be implemented by a lower number of links [11]. As in the saccadic control chip, the mapping between the input and output spaces has been calculated numerically prior to chip fabrication, then hardwired as electrical connections. The final outputs of the chip are pulse streams encoding the direction and rate at which each micro-prism grating must rotate in order to shift the target toward the center of the retina. These pulses incrementally update the prism orientation settings at the input of the moving eye module (Fig. 1).

Since two different modules control the same moving eye, it is necessary to coordinate them in order to avoid conflicts. Saccadic module interventions occur whenever a saccade is generated, namely every 200-500 ms in typical operating conditions. At the instant a saccade is requested, the smooth pursuit module is shut off in order to prevent it from reacting against the saccade. A similar mechanism called saccadic suppression exists in biology. When the eye reaches the target location, control is left entirely to the smooth pursuit module until the next saccade is generated. Reciprocally, if an object tracked by the smooth pursuit module reaches the boundary of the global visual field, the incremental control chip sends a signal triggering a saccade back toward the center of the visual field, which is called nystagmus in biology.

Figure 4: A. Place-coded spot location obtained by summing the outputs of pixels belonging to the same sector. B. Architecture of the incremental control chip

The reason for splitting control into two modules is that visuo-motor coordinate mappings are very different for saccadic exploration and for smooth pursuit [9]. In the former case, visual input is related to the global field of view covered by the fixed wide-angle retina, and outputs are absolute micro-prism orientations. Saccade targets need not be initially visible to the moving eye. Since saccades are executed without permanent visual feedback, their accuracy is limited by the mapping hardwired in the control chip. Inversely, smooth pursuit is based on information extracted directly from the retina image of the moving eye. The outputs of the incremental control chip are small changes in micro-prism orientations instead of absolute positions. Thereby, the smooth pursuit module operates under closed-loop visual feedback, which confers on it high accuracy. However, operation under visual feedback is slower than open-loop saccadic movements, and smooth pursuit inherently applies only to a single target. Thus, the two control modules are very complementary in purpose and performance.

3 EXPERIMENTAL RESULTS

The present section reports both qualitative observations and quantitative measurements made on the oculo-motor system, because the complexity of its behavior is difficult to convey by just a few numbers. The measurement setup consisted of a black board on which high efficiency white light emitting diodes were mounted, the intensity of which could be set individually. The visual scene was placed about 70 cm away from the moving eye. The axes of the two retinas were parallel at a distance of 6.5 cm. It was necessary to take this spacing into account for the visuo-motor coordinate mapping.

The saliency distribution produced by the visual scene was measured by analyzing the output image of the wide-angle retina chip (Fig. 1). When a single torchlight was waved in front of the moving eye, it was found that the smooth pursuit system indeed keeps the center of gravity of the light source image at the center of the narrow field of view. The maximum tracking velocity depends on the intensity ratio, or contrast, between the light spot and the background. This behavior was expected because, by construction, the incremental control chip generates correction pulses at a rate proportional to the magnitude of its input signals. At the highest contrast, we were able to achieve a maximum tracking speed of 50°/s. For comparison, smooth pursuit in humans can in principle reach up to 180°/s, but tracking is accurate only up to about 30°/s [7]. When shown two fixed light spots, the moving eye jumps from one to the other periodically.
The saliency distribution produced by the visual scene was measured by analyzing the output image of the wide-angle retina chip (Fig. 1). When a single torchlight was waved in front of the moving eye, it was found that the smooth pursuit system indeed keeps the center of gravity of the light source image at the center of the narrow field of view. The maximum tracking velocity depends on the intensity ratio-contrast-between the light spot and the background. This behavior was expected because by construction, the incremental control chip generates correction pulses at a rate proportional to the magnitude of its input signals. At the highest contrast, we were able to achieve a maximum tracking speed of 50 ° Is. For comparison, smooth pursuit in humans can in principle reach up to 180 0 /s, but tracking is accurate only up to about 30 0 /s [7]. When shown two fixed light spots, the moving eye jumps from one to the other periodically. An Oculo-Motor System with Multi-Chip Neuromorphic Analog VLSI Control 715 The relative time spent on each light source depends on their intensity ratio. The duty cycle has been measured for ratios ranging from 0.1 to 10 (Fig. SA). It is close to SO% for equal saliency, and tends toward a ratio of 10 to 1 in favor of the brightest spot at the extremities of the range. The delay between onset of a saccade and stabilization on the target ranges from 4S ms to 100 ms. The delay is not constant because it depends to some extent on saccade magnitude, and because of occasional mechanical slipping at the onset. In humans, the duration of saccades tends to be proportional to their amplitude, and ranges between 2S ms and 200ms. A. 100 ~ 80 40 20 o 0.1 • saccades duty cycle I • I ! • I J i i I saliency ratio i I 10 B. ~ 60 ~ 50 E 40 :; 30 § 20 ~1O ~ 0 .8 0.1 background observation time . • 10 100 spot intensity I total background intensity [%] Figure S: Measured data plots. A. Gaze time sharing between two salient spots versus saliency ratio. B. 
Gaze time on background versus spot-to-background intensity ratio. When more than two spots are turned on, the saccadic exploration is not obviously periodic anymore, but the eye keeps spending most time on the light spots, with a noticeable preference for larger intensities. This behavior is consistent with measurements previously made on the saccadic control chip alone under electrical stimulation [9]. Saccades towards locations in the background are rare and brief if the intensity ratio between the light sources and the background is high enough. This phenomenon has been studied quantitatively by measuring the fraction of time spent on background locations for different light source intensities (Fig. SB). The quantity on the horizontal axis of the plot is the ratio between the total intensity in light spots and the total background intensity. These two quantities are measured by summing the outputs of wide-angle retina pixels belonging to the light spot images and to the background respectively. It can be seen that if this ratio is above 1, less than 10% of the time is spent scanning the background. Open-loop saccade accuracy has been evaluated by switching off the smooth pursuit module, and measuring the error vector between the center of gravity of the light spot and the center of the narrow-field retina after each saccade, for six different light spots spread over the field of view. The error vectors were found to be always less than 2 0 in magnitude, with different orientations in each case. Whenever the moving eye returned to a same light spot, the error vector was the same. This shows that the residual error is not due to random noise, but to the limited accuracy of visuo-motor mapping within the saccadic control chip. The magnitude of the error is always low enough that the target light spot is completely visible by the moving eye, thereby ensuring that the smooth pursuit module can indeed correct the error when enabled. 
4 CONCLUSION
The oculo-motor system described herein performs as intended, thereby demonstrating the value of a neuromorphic engineering approach in the case of a relatively complex task involving mechanical and optical components. This system provides an experimental platform for studying active vision, whereby a visual system acts on itself in order to facilitate perception of its surroundings.
O. Landolt and S. Gyger
Besides saccadic exploration and smooth pursuit, a moving eye can be exploited to improve vision in many other ways. For instance, resolution shortcomings in retinas incorporating only a modest number of pixels can be overcome by continuously sweeping the field of view back and forth, thereby providing continuous information in space, although not simultaneously in time. In binocular vision, 3D information perception by stereopsis is also made easier if the fields of view can be aligned by vergence control [12]. Besides active vision, the oculo-motor system also lends itself as a framework for testing and demonstrating other analog VLSI vision circuits. As already mentioned, due to its modular architecture, it is possible to insert additional visual processing chips either in the saccadic exploration module, or in the smooth pursuit module, in order to make the current light-source oriented system suitable for operation in natural visual environments.
Acknowledgments
The authors wish to express their gratitude to all their colleagues at CSEM who contributed to this work. Special thanks are due to Patrick Debergh for the micro-prism light deflection concept, to Friedrich Heitger for designing and building the mechanical device, and to Edoardo Franzi for designing and building the related electronic interface. Thanks are also due to Arnaud Tisserand, Friedrich Heitger, Eric Vittoz, Reid Harrison, Theron Stanford, and Edoardo Franzi for helpful comments on the manuscript. Mr.
Roland Lagger, from Portescap, La Chaux-de-Fonds, Switzerland, provided friendly assistance in a critical mechanical assembly step.
References
[1] C. Mead. Analog VLSI and Neural Systems. Addison-Wesley, 1989.
[2] T.S. Lande, editor. Neuromorphic Systems Engineering. Kluwer Academic Publishers, Dordrecht, 1998.
[3] K. Boahen. Retinomorphic vision systems II: Communication channel design. In IEEE Int. Symp. Circuits and Systems (ISCAS'96), Atlanta, May 1996.
[4] A. Mortara, E. Vittoz, and P. Venier. A communication scheme for analog VLSI perceptive systems. IEEE Journal of Solid-State Circuits, 30, June 1995.
[5] C.M. Higgins. Multi-chip neuromorphic motion processing. In Conference on Advanced Research in VLSI, Atlanta, March 1999.
[6] T.K. Horiuchi, B. Bishofberger, and C. Koch. An analog VLSI saccadic eye movement system. In Advances in Neural Information Processing Systems 6, 1994.
[7] T.K. Horiuchi. Analog VLSI-Based, Neuromorphic Sensorimotor Systems: Modeling the Primate Oculomotor System. PhD thesis, Caltech, Pasadena, 1997.
[8] P. Venier. A contrast sensitive silicon retina based on conductance modulation in a diffusion network. In 6th Int. Conf. Microelectronics for Neural Networks and Fuzzy Systems (MicroNeuro'97), Dresden, Sept 1997.
[9] O. Landolt. Place Coding in Analog VLSI - A Neuromorphic Approach to Computation. Kluwer Academic Publishers, Dordrecht, 1998.
[10] T.G. Morris and S.P. DeWeerth. Analog VLSI excitatory feedback circuits for attentional shifts and tracking. Analog Integrated Circuits and Signal Processing, 13, May-June 1997.
[11] O. Landolt. Place coding in analog VLSI and its application to the control of a light deflection system. In MicroNeuro'97, Dresden, Sept 1997.
[12] M. Mahowald. An Analog VLSI System for Stereoscopic Vision. Kluwer Academic Publishers, Boston, 1994.
1999
Understanding stepwise generalization of Support Vector Machines: a toy model
Sebastian Risau-Gusman and Mirta B. Gordon
DRFMC/SPSMS, CEA Grenoble, 17 av. des Martyrs, 38054 Grenoble cedex 09, France
Abstract
In this article we study the effects of introducing structure in the input distribution of the data to be learnt by a simple perceptron. We determine the learning curves within the framework of Statistical Mechanics. Stepwise generalization occurs as a function of the number of examples when the distribution of patterns is highly anisotropic. Although extremely simple, the model seems to capture the relevant features of a class of Support Vector Machines which was recently shown to present this behavior.
1 Introduction
A new approach to learning has recently been proposed as an alternative to feedforward neural networks: the Support Vector Machines (SVM) [1]. Instead of trying to learn a nonlinear mapping between the input patterns and internal representations, as in multilayered perceptrons, the SVMs choose a priori a nonlinear kernel that transforms the input space into a high dimensional feature space. In binary classification tasks like those considered in the present paper, the SVMs look for linear separation with optimal margin in feature space. The main advantage of SVMs is that learning becomes a convex optimization problem. The difficulty of having many local minima, which hinders the training of multilayered neural networks, is thus avoided. One of the questions raised by this approach is why SVMs do not overfit the data in spite of the extremely large dimensions of the feature spaces considered. Two recent theoretical papers [2, 3] studied a family of SVMs with the tools of Statistical Mechanics, predicting typical properties in the limit of large dimensional spaces. Both papers considered mappings generated by polynomial kernels, and more specifically quadratic ones.
In these, the input vectors $x \in \mathbb{R}^N$ are transformed to $N(N+1)/2$-dimensional feature vectors $\Phi(x)$. More precisely, the mapping $\Phi_1(x) = (x, x_1 x, x_2 x, \ldots, x_k x)$ has been studied in [3] as a function of $k$, the number of quadratic features, and $\Phi_2(x) = (x, x_1 x/N, x_2 x/N, \ldots, x_N x/N)$ has been considered in [2], leading to different results. These mappings are particular cases of quadratic kernels. In particular, in the case of learning quadratically separable tasks with mapping $\Phi_2$, the generalization error decreases up to a lower bound for a number of examples proportional to $N$, followed by a further decrease if the number of examples increases proportionally to the dimension of the feature space, i.e. to $N^2$. In fact, this behavior is not specific to the SVMs. It also arises in the typical case of Gibbs learning (defined below) in quadratic feature spaces [4]: on increasing the training set size, the quadratic components of the discriminating surface are learnt after the linear ones. In the case of learning linearly separable tasks in quadratic feature spaces, the effect of overfitting is harmless, as it only slows down the decrease of the generalization error with the training set size. In the case of mapping $\Phi_1$, overfitting is dramatic, as the generalization error at any given training set size increases with the number $k$ of features. The aim of the present paper is to understand the influence of the mapping scaling factor on the generalization performance of the SVMs. To this end, it is worth remarking that the features $\Phi_2$ may be obtained by compressing the quadratic subspace of $\Phi_1$ by a fixed factor. In order to mimic this contraction, we consider a linearly separable task in which the input patterns have a highly anisotropic distribution, so that the variance in one subspace is much smaller than in the orthogonal directions.
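The two quadratic mappings can be made concrete with a short sketch (function names are ours; the quadratic part is written exactly as in the paper, so it contains redundant symmetric entries, of which only $N(N+1)/2$ are distinct):

```python
import numpy as np

def phi1(x, k):
    """Unscaled mapping Phi_1(x) = (x, x_1 x, ..., x_k x) studied in [3]."""
    return np.concatenate([x] + [x[i] * x for i in range(k)])

def phi2(x):
    """Scaled mapping Phi_2(x) = (x, x_1 x / N, ..., x_N x / N) from [2]:
    the quadratic subspace is compressed by the fixed factor 1/N."""
    N = len(x)
    return np.concatenate([x] + [x[i] * x / N for i in range(N)])

x = np.array([1.0, 2.0, 3.0])
print(len(phi1(x, k=2)))  # → 9: N linear + k*N quadratic components
print(len(phi2(x)))       # → 12: N linear + N^2 quadratic components
```

The only difference between the two feature spaces is the $1/N$ compression of the quadratic block, which is exactly the contraction the toy model below mimics with an anisotropic input distribution.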
We show that in this simple toy model, the generalization error as a function of the training set size exhibits a crossover between two different behaviors: a rapid decrease corresponding to learning the components in the uncompressed space, followed by a slow improvement in which mainly the components in the compressed space are learnt. The latter would correspond, in this highly stylized model, to learning the scaled quadratic features in the SVM with mapping $\Phi_2$. The paper is organized as follows: after a short presentation of the model, we describe the main steps of the Statistical Mechanics calculation. The order parameters characterizing the properties of the learning process are defined, and their evolution as a function of the training set size is analyzed. The two regimes of the generalization error are described, and we determine the training set size per input dimension at the crossover, as a function of the pertinent parameters. Finally we discuss our results, and their relevance to the understanding of the generalization properties of SVMs.
2 The model
We consider the problem of learning a binary classification task from examples. The training data set $\mathcal{D}_\alpha$ contains $P = \alpha N$ $N$-dimensional patterns $(\xi^\mu, \tau^\mu)$ ($\mu = 1, \ldots, P$), where $\tau^\mu = \mathrm{sign}(\xi^\mu \cdot w^*)$ is given by a teacher of weights $w^* = (w_1^*, w_2^*, \ldots, w_N^*)$. Without any loss of generality we consider normalized teachers: $w^* \cdot w^* = N$. We assume that the components $\xi_i$ ($i = 1, \ldots, N$) of the input patterns $\xi$ are independent, identically distributed random variables drawn from a zero-mean gaussian distribution, with variance $\sigma^2$ along $N_c$ directions and unit variance in the $N_u$ remaining ones ($N_c + N_u = N$):

$$p(\xi) = \prod_{i \in N_c} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{\xi_i^2}{2\sigma^2}\right) \prod_{i \in N_u} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{\xi_i^2}{2}\right) \qquad (1)$$

We take $\sigma < 1$ without any loss of generality, as the case $\sigma > 1$ may be deduced from the former through a straightforward rescaling of $N_c$ and $N_u$.
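The model of Eq. (1) is easy to simulate; a minimal sketch (our function names) that draws anisotropic patterns and the corresponding teacher labels $\tau^\mu = \mathrm{sign}(\xi^\mu \cdot w^*)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_patterns(P, N, n_c, sigma):
    """Draw P patterns from the anisotropic Gaussian of Eq. (1): variance
    sigma^2 along the first N_c = n_c * N (compressed) directions and unit
    variance along the N_u remaining ones."""
    N_c = int(n_c * N)
    xi = rng.standard_normal((P, N))
    xi[:, :N_c] *= sigma  # compress the first N_c directions
    return xi

def teacher_labels(xi, w_star):
    """Labels tau^mu = sign(xi^mu . w*)."""
    return np.sign(xi @ w_star)

N = 10
w_star = rng.standard_normal(N)
w_star *= np.sqrt(N) / np.linalg.norm(w_star)  # normalization w*.w* = N
xi = sample_patterns(P=50, N=N, n_c=0.5, sigma=0.1)
tau = teacher_labels(xi, w_star)
```

With $\sigma \ll 1$ the compressed components contribute little to $\xi \cdot w^*$, which is the mechanism behind the two learning regimes analyzed below.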
Hereafter, the subspace of dimension $N_c$ and variance $\sigma^2$ will be called the compressed subspace. The corresponding orthogonal subspace, of dimension $N_u = N - N_c$, will be called the uncompressed subspace. We study the typical generalization error of a student perceptron learning the classification task, using the tools of Statistical Mechanics. The pertinent cost function is the number of misclassified patterns:

$$E(w; \mathcal{D}_\alpha) = \sum_{\mu=1}^{P} \Theta(-\tau^\mu\, \xi^\mu \cdot w) \qquad (2)$$

The weight vectors in version space correspond to a vanishing cost (2). Choosing a $w$ at random from the a posteriori distribution

$$P(w \mid \mathcal{D}_\alpha) = Z^{-1}\, P_0(w) \exp\left(-\beta E(w; \mathcal{D}_\alpha)\right) \qquad (3)$$

in the limit $\beta \to \infty$ is called Gibbs learning. In eq. (3), $\beta$ is equivalent to an inverse temperature in the Statistical Mechanics formulation, the cost (2) being the energy function. We assume that $P_0$, the a priori distribution of the weights, is uniform on the hypersphere of radius $\sqrt{N}$:

$$P_0(w) = (2\pi e)^{-N/2}\, \delta(w \cdot w - N) \qquad (4)$$

The normalization constant $(2\pi e)^{N/2}$ is the leading order term of the hypersphere's surface in $N$-dimensional space. $Z$ is the partition function ensuring the correct normalization of $P(w \mid \mathcal{D}_\alpha)$:

$$Z(\beta; \mathcal{D}_\alpha) = \int dw\, P_0(w) \exp\left(-\beta E(w; \mathcal{D}_\alpha)\right) \qquad (5)$$

In general, the properties of the student are related to those of the free energy $F(\beta; \mathcal{D}_\alpha) = -\ln Z(\beta; \mathcal{D}_\alpha)/\beta$. In the limit $N \to \infty$ with the training set size per input dimension $\alpha \equiv P/N$ constant, the properties of the student weights become independent of the particular training set $\mathcal{D}_\alpha$. They are deduced from the averaged free energy per degree of freedom, calculated using the replica trick:

$$f = -\lim_{N \to \infty} \frac{\overline{\ln Z}}{\beta N}, \qquad \overline{\ln Z} = \lim_{n \to 0} \frac{\overline{Z^n} - 1}{n} \qquad (6)$$

where the overline represents the average over $\mathcal{D}_\alpha$, composed of patterns selected according to (1). In the case of Gibbs learning, the typical behavior of any intensive quantity is obtained in the zero temperature limit $\beta \to \infty$. In this limit, only error-free solutions, with vanishing cost, have non-vanishing posterior probability (3).
Thus, Gibbs learning corresponds to picking at random a student in version space, i.e. a vector $w$ that classifies correctly the training set $\mathcal{D}_\alpha$, with a probability proportional to $P_0(w)$. In the case of an isotropic pattern distribution, which corresponds to $\sigma = 1$ in (1), the properties of cost function (2) have been extensively studied [5]. The cases of patterns drawn from two gaussian clusters in which the symmetry axis of the clusters is the same [6] or different [7] from the teacher's axis have recently been addressed. Here we consider the problem where, instead of having a single direction along which the patterns' distribution is contracted (or expanded), there is a finite fraction of compressed dimensions. In this case, all the properties of the student's perceptron may be expressed in terms of the following order parameters, which have to satisfy the corresponding extremum conditions of the free energy:

$$q_c^{ab} = \frac{1}{N} \Big\langle \sum_{i \in N_c} w_i^a w_i^b \Big\rangle \qquad (7)$$

$$q_u^{ab} = \frac{1}{N} \Big\langle \sum_{i \in N_u} w_i^a w_i^b \Big\rangle \qquad (8)$$

$$R_c^{a} = \frac{1}{N} \Big\langle \sum_{i \in N_c} w_i^a w_i^* \Big\rangle \qquad (9)$$

$$R_u^{a} = \frac{1}{N} \Big\langle \sum_{i \in N_u} w_i^a w_i^* \Big\rangle \qquad (10)$$

$$Q^{a} = \frac{1}{N} \Big\langle \sum_{i \in N_c} (w_i^a)^2 \Big\rangle \qquad (11)$$

where $\langle \cdots \rangle$ indicates the average over the posterior (3); $a, b$ are replica indices, and the subscripts $c$ and $u$ stand for compressed and uncompressed respectively. Notice that we do not impose that $Q^a$, the typical squared norm of the student's components in the compressed subspace, be equal to the corresponding teacher's norm $Q^* = \sum_{i \in N_c} (w_i^*)^2 / N$.
3 Order parameters and learning curves
Assuming that the order parameters are invariant under permutation of replicas, we can drop the replica indices in equations (7) to (11). We expect that this hypothesis of replica symmetry is consistent, as it is in other cases of perceptrons learning realizable tasks. The problem is thus reduced to the determination of five order parameters.
Their meaning becomes clearer if we consider the following combinations:

$$\hat{q}_c = \frac{q_c}{Q} \qquad (12)$$

$$\hat{q}_u = \frac{q_u}{1 - Q} \qquad (13)$$

$$\hat{R}_c = \frac{R_c}{\sqrt{Q\,Q^*}} \qquad (14)$$

$$\hat{R}_u = \frac{R_u}{\sqrt{(1 - Q)(1 - Q^*)}} \qquad (15)$$

$$Q = \frac{1}{N} \Big\langle \sum_{i \in N_c} w_i^2 \Big\rangle \qquad (16)$$

$\hat{q}_c$ and $\hat{q}_u$ are the typical overlaps between the components of two student vectors in the compressed and the uncompressed subspaces respectively. Similarly, $\hat{R}_c$ and $\hat{R}_u$ are the corresponding overlaps between a typical student and the teacher. In terms of this set of parameters, the typical generalization error is $\epsilon_g = (1/\pi) \arccos R$ with

$$R = \frac{\sigma^2 \hat{R}_c \sqrt{Q\,Q^*} + \hat{R}_u \sqrt{(1 - Q)(1 - Q^*)}}{\sqrt{\sigma^2 Q + (1 - Q)}\; \sqrt{\sigma^2 Q^* + (1 - Q^*)}} \qquad (17)$$

Given $\alpha$, the general solution to the extremum conditions depends on the three parameters of the problem, namely $\sigma$, $Q^*$ and $n_c \equiv N_c/N$. An interesting case is the one where the teacher's anisotropy is consistent with that of the pattern's distribution, i.e. $Q^* = n_c$. In this case, it is easy to show that $Q = Q^*$, $\hat{q}_c = \hat{R}_c$ and $\hat{q}_u = \hat{R}_u$. Thus,

$$R = \frac{n_u \hat{R}_u + \sigma^2 n_c \hat{R}_c}{n_u + \sigma^2 n_c} \qquad (18)$$

where $n_u \equiv N_u/N$, and $\hat{R}_c$ and $\hat{R}_u$ are given by the following equations:

$$\frac{\hat{R}_c}{1 - \hat{R}_c} = \frac{\sigma^2 \alpha}{\sigma^2 n_c + n_u}\, \frac{1}{\pi} \int Dt\, \frac{\exp(-R t^2/2)}{\sqrt{1 - R}\; H(t\sqrt{R})} \qquad (19)$$

Figure 1: Order parameters and generalization error for the case $Q^* = n_c = 0.9$, $\sigma^2 = 10^{-2}$. The curves for the case of spherically distributed patterns are shown for comparison. The inset shows the first step of learning and its plateau (see text).

$$\frac{\hat{R}_u}{1 - \hat{R}_u} = \frac{\alpha}{\sigma^2 n_c + n_u}\, \frac{1}{\pi} \int Dt\, \frac{\exp(-R t^2/2)}{\sqrt{1 - R}\; H(t\sqrt{R})} \qquad (20)$$

where $Dt = dt\, e^{-t^2/2}/\sqrt{2\pi}$ and $H(x) = \int_x^\infty Dt$.
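The relations $\epsilon_g = (1/\pi)\arccos R$ and Eq. (18) translate directly into code; a small illustrative sketch (our function names) for the case $Q^* = n_c$, taking the hatted overlaps as inputs:

```python
import numpy as np

def combined_overlap(R_c, R_u, n_c, sigma2):
    """Eq. (18): R = (n_u R_u + sigma^2 n_c R_c) / (n_u + sigma^2 n_c),
    where R_c and R_u are the hatted student-teacher overlaps."""
    n_u = 1.0 - n_c
    return (n_u * R_u + sigma2 * n_c * R_c) / (n_u + sigma2 * n_c)

def eps_g(R):
    """Typical generalization error: eps_g = (1/pi) * arccos(R)."""
    return np.arccos(R) / np.pi

# On the intermediate plateau (uncompressed components learned, R_u = 1,
# compressed components still unlearned, R_c = 0) the error stays nonzero:
print(eps_g(combined_overlap(R_c=0.0, R_u=1.0, n_c=0.9, sigma2=1e-2)))
```

The residual plateau error comes entirely from the $\sigma^2 n_c$ weight of the still-unlearned compressed directions in the combined overlap.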
If $\sigma^2 = 1$, we recover the equations corresponding to Gibbs learning of isotropic pattern distributions [5]. The order parameters are represented as a function of $\alpha$ on figure 1, for a particular choice of $n_c$ and $\sigma$. $\hat{R}_u$ grows much faster than $\hat{R}_c$, meaning that it is easier to learn the components of the uncompressed space. As a result, $R$ (and therefore the generalization error $\epsilon_g$) presents a crossover between two behaviors. At small $\alpha$, both $\hat{R}_u \ll 1$ and $\hat{R}_c \ll 1$, so that $R(\alpha, \sigma^2) = R_G\!\left(\alpha (n_u + \sigma^4 n_c)/(n_u + \sigma^2 n_c)^2\right)$, where $R_G$ is the overlap for Gibbs learning with an isotropic ($\sigma^2 = 1$) distribution [5]. Learning the anisotropic distribution is faster (in $\alpha$) than learning the isotropic one. If $\sigma \ll 1$ the anisotropy is very large and $R$ increases like $R_G$ but with an effective training set size per input dimension $\sim \alpha/n_u > \alpha$. On increasing $\alpha$, there is an intermediate regime in which $\hat{R}_u$ increases but $\hat{R}_c \ll 1$, so that $R \simeq \hat{R}_u n_u/(n_u + \sigma^2 n_c)$. The corresponding generalization error seems to reach a plateau corresponding to $\hat{R}_u = 1$ and $\hat{R}_c = 0$. At $\alpha \gg 1$, $R(\alpha, \sigma^2) \simeq R_G(\alpha)$: the asymptotic behavior is independent of the details of the distribution, as in [7]. The crossover between these two regimes, when $\sigma^2 \ll 1$, occurs at $\alpha_0 \approx \sqrt{2(n_u + \sigma^2 n_c)/(\sigma^2 n_c)}$. The cases $Q^* = 1$ and $Q^* = 0$ are also of interest. $Q^* = 1$ corresponds to a teacher having all the weight components in the compressed subspace, whereas $Q^* = 0$
Figure 2: Generalization errors as a function of $\alpha$ for different teachers ($Q^* = 1$, $Q^* = 0.9$ and $Q^* = 0$), for the case $n_c = 0.9$ and $\sigma^2 = 10^{-2}$. The curve for spherically distributed patterns [5] is included for comparison. The inset shows the large-$\alpha$ behaviors.
corresponds to a teacher orthogonal to the compressed subspace, i.e. with all the components in the uncompressed subspace. They correspond respectively to tasks where either the uncompressed or the compressed components are irrelevant for the patterns' classification. In Figure 2 we show all the generalization error curves, including the generalization error $\epsilon_g^G$ for a uniform distribution [5] for comparison. The behaviour of $\epsilon_g(\alpha)$ is very sensitive to the value of $Q^*$. If $Q^* = 1$, the teacher is in the compressed subspace, where learning is difficult. Consequently, $\epsilon_g(\alpha) > \epsilon_g^G(\alpha)$, as expected. On the contrary, for $Q^* = 0$, only the components in the uncompressed space are relevant for the classification task. In this subspace learning is easy and $\epsilon_g(\alpha) < \epsilon_g^G(\alpha)$. At $Q^* \neq 0, 1$ there is a crossover between these regimes, as already discussed. All the curves merge in the asymptotic regime $\alpha \to \infty$, as may be seen in the inset of Figure 2.
4 Discussion
We analyzed the typical learning behavior of a toy perceptron model that allows us to clarify some aspects of generalization in high dimensional feature spaces. In particular, it captures an element essential to obtain stepwise learning, which is shown to stem from the compression of high order features. The components in the compressed space are more difficult to learn than those not compressed. Thus, if the training set is not large enough, mainly the latter are learnt. Our results allow us to understand the importance of the scaling of high order features in the SVM kernels. In fact, with SVMs one has to choose a priori the kernel that maps the input space to the feature space. If high order features are conveniently compressed, hierarchical learning occurs. That is, low order features are learnt first; higher order features are only learnt if the training set is large enough.
In the cases where the higher order features are irrelevant, it is likely that they will not hinder the learning process. This interesting behavior allows overfitting to be avoided. Computer simulations currently in progress, of SVMs generated by quadratic kernels with and without the $1/N$ scaling, show a behavior consistent with the theoretical predictions [2, 3]. These may be understood with the present toy model.
References
[1] V. Vapnik (1995) The Nature of Statistical Learning Theory. Springer-Verlag, New York.
[2] R. Dietrich, M. Opper, and H. Sompolinsky (1999) Statistical Mechanics of Support Vector Networks. Phys. Rev. Lett. 82, 2975-2978.
[3] A. Buhot and M. B. Gordon (1999) Statistical mechanics of support vector machines. ESANN'99 - European Symposium on Artificial Neural Networks Proceedings, Michel Verleysen, ed., 201-206; A. Buhot and M. B. Gordon (1998) Learning properties of support vector machines. cond-mat/9802179.
[4] H. Yoon and J.-H. Oh (1998) Learning of higher order perceptrons with tunable complexities. J. Phys. A: Math. Gen. 31, 7771-7784.
[5] G. Gyorgyi and N. Tishby (1990) Statistical Theory of Learning a Rule. In Neural Networks and Spin Glasses (W. K. Theumann and R. Koberle, eds., World Scientific), 3-36.
[6] R. Meir (1995) Empirical risk minimization. A case study. Neural Comp. 7, 144-157.
[7] C. Marangi, M. Biehl, S. A. Solla (1995) Supervised Learning from Clustered Examples. Europhys. Lett. 30 (2), 117-122.
A Geometric Interpretation of ν-SVM Classifiers
David J. Crisp, Centre for Sensor Signal and Information Processing, Department of Electrical Engineering, University of Adelaide, South Australia, dcrisp@eleceng.adelaide.edu.au
Christopher J.C. Burges, Advanced Technologies, Bell Laboratories, Lucent Technologies, Holmdel, New Jersey, burges@lucent.com
Abstract
We show that the recently proposed variant of the Support Vector Machine (SVM) algorithm, known as ν-SVM, can be interpreted as a maximal separation between subsets of the convex hulls of the data, which we call soft convex hulls. The soft convex hulls are controlled by choice of the parameter ν. If the intersection of the convex hulls is empty, the hyperplane is positioned halfway between them such that the distance between convex hulls, measured along the normal, is maximized; and if it is not, the hyperplane's normal is similarly determined by the soft convex hulls, but its position (perpendicular distance from the origin) is adjusted to minimize the error sum. The proposed geometric interpretation of ν-SVM also leads to necessary and sufficient conditions for the existence of a choice of ν for which the ν-SVM solution is nontrivial.
1 Introduction
Recently, Schölkopf et al. [1] introduced a new class of SVM algorithms, called ν-SVM, for both regression estimation and pattern recognition. The basic idea is to remove the user-chosen error penalty factor $C$ that appears in SVM algorithms by introducing a new variable $\rho$ which, in the pattern recognition case, adds another degree of freedom to the margin. For a given normal to the separating hyperplane, the size of the margin increases linearly with $\rho$. It turns out that by adding $\rho$ to the primal objective function with coefficient $-\nu$, $\nu \ge 0$, the variable $C$ can be absorbed, and the behaviour of the resulting SVM - the number of margin errors and number of support vectors - can to some extent be controlled by setting $\nu$.
Moreover, the decision function produced by ν-SVM can also be produced by the original SVM algorithm with a suitable choice of $C$. In this paper we show that ν-SVM, for the pattern recognition case, has a clear geometric interpretation, which also leads to necessary and sufficient conditions for the existence of a nontrivial solution to the ν-SVM problem. All our considerations apply to feature space, after the mapping of the data induced by some kernel. We adopt the usual notation: $w$ is the normal to the separating hyperplane, the mapped data is denoted by $x_i \in \mathbb{R}^N$, $i = 1, \ldots, l$, with corresponding labels $y_i \in \{\pm 1\}$, $b, \rho$ are scalars, and $\xi_i$, $i = 1, \ldots, l$ are positive scalar slack variables.
2 ν-SVM Classifiers
The ν-SVM formulation, as given in [1], is as follows: minimize

$$P' = \frac{1}{2}\|w'\|^2 - \nu \rho' + \frac{1}{l} \sum_i \xi_i \qquad (1)$$

with respect to $w', b', \rho', \xi_i$, subject to:

$$y_i(w' \cdot x_i + b') \ge \rho' - \xi_i, \quad \xi_i \ge 0, \quad \rho' \ge 0 \qquad (2)$$

Here ν is a user-chosen parameter between 0 and 1. The decision function (whose sign determines the label given to a test point $x$) is then:

$$f'(x) = w' \cdot x + b' \qquad (3)$$

The Wolfe dual of this problem is: maximize

$$P'_D = -\frac{1}{2} \sum_{ij} \alpha_i \alpha_j y_i y_j\, x_i \cdot x_j \qquad (4)$$

subject to $0 \le \alpha_i \le 1/l$, $\sum_i \alpha_i y_i = 0$ and $\sum_i \alpha_i \ge \nu$, with $w'$ given by $w' = \sum_i \alpha_i y_i x_i$. Schölkopf et al. [1] show that ν is an upper bound on the fraction of margin errors¹, a lower bound on the fraction of support vectors, and that both of these quantities approach ν asymptotically. Note that the point $w' = b' = \rho' = \xi_i = 0$ is feasible, and that at this point, $P' = 0$. Thus any solution of interest must have $P' \le 0$. Furthermore, if $\nu \rho' = 0$, the optimal solution is at $w' = b' = \rho' = \xi_i = 0$². Thus we can assume that $\nu \rho' > 0$ (and therefore $\nu > 0$) always. Given this, the constraint $\rho' \ge 0$ is in fact redundant: a negative value of $\rho'$ cannot appear in a solution (to the problem with this constraint removed), since the above (feasible) solution (with $\rho' = 0$) gives a lower value for $P'$.
Thus below we replace the constraints (2) by

$$y_i(w' \cdot x_i + b') \ge \rho' - \xi_i, \quad \xi_i \ge 0 \qquad (5)$$

2.1 A Reparameterization of ν-SVM
We reparameterize the primal problem by dividing the objective function $P'$ by $\nu^2/2$, the constraints (5) by ν, and by making the following substitutions:

$$\mu = \frac{2}{\nu l}, \quad w = \frac{w'}{\nu}, \quad b = \frac{b'}{\nu}, \quad \rho = \frac{\rho'}{\nu}, \quad \xi_i = \frac{\xi_i'}{\nu} \qquad (6)$$

¹A margin error $x_i$ is defined to be any point for which $\xi_i > 0$ (see [1]).
²In fact we can prove that, even if the optimal solution is not unique, the global solutions still all have $w = 0$: see Burges and Crisp, "Uniqueness of the SVM Solution" in this volume.
This gives the equivalent formulation: minimize

$$P = \|w\|^2 - 2\rho + \mu \sum_i \xi_i \qquad (7)$$

with respect to $w, b, \rho, \xi_i$, subject to:

$$y_i(w \cdot x_i + b) \ge \rho - \xi_i, \quad \xi_i \ge 0 \qquad (8)$$

If we use as decision function $f(x) = f'(x)/\nu$, the formulation is exactly equivalent, although both primal and dual appear different. The dual problem is now: minimize

$$P_D = \frac{1}{4} \sum_{ij} \alpha_i \alpha_j y_i y_j\, x_i \cdot x_j \qquad (9)$$

with respect to the $\alpha_i$, subject to:

$$\sum_i \alpha_i y_i = 0, \quad \sum_i \alpha_i = 2, \quad 0 \le \alpha_i \le \mu \qquad (10)$$

with $w$ given by $w = \frac{1}{2} \sum_i \alpha_i y_i x_i$. In the following, we will refer to the reparameterized version of ν-SVM given above as μ-SVM, although we emphasize that it describes the same problem.
3 A Geometric Interpretation of ν-SVM
In the separable case, it is clear that the optimal separating hyperplane is just that hyperplane which bisects the shortest vector joining the convex hulls of the positive and negative polarity points³. We now show that this geometric interpretation can be extended to the case of ν-SVM for both separable and nonseparable cases.
3.1 The Separable Case
We start by giving the analysis for the separable case. The convex hulls of the two classes are

$$H_+ = \Big\{ \sum_{i: y_i = +1} \alpha_i x_i \;\Big|\; \sum_{i: y_i = +1} \alpha_i = 1,\ \alpha_i \ge 0 \Big\} \qquad (11)$$

and

$$H_- = \Big\{ \sum_{i: y_i = -1} \alpha_i x_i \;\Big|\; \sum_{i: y_i = -1} \alpha_i = 1,\ \alpha_i \ge 0 \Big\} \qquad (12)$$

Finding the two closest points can be written as the following optimization problem:

$$\min_\alpha \Big\| \sum_{i: y_i = +1} \alpha_i x_i - \sum_{i: y_i = -1} \alpha_i x_i \Big\|^2 \qquad (13)$$

³See, for example, K. Bennett, 1997, in http://www.rpi.edu/bennek/svmtalk.ps (also, to appear).
A Geometric Interpretation of v-SVM Classifiers 247 subject to: L ai = 1, L ai = 1, a ' > 0 t _ (14) i:y;=+l i:y;=-l Taking the decision boundary j(x) = w· x + b = 0 to be the perpendicular bisector of the line segment joining the two closest points means that at the solution, (15) and b = -w· p, where (16) Thus w lies along the line segment (and is half its size) and p is the midpoint of the line segment. By rescaling the objective function and using the class labels Yi = ±1 we can rewrite this as4 : subject to The associated decision function is j( x) = w . x + b where w = ~ L:i aiYiXi, p = ~ L:i aiXi and b = -w.p = -t L:ij aiYiajXi . Xj. 3.2 The Connection with v-SVM Consider now the two sets of points defined by: H+ JJ = { '. ~ aiXil .. ~ ai = 1, 0 ~ ai ~ fL} I.y;-+l I.y.-+l and We have the following simple proposition: (17) (18) (19) (20) Proposition 1: H+ JJ C H+ and H-JJ C H_, and H+ JJ and H-JJ are both convex sets. Furthermore, the positions of the points H+ JJ and H-JJ with respect to the Xi do not depend on the choice of origin. Proof: Clearly, since the ai defined in H+ JJ is a subset of the ai defined in H+, H+ JJ C H+, similarly for H_. Now consider two points in H+JJ defined by aI, a2. Then all points on the line joining these two points can be written as L:i:y;=+l ((1A)ali + Aa2i)Xi, 0 ~ A ~ 1. Since ali and a2i both satisfy 0 ~ ai ~ fL, so does (1- A)ali +Aa2i, and since also L:i:y;=+l (1- A)ali+Aa2i = 1, the set H+ JJ is convex. 4That one can rescale the objective function without changing the constraints follows from uniqueness of the solution. See also Burges and Crisp, "Uniqueness of the SVM Solution" in this volume. 248 D. J. Crisp and C. J. C. Burges The argument for H_~ is similar. Finally, suppose that every Xi is translated by Xo, i.e. Xi -+ Xi + Xo 'Vi. 
Then since L:i:Yi=+l ai = 1, every point in H+~ is also translated by the same amount, similarly for H-w 0 The problem of finding the optimal separating hyperplane between the convex sets H+~ and H_~ then becomes: (21) subject to (22) Since Eqs. (21) and (22) are identical to (9) and (10), we see that the v-SVM algorithm is in fact finding the optimal separating hyperplane between the convex sets H+~ and H-w We note that the convex sets H+~ and H_~ are not simply uniformly scaled versions of H + and H _. An example is shown in Figure 1. xl xl 1'=113 1'=5112 1/3 ...... '! 5::: :"::.~ xl .' xl 113 xl 116 5112 x2 112 -lo:rrrTTT17TTT17~ xl xl --t----"I---+-----. 112 xl Figure 1: The soft convex hull for the vertices of a right isosceles triangle, for various 1'. Note how the shape changes as the set grows and is constrained by the boundaries of the encapsulating convex hull. For I' < ~, the set is empty. Below, we will refer to the formulation given in this section as the soft convex hull formulation, and the sets of points defined in Eqs. (19) and (20) as soft convex hulls. 3.3 Comparing the Offsets and Margin Widths The natural value of the offset b in the soft convex hull approach, b = -w . p, arose by asking that the separating hyperplane lie halfway between the closest extremities of the two soft convex hulls. Different choices of b just amount to hyperplanes with the same normal but at different perpendicular distances from the origin. This value of b will not in general be the same as that for which the cost term in Eq. (7) is minimized. We can compare the two values as follows. The KKT conditions for the J.'-SVM formulation are (I' ai)~i 0 ai(Yi(w·Xi+b)-p+~i) 0 Multiplying (24) by Yi, summing over i and using (23) gives (23) (24) A Geometric Interpretation ofv-SVM Classifiers 249 (25) Thus the separating hyperplane found in the J.'-SVM algorithm sits a perpendicular distance 12ifiorr l:i Yi~i I away from that found in the soft convex hull formulation. 
For the given $w$, this choice of $b$ results in the lowest value of the cost, $\mu \sum_i \xi_i$. The soft convex hull approach suggests taking $\rho = w \cdot w$, since this is the value $|f|$ takes at the points $\sum_{y_i=+1} \alpha_i x_i$ and $\sum_{y_i=-1} \alpha_i x_i$. Again, we can use the KKT conditions to compare this with $\rho$. Summing (24) over $i$ and using (23) gives

$$\rho = \bar{\rho} + \frac{\mu}{2} \sum_i \xi_i \qquad (26)$$

Since $\bar{\rho} = w \cdot w$, this again shows that if $\rho = 0$ then $w = \xi_i = 0$ and, by (25), $b = 0$.
3.4 The Primal for the Soft Convex Hull Formulation
By substituting (25) and (26) into the μ-SVM primal formulation (7) and (8) we obtain the primal formulation for the soft convex hull problem: minimize

$$\|w\|^2 - 2\bar{\rho} \qquad (27)$$

with respect to $w, \bar{b}, \bar{\rho}, \xi_i$, subject to:

$$y_i(w \cdot x_i + \bar{b}) \ge \bar{\rho} - \xi_i + \mu \sum_j \frac{1 + y_i y_j}{2}\, \xi_j, \quad \xi_i \ge 0 \qquad (28)$$

It is straightforward to check that the dual is exactly (9) and (10). Moreover, by summing the relevant KKT conditions, as above, we see that $\bar{b} = -w \cdot p$ and $\bar{\rho} = w \cdot w$. Note that in this formulation the variables $\xi_i$ retain their meaning according to (8).
4 Choosing ν
In this section we establish some results on the choices for ν, using the μ-SVM formulation. First, note that $\sum_i \alpha_i y_i = 0$ and $\sum_i \alpha_i = 2$ imply $\sum_{i: y_i=+1} \alpha_i = \sum_{i: y_i=-1} \alpha_i = 1$. Then $\alpha_i \ge 0$ gives $\alpha_i \le 1$, $\forall i$. Thus choosing $\mu > 1$, which corresponds to choosing $\nu < 2/l$, results in the same solution of the dual (and hence the same normal $w$) as choosing $\mu = 1$. (Note that different values of $\mu > 1$ can still result in different values of the other primal variables, e.g. $b$.) The equalities $\sum_{i: y_i=+1} \alpha_i = \sum_{i: y_i=-1} \alpha_i = 1$ also show that if $\mu < 2/l$ then the feasible region for the dual is empty and hence the problem is insoluble. This corresponds to the requirement $\nu \le 1$. However, we can improve upon this. Let $l_+$ ($l_-$) be the number of positive (negative) polarity points, so that $l_+ + l_- = l$. Let $l_{\min} \equiv \min\{l_+, l_-\}$. Then the minimal value of μ which still results in a nonempty feasible region is $\mu_{\min} = 1/l_{\min}$. This gives the condition $\nu \le 2 l_{\min}/l$.
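The admissible range for ν derived above is easy to compute for a given label vector; a small sketch (our function names), using the substitution $\mu = 2/(\nu l)$ from Eq. (6):

```python
import numpy as np

def mu_from_nu(nu, l):
    """Substitution (6): mu = 2 / (nu * l)."""
    return 2.0 / (nu * l)

def nu_bounds(y):
    # Values nu < 2/l give the same normal as mu = 1, and feasibility of the
    # dual requires mu >= mu_min = 1/l_min, i.e. nu <= 2*l_min/l, where l_min
    # is the size of the smaller class.
    y = np.asarray(y)
    l = len(y)
    l_min = min(int((y == +1).sum()), int((y == -1).sum()))
    return 2.0 / l, 2.0 * l_min / l

y = [+1, +1, +1, -1]
lo, hi = nu_bounds(y)
print(lo, hi)             # with l_min = 1 here, only nu <= 2*1/4 is feasible
print(mu_from_nu(hi, 4))  # the corresponding mu is mu_min = 1/l_min
```

For a balanced data set ($l_+ = l_-$) the upper bound is $\nu \le 1$; class imbalance tightens it to $2 l_{\min}/l$.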
We define a "nontrivial" solution of the problem to be any solution with w ≠ 0. The following proposition gives conditions for the existence of nontrivial solutions. 250 D. J. Crisp and C. J. C. Burges

Proposition 2: A value of ν exists which will result in a nontrivial solution to the ν-SVM classification problem if and only if {H+μ : μ = μ_min} ∩ {H−μ : μ = μ_min} = ∅.

Proof: Suppose that {H+μ : μ = μ_min} ∩ {H−μ : μ = μ_min} ≠ ∅. Then for all allowable values of μ (and hence ν), the two convex hulls will intersect, since {H+μ : μ = μ_min} ⊂ {H+μ : μ ≥ μ_min} and {H−μ : μ = μ_min} ⊂ {H−μ : μ ≥ μ_min}. If the two convex hulls intersect, then the solution is trivial, since by definition there then exist feasible points z such that z = Σ_{i:y_i=+1} α_i x_i and z = Σ_{i:y_i=−1} α_i x_i, and hence 2w = Σ_i α_i y_i x_i = Σ_{i:y_i=+1} α_i x_i − Σ_{i:y_i=−1} α_i x_i = 0 (cf. (21), (22)). Now suppose that {H+μ : μ = μ_min} ∩ {H−μ : μ = μ_min} = ∅. Then clearly a nontrivial solution exists, since the shortest distance between the two convex sets {H+μ : μ = μ_min} and {H−μ : μ = μ_min} is not zero, hence the corresponding w ≠ 0. □

Note that when l+ = l−, the condition amounts to the requirement that the centroid of the positive examples does not coincide with that of the negative examples. Note also that this shows that, given a data set, one can find a lower bound on ν, by finding the largest μ that satisfies H−μ ∩ H+μ = ∅.

5 Discussion

The soft convex hull interpretation suggests that an appropriate way to penalize positive polarity errors differently from negative ones is to replace the sum μ Σ_i ξ_i in (7) with μ+ Σ_{i:y_i=+1} ξ_i + μ− Σ_{i:y_i=−1} ξ_i. In fact one can go further and introduce a μ for every training point. The μ-SVM formulation makes this possibility explicit, which it is not in the original ν-SVM formulation.
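The counting results of Section 4 and the balanced-class case of Proposition 2 are easy to check numerically. Below is a small illustrative sketch (the helper names are ours, not the paper's): it computes the admissibility bound ν ≤ 2 l_min/l and, for l+ = l−, tests whether the class centroids coincide (in which case only the trivial solution w = 0 exists).

```python
import numpy as np

def nu_upper_bound(y):
    """Largest admissible nu for the nu-SVM dual to be feasible:
    nu <= 2 * l_min / l, with l_min = min(#positive, #negative)."""
    y = np.asarray(y)
    l_plus = int(np.sum(y == +1))
    l_minus = int(np.sum(y == -1))
    l_min = min(l_plus, l_minus)
    return 2.0 * l_min / (l_plus + l_minus)

def centroids_coincide(X, y, tol=1e-12):
    """For l+ == l-, Proposition 2 reduces to checking whether the
    class centroids coincide (coinciding centroids -> only w = 0)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    c_plus = X[y == +1].mean(axis=0)
    c_minus = X[y == -1].mean(axis=0)
    return float(np.linalg.norm(c_plus - c_minus)) < tol
```

For instance, with 3 positive and 2 negative points the bound is ν ≤ 2·2/5 = 0.8, and a data set whose two class centroids sit at the same point admits no nontrivial separating direction.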
Note also that the fact that ν-SVM leads to values of b which differ from that which would place the optimal hyperplane halfway between the soft convex hulls suggests that there may be principled methods for choosing the best b for a given problem, other than that dictated by minimizing the sum of the ξ_i's. Indeed, originally, the sum-of-ξ_i's term arose in an attempt to approximate the number of errors on the train set [2]. The above reasoning in a sense separates the justification for w from that for b. For example, given w, a simple line search could be used to find that value of b which actually does minimize the number of errors on the train set. Other methods (for example, minimizing the estimated Bayes error [3]) may also prove useful.

Acknowledgments C. Burges wishes to thank W. Keasler, V. Lawrence and C. Nohl of Lucent Technologies for their support.

References [1] B. Schölkopf, A. Smola, R. Williamson, and P. Bartlett. New support vector algorithms. NeuroCOLT2 Technical Report NC2-TR-1998-031, GMD First and Australian National University, 1998. [2] C. Cortes and V. Vapnik. Support vector networks. Machine Learning, 20:273-297, 1995. [3] C. J. C. Burges and B. Schölkopf. Improving the accuracy and speed of support vector learning machines. In M. Mozer, M. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems 9, pages 375-381, Cambridge, MA, 1997. MIT Press.
|
1999
|
72
|
1,723
|
Robust Learning of Chaotic Attractors Rembrandt Bakker* Chemical Reactor Engineering Delft Univ. of Technology r.bakker@stm.tudelft.nl Floris Takens Dept. Mathematics University of Groningen F.Takens@math.rug.nl Jaap C. Schouten Chemical Reactor Engineering Eindhoven Univ. of Technology J.C.Schouten@tue.nl C. Lee Giles NEC Research Institute Princeton, NJ giles@research.nj.nec.com Marc-Olivier Coppens Chemical Reactor Engineering Delft Univ. of Technology coppens@stm.tudelft.nl Cor M. van den Bleek Chemical Reactor Engineering Delft Univ. of Technology vdbleek@stm.tudelft.nl

Abstract A fundamental problem with the modeling of chaotic time series data is that minimizing short-term prediction errors does not guarantee a match between the reconstructed attractors of model and experiments. We introduce a modeling paradigm that simultaneously learns to make short-term predictions and to locate the outlines of the attractor by a new way of nonlinear principal component analysis. Closed-loop predictions are constrained to stay within these outlines, to prevent divergence from the attractor. Learning is exceptionally fast: parameter estimation for the 1000-sample laser data from the 1991 Santa Fe time series competition took less than a minute on a 166 MHz Pentium PC.

1 Introduction We focus on the following objective: given a set of experimental data and the assumption that it was produced by a deterministic chaotic system, find a set of model equations that will produce a time series with identical chaotic characteristics, having the same chaotic attractor. The common approach consists of two steps: (1) identify a model that makes accurate short-term predictions; and (2) generate a long time series with the model and compare the nonlinear-dynamic characteristics of this time series with the original, measured time series. Principe et al. [1] found that in many cases the model can make good short-term predictions but does not learn the chaotic attractor.
The method would be greatly improved if we could directly minimize the difference between the reconstructed attractors of the model-generated and measured data, instead of minimizing prediction errors. However, we cannot reconstruct the attractor without first having a prediction model. Until now, research has focused on how to optimize both step 1 and step 2. For example, it is important to optimize the prediction horizon of the model [2] and to reduce complexity as much as possible. This way it was possible to learn the attractor of the benchmark laser time series data from the 1991 Santa Fe time series competition. *DelftChemTech, Chemical Reactor Engineering Lab, Julianalaan 136, 2628 BL, Delft, The Netherlands; http://www.cpt.stm.tudelft.nllcptlcre!researchlbakker/. 880 R. Bakker, J. C. Schouten, M.-O. Coppens, F. Takens, C. L. Giles and C. M. v. d. Bleek While training a neural network for this problem, we noticed [3] that the attractor of the model fluctuated from a good match to a complete mismatch from one iteration to another. We were able to circumvent this problem by selecting exactly that model that matches the attractor. However, after carrying out more simulations we found that what we neglected as an unfortunate phenomenon [3] is really a fundamental limitation of current approaches. An important development is the work of Principe et al. [4], who use Kohonen Self-Organizing Maps (SOMs) to create a discrete representation of the state space of the system. This creates a partitioning of the input space that becomes an infrastructure for local (linear) model construction. This partitioning makes it possible to verify whether the model input is near the original data (i.e., to detect that the model is not extrapolating) without keeping the training data set with the model.
We propose a different partitioning of the input space that can be used to (i) learn the outlines of the chaotic attractor by means of a new way of nonlinear Principal Component Analysis (PCA), and (ii) force the model never to predict outside these outlines. The nonlinear PCA algorithm is inspired by the work of Kambhatla and Leen [5] on local PCA: they partition the input space and perform local PCA in each region. Unfortunately, this introduces discontinuities between neighboring regions. We resolve them by introducing a hierarchical partitioning algorithm that uses fuzzy boundaries between the regions. This partitioning closely resembles the hierarchical mixtures of experts of Jordan and Jacobs [6]. In Sec. 2 we put forward the fundamental problem that arises when trying to learn a chaotic attractor by creating a short-term prediction model. In Sec. 3 we describe the proposed partitioning algorithm. In Sec. 4 we outline how this partitioning can be used to learn the outline of the attractor by defining a potential that measures the distance to the attractor. In Sec. 5 we show modeling results on a toy example, the logistic map, and on a more serious problem, the laser data from the 1991 Santa Fe time series competition. Section 6 concludes.

2 The attractor learning dilemma

Imagine an experimental system with a chaotic attractor, and a time series of noise-free measurements taken from this system. The data is used to fit the parameters of the model z_{t+1} = F_w(z_t, z_{t-1}, ..., z_{t-m}), where F is a nonlinear function, w contains its adjustable parameters and m is a positive constant. What happens if we fit the parameters w by nonlinear least squares regression? Will the model be stable, i.e., will the closed-loop long-term prediction converge to the same attractor as the one represented by the measurements? Figure 1 shows the result of a test by Diks et al. [7] that compares the difference between the model and measured attractor.
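The fitting step just described — build delay states and regress the next value on them — can be sketched in a few lines. This is an illustrative toy (a hypothetical linear series, not the paper's model or data): the series obeys z_{t+1} = 1.1 z_t − 0.3 z_{t−1}, so least squares on the delay states should recover those coefficients.

```python
import numpy as np

# Generate a series from a known two-delay linear map.
z = np.empty(100)
z[0], z[1] = 1.0, 1.0
for t in range(1, 99):
    z[t + 1] = 1.1 * z[t] - 0.3 * z[t - 1]

# Delay states (z_t, z_{t-1}) and one-step-ahead targets z_{t+1}.
states = np.column_stack([z[1:-1], z[:-2]])
targets = z[2:]

# Least-squares fit of the model parameters w.
w, *_ = np.linalg.lstsq(states, targets, rcond=None)
```

For a genuinely nonlinear F the regression becomes nonlinear in w, but the structure — delay embedding in, one-step prediction out — is the same.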
The figure shows that while the neural network is trained to predict chaotic data, the model quickly converges to the measured attractor (S=0), but once in a while, from one iteration to another, the match between the attractors is lost.

Figure 1: Diks test monitoring curve (S versus training progress in iterations) for a neural network model trained on data from an experimental chaotic pendulum [3].

To understand what causes this instability, imagine that we try to fit the parameters of a model z_{t+1} = ā + B z_t while the real system has a point attractor, z = a, where z is the state of the system and a its attracting value. Clearly, measurements taken from this system contain no information to estimate both ā and B. If we fit the model parameters with non-robust linear least squares, B may be assigned any value, and if its largest eigenvalue happens to be greater than zero, the model will be unstable! For the linear model this problem was solved a long time ago with the introduction of singular value decomposition. There still is a need for a nonlinear counterpart of this technique, in particular since we have to work with very flexible models that are designed to fit a wide variety of nonlinear shapes, see for example the early work of Lapedes and Farber [8]. It is already common practice to control the complexity of nonlinear models by pruning or regularization.
Unfortunately, these methods do not always solve the attractor learning problem, since there is a good chance that a nonlinear term explains a lot of variance in one part of the state space, while it causes instability of the attractor (without affecting the one-step-ahead prediction accuracy) elsewhere. In Secs. 3 and 4 we introduce a new method for nonlinear principal component analysis that detects and prevents unstable behavior.

3. The split and fit algorithm

The nonlinear regression procedure of this section forms the basis of the nonlinear principal component algorithm in Sec. 4. It consists of (i) a partitioning of the input space, (ii) a local linear model for each region, and (iii) fuzzy boundaries between regions to ensure global smoothness. The partitioning scheme is outlined in Procedure 1:

Procedure 1: Partitioning the input space 1) Start with the entire set Z of input data. 2) Determine the direction of largest variance of Z: perform a singular value decomposition of Z into the product UΣV^T and take the singular vector (column of V) with the largest singular value (on the diagonal of Σ). 3) Split the data in two subsets (to be called: clusters) by creating a plane perpendicular to the direction of largest variance, through the center of gravity of Z. 4) Next, select the cluster with the largest sum squared error to be split next, and recursively apply 2-4 until a stopping criterion is met.

Figures 2 and 3 show examples of the partitioning. The disadvantage of dividing regression problems into localized subproblems was pointed out by Jordan and Jacobs [6]: the spread of the data in each region will be much smaller than the spread of the data as a whole, and this will increase the variance of the model parameters. Since we always split perpendicular to the direction of maximum variance, this problem is minimized. The partitioning can be written as a binary tree, with each non-terminal node being a split and each terminal node a cluster.
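One splitting step of Procedure 1 can be sketched directly from its description (an illustrative sketch with our own helper name, not the authors' code): center the data, take the top right singular vector as the direction of largest variance, and split by the sign of the projection onto it.

```python
import numpy as np

def split_once(Z):
    """One step of the split&fit partitioning (Procedure 1, steps 2-3):
    split Z by the hyperplane through its centroid, perpendicular to
    the direction of largest variance found by SVD."""
    mu = Z.mean(axis=0)
    # Right singular vector with the largest singular value of the
    # centered data = direction of largest variance.
    _, _, Vt = np.linalg.svd(Z - mu, full_matrices=False)
    v = Vt[0]
    side = (Z - mu) @ v >= 0.0
    return Z[side], Z[~side], v, mu
```

Applying `split_once` recursively to the worst-fitting cluster, and recording each `(v, mu)` pair as a tree node, reproduces the binary-tree structure described above.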
Procedure 2 creates fuzzy boundaries between the clusters.

Procedure 2: Creating fuzzy boundaries 1) An input z̄ enters the tree at the top of the partitioning tree. 2) The Euclidean distance to the splitting hyperplane is divided by the bandwidth β of the split, and passed through a sigmoidal function with range [0,1]. This results in z̄'s share σ in the subset on z̄'s side of the splitting plane. The share in the other subset is 1−σ. 3) The previous step is carried out for all non-terminal nodes of the tree. 4) The membership μ_c of z̄ in subset (terminal node) c is computed by taking the product of all previously computed shares σ along the path from the terminal node to the top of the tree.

If we would make all parameters adjustable, that is (i) the orientation of the splitting hyperplanes, (ii) the bandwidths β, and (iii) the local linear model parameters, the above model structure would be identical to the hierarchical mixtures of experts of Jordan and Jacobs [6]. However, we already fixed the hyperplanes and use Procedure 3 to compute the bandwidths:

Procedure 3: Computing the bandwidths 1) The bandwidths of the terminal nodes are taken to be a constant (we use 1.65, the 90% confidence limit of a normal distribution) times the variance of the subset before it was last split, in the direction of the eigenvector of that last split. 2) The other bandwidths do depend on the input z̄. They are computed by climbing upward in the tree. The bandwidth of node n is computed as a weighted sum of the βs of its left and right child, by the implicit formula β_n = σ_L β_L + σ_R β_R, in which σ_L and σ_R depend on β_n. Starting from the initial guess β_n = β_L if σ_L > 0.5, and β_n = β_R otherwise, the formula is solved in a few iterations.

This procedure is designed to create large overlap between neighboring regions and almost no overlap between non-neighboring regions.
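The membership computation of Procedure 2 is a product of sigmoidal shares along root-to-leaf paths. Below is a minimal sketch under simplifying assumptions: the tree is a hypothetical nested dict (our own encoding, not the paper's), each split stores its normal `v`, centroid `mu` and a fixed bandwidth `beta`, and the sigmoid is the standard logistic function.

```python
import numpy as np

def membership(x, tree):
    """Fuzzy membership of input x in each terminal node (sketch of
    Procedure 2).  A split node has keys 'v', 'mu', 'beta', 'left',
    'right'; a leaf has key 'id'.  Returns {leaf id: membership}."""
    def sigmoid(d):
        return 1.0 / (1.0 + np.exp(-d))

    out = {}
    def walk(node, share):
        if 'id' in node:                       # terminal node: record product of shares
            out[node['id']] = share
            return
        # Signed distance to the splitting plane, scaled by the bandwidth.
        d = float(np.dot(x - node['mu'], node['v'])) / node['beta']
        a = sigmoid(d)                         # share on the positive side
        walk(node['right'], share * a)
        walk(node['left'], share * (1.0 - a))

    walk(tree, 1.0)
    return out
```

By construction the memberships of all terminal nodes sum to one, and a point far from every boundary belongs almost entirely to a single cluster.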
What remains to be fitted is the set of local linear models. The j-th output of the split&fit model for a given input z̄_p is computed as y_{j,p} = Σ_{c=1}^{C} μ_c^p (ā_j^c · z̄_p + b_j^c), where ā^c and b^c contain the linear model parameters of subset c, and C is the number of clusters. We can determine the parameters of all local linear models in one global fit that is linear in the parameters. However, we prefer to optimize the parameters locally, for two reasons: (i) it makes it possible to locally control the stability of the attractor and do the principal component analysis of Sec. 4; and (ii) the computing time for a linear regression problem with r regressors scales as O(r^3). If we would adopt global fitting, r would scale linearly with C and, while growing the model, the regression problem would quickly become intractable. We use the following iterative local fitting procedure instead.

Procedure 4: Iterative local fitting 1) Initialize a J by N matrix of residuals R to zero, J being the number of outputs and N the number of data. 2) For cluster c, if an estimate for its linear model parameters already exists, for each input vector z̄_p add μ_c^p ŷ_{j,p}^c to the matrix of residuals; otherwise add μ_c^p y_{j,p} to R, y_{j,p} being the j-th element of the desired output vector for sample p. 3) Least-squares fit the linear model parameters of cluster c to predict the current residuals R, and subtract the (new) estimate μ_c^p ŷ_{j,p}^c from R. 4) Do 2-4 for each cluster and repeat the fitting several times (default: 3).

From simulations we found that the above fast optimization method converges to the global minimum if it is repeated many times. Just as with neural network training, it is often better to use early stopping when the prediction error on an independent test set starts to increase. 4.
Nonlinear Principal Component Analysis

To learn a chaotic attractor from a single experimental time series we use the method of delays: the state z̄ consists of m delays taken from the time series. The embedding dimension m must be chosen large enough to ensure that it contains sufficient information for faithful reconstruction of the chaotic attractor, see Takens [9]. Typically, this results in an m-dimensional state space with all the measurements covering only a much lower-dimensional, but nonlinearly shaped, subspace. This creates the danger pointed out in Sec. 2: the stability of the model in directions perpendicular to this low-dimensional subspace cannot be guaranteed. With the split&fit algorithm from Sec. 3 we can learn the nonlinear shape of the low-dimensional subspace, and, if the state of the system escapes from this subspace, we use the algorithm to redirect the state to the nearest point on the subspace. See Malthouse [10] for limitations of existing nonlinear PCA approaches. To obtain the low-dimensional subspace, we proceed according to Procedure 5.

Procedure 5: Learning the low-dimensional subspace 1) Augment the output of the model with the m-dimensional state z̄: the model will learn to predict its own input. 2) In each cluster c, perform a singular value decomposition to create a set of m principal directions, sorted in order of decreasing explained variance. The result of this decomposition is also used in step 3 of Procedure 4. 3) Allow the local linear model of each cluster to use no more than m_red of these principal directions. 4) Define a potential P to be the squared Euclidean distance between the state z̄ and its prediction by the model.

The potential P implicitly defines the lower-dimensional subspace: if a state z̄ is on the subspace, P will be zero. P will increase with the distance of z̄ from the subspace.
The model has learned to predict its own input with small error, meaning that it has tried to reduce P as much as possible at exactly those points in state space where the training data was sampled. In other words, P will be low if the input z̄ is close to one of the original points in the training data set. From the split&fit algorithm we can analytically compute the gradient dP/dz̄. Since the evaluation of the split&fit model involves a backward pass (computing the bandwidths) and a forward pass (computing memberships), the gradient algorithm involves a forward and a backward pass through the tree. The gradient is used to project states that are off the nonlinear subspace onto the subspace in one or a few Newton-Raphson iterations.

Figure 2: Projecting two-dimensional data on a one-dimensional self-intersecting subspace. The colorscale represents the potential P; white indicates P > 0.04.

Figure 2 illustrates the algorithm for the problem of creating a one-dimensional representation of the number '8'. The training set consists of 136 clean samples, and Fig. 2 shows how a set of 272 noisy inputs is projected by a 48-subset split&fit model onto the one-dimensional subspace. Note that the center of the '8' cannot be well represented by a one-dimensional space. We leave the development of an algorithm that automatically detects the optimum local subspace dimension for future research.

Figure 3: Learning the attractor of the two-input logistic map. The order of creation of the splits is indicated. The colorscale represents the potential P; white indicates P > 0.05.

5. Application Examples

First we show the nonlinear principal component analysis result for a toy example, the logistic map z_{t+1} = 4 z_t (1 − z_t). If we use a model z_{t+1} = F_w(z_t), where the prediction only depends on one previous output, there is no lower-dimensional space to which the attractor is confined.
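The geometric point behind the two-input logistic map example can be checked in a few lines (an illustrative sketch, not the paper's experiment): with two delays, every state (z_{t−1}, z_t) of the logistic map lies exactly on the parabola z_t = 4 z_{t−1}(1 − z_{t−1}), i.e. on a one-dimensional subspace of the two-dimensional delay space — precisely the curve that Figure 3 shows the partitioning learning.

```python
import numpy as np

# Iterate the logistic map z_{t+1} = 4 z_t (1 - z_t).
z = np.empty(1000)
z[0] = 0.3
for t in range(999):
    z[t + 1] = 4.0 * z[t] * (1.0 - z[t])

# Two-delay states (z_{t-1}, z_t); each must satisfy the map exactly,
# so the data fills a 1-D curve inside the 2-D delay space.
states = np.column_stack([z[:-1], z[1:]])
residual = states[:, 1] - 4.0 * states[:, 0] * (1.0 - states[:, 0])
```

A model with two delay inputs therefore has an entire unconstrained direction off this curve, which is exactly the opening for the unstable behavior the text describes.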
However, if we allow the output to depend on more than a single delay, we create a possibility for unstable behavior. Figure 3 shows how well the split&fit algorithm learns the one-dimensional shape of the attractor after creating only five regions. The parabola is slightly deformed (seen from the white lines perpendicular to the attractor), but this may be solved by increasing the number of splits. Next we look at the laser data. The complex behavior of chaotic systems is caused by an interplay of destabilizing and stabilizing forces: the destabilizing forces make nearby points in state space diverge, while the stabilizing forces keep the state of the system bounded. This process, known as 'stretching and folding', results in the attractor of the system: the set of points that the state of the system will visit after all transients have died out. In the case of the laser data this behavior is clear cut: destabilizing forces make the signal grow exponentially until the increasing amplitude triggers a collapse that reinitiates the sequence. We have seen in neural-network-based models [3] and in this study that it is very hard for the models to cope with the sudden collapses. Without the nonlinear subspace correction of Sec. 4, most of the models we tested grow without bounds after one or more rise-and-collapse sequences. That is not very surprising: the training data set contains only three examples of a collapse.

Figure 4: Laser data from the Santa Fe time series competition. The 1000-sample train data set is followed by iterated prediction of the model (a). After every prediction a correction is made to keep P (see Sec. 4) small. Plot (b) shows P before this correction.

Figure 4 shows how this is solved with the subspace correction: every time the model is about to grow to infinity, a high potential P is detected (depicted in Fig.
3b) and the state of the system is directed to the nearest point on the subspace as learned from the nonlinear principal component analysis. After some trial and error, we selected an embedding dimension m of 12 and a reduced dimension m_red of 4. The split&fit model starts with a single dataset, and was grown until 48 subsets. At that point, the error on the 1000-sample train set was still decreasing rapidly but the error on an independent 1000-sample test set increased. We compared the reconstructed attractors of the model and measurements, using 9000 samples of closed-loop generated and 9000 samples of measured data. No significant difference between the two could be detected by the Diks test [7].

6. Conclusions

We present an algorithm that robustly models chaotic attractors. It simultaneously learns (1) to make accurate short-term predictions; and (2) the outlines of the attractor. In closed-loop prediction mode, the state of the system is corrected after every prediction, to stay within these outlines. The algorithm is very fast, since the main computation is to least-squares fit a set of local linear models. In our implementation the largest matrix to be stored is N by C, N being the number of data and C the number of clusters. We see many applications other than attractor learning: the split&fit algorithm can be used as a fast-learning alternative to neural networks, and the new form of nonlinear PCA will be useful for data reduction and object recognition. We envisage applying the technique to a wide range of applications, from the control and modeling of chaos in fluid dynamics to problems in finance and biology.

Acknowledgements This work is supported by the Netherlands Foundation for Chemical Research (SON) with financial aid from the Netherlands Organization for Scientific Research (NWO).

References [1] J.C. Principe, A. Rathie, and J.M. Kuo, "Prediction of Chaotic Time Series with Neural Networks and the Issue of Dynamic Modeling",
Int. J. Bifurcation and Chaos, 2, 1992, p. 989. [2] J.M. Kuo and J.C. Principe, "Reconstructed Dynamics and Chaotic Signal Modeling", in Proc. IEEE Int'l Conf. Neural Networks, 5, 1994, p. 3131. [3] R. Bakker, J.C. Schouten, C.L. Giles, F. Takens, and C.M. van den Bleek, "Learning Chaotic Attractors by Neural Networks", submitted. [4] J.C. Principe, L. Wang, and M.A. Motter, "Local Dynamic Modeling with Self-Organizing Maps and Applications to Nonlinear System Identification and Control", Proc. IEEE, 86(11), 1998. [5] N. Kambhatla and T.K. Leen, "Dimension Reduction by Local PCA", Neural Computation, 9, 1997, p. 1493. [6] M.I. Jordan and R.A. Jacobs, "Hierarchical Mixtures of Experts and the EM Algorithm", Neural Computation, 6, 1994, p. 181. [7] C. Diks, W.R. van Zwet, F. Takens, and J. de Goede, "Detecting differences between delay vector distributions", Physical Review E, 53, 1996, p. 2169. [8] A. Lapedes and R. Farber, "Nonlinear Signal Processing Using Neural Networks: Prediction and System Modelling", Los Alamos Technical Report LA-UR-87-2662. [9] F. Takens, "Detecting strange attractors in turbulence", Lecture Notes in Mathematics, 898, 1981, p. 365. [10] E.C. Malthouse, "Limitations of Nonlinear PCA as performed with Generic Neural Networks", IEEE Trans. Neural Networks, 9(1), 1998, p. 165.
|
1999
|
73
|
1,724
|
Greedy importance sampling Dale Schuurmans Department of Computer Science University of Waterloo dale@cs.uwaterloo.ca

Abstract I present a simple variation of importance sampling that explicitly searches for important regions in the target distribution. I prove that the technique yields unbiased estimates, and show empirically that it can reduce the variance of standard Monte Carlo estimators. This is achieved by concentrating samples in more significant regions of the sample space.

1 Introduction It is well known that general inference and learning with graphical models is computationally hard [1] and it is therefore necessary to consider restricted architectures [13], or approximate algorithms to perform these tasks [3, 7]. Among the most convenient and successful techniques are stochastic methods which are guaranteed to converge to a correct solution in the limit of large samples [10, 11, 12, 15]. These methods can be easily applied to complex inference problems that overwhelm deterministic approaches. The family of stochastic inference methods can be grouped into the independent Monte Carlo methods (importance sampling and rejection sampling [4, 10, 14]) and the dependent Markov Chain Monte Carlo (MCMC) methods (Gibbs sampling, Metropolis sampling, and "hybrid" Monte Carlo) [5, 10, 11, 15]. The goal of all these methods is to simulate drawing a random sample from a target distribution P(x) (generally defined by a Bayesian network or graphical model) that is difficult to sample from directly. This paper investigates a simple modification of importance sampling that demonstrates some advantages over independent and dependent-Markov-chain methods. The idea is to explicitly search for important regions in a target distribution P when sampling from a simpler proposal distribution Q.
Some MCMC methods, such as Metropolis and "hybrid" Monte Carlo, attempt to do something like this by biasing a local random search towards higher probability regions, while preserving the asymptotic "fair sampling" properties of the exploration [11, 12]. Here I investigate a simple direct approach where one draws points from a proposal distribution Q but then explicitly searches in P to find points from significant regions. The main challenge is to maintain correctness (i.e., unbiasedness) of the resulting procedure, which we achieve by independently sampling search subsequences and then weighting the sample points so that their expected weight under the proposal distribution Q matches their true probability under the target P.

Importance sampling • Draw x_1, ..., x_n independently from Q. • Weight each point x_i by w(x_i) = P(x_i)/Q(x_i). • For a random variable, f, estimate E_{P(x)} f(x) by f̂ = (1/n) Σ_{i=1}^{n} f(x_i) w(x_i).

"Indirect" importance sampling • Draw x_1, ..., x_n independently from Q. • Weight each point x_i by u(x_i) = βP(x_i)/Q(x_i). • For a random variable, f, estimate E_{P(x)} f(x) by f̂ = Σ_{i=1}^{n} f(x_i) u(x_i) / Σ_{i=1}^{n} u(x_i).

Figure 1: Regular and "indirect" importance sampling procedures

2 Generalized importance sampling

Many inference problems in graphical models can be cast as determining the expected value of a random variable of interest, f, given observations drawn according to a target distribution P. That is, we are interested in computing the expectation E_{P(x)} f(x). Usually the random variable f is simple, like the indicator of some event, but the distribution P is generally not in a form that we can sample from efficiently. Importance sampling is a useful technique for estimating E_{P(x)} f(x) in these cases. The idea is to draw independent points x_1, ..., x_n from a simpler "proposal" distribution Q, but then weight these points by w(x) = P(x)/Q(x) to obtain a "fair" representation of P.
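The left-hand procedure of Figure 1 is easy to exercise on a toy continuous example (the distributions and sample size here are arbitrary illustrative choices, not from the paper): estimate E_P[x] for a Gaussian target P = N(2, 1) using draws from a wider Gaussian proposal Q = N(0, 2).

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_pdf(x, mu, sig):
    """Density of N(mu, sig^2)."""
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))

# Draw x_i ~ Q and weight by w(x_i) = P(x_i) / Q(x_i).
x = rng.normal(0.0, 2.0, size=200_000)
w = norm_pdf(x, 2.0, 1.0) / norm_pdf(x, 0.0, 2.0)

# Importance-sampling estimate of E_P[x], whose true value is 2.
estimate = float(np.mean(x * w))
```

Because Q here has heavier tails than P, the weights stay bounded and the estimator is well behaved; the failure mode discussed next in the text arises when Q instead misses high-probability regions of P.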
Assuming that we can efficiently evaluate P(x) at each point, the weighted sample can be used to estimate desired expectations (Figure 1). The correctness (i.e., unbiasedness) of this procedure is easy to establish, since the expected weighted value of f under Q is just E_{Q(x)} f(x) w(x) = Σ_{x∈X} f(x) w(x) Q(x) = Σ_{x∈X} f(x) (P(x)/Q(x)) Q(x) = Σ_{x∈X} f(x) P(x) = E_{P(x)} f(x). This technique can be implemented using "indirect" weights u(x) = βP(x)/Q(x) and an alternative estimator (Figure 1) that only requires us to compute a fixed multiple of P(x). This preserves asymptotic correctness because (1/n) Σ_{i=1}^{n} f(x_i) u(x_i) and (1/n) Σ_{i=1}^{n} u(x_i) converge to βE_{P(x)} f(x) and β respectively, which yields f̂ → E_{P(x)} f(x) (generally [4]). It will always be possible to apply this extended approach below, but we drop it for now. Importance sampling is an effective estimation technique when Q approximates P over most of the domain, but it fails when Q misses high probability regions of P and systematically yields samples with small weights. In this case, the resulting estimator will have high variance because the sample will almost always contain unrepresentative points but is sometimes dominated by a few high weight points. To overcome this problem it is critical to obtain data points from the important regions of P. Our goal is to avoid generating systematically under-weight samples by explicitly searching for significant regions in the target distribution P. To do this, and maintain the unbiasedness of the resulting procedure, we develop a series of extensions to importance sampling that are each provably correct. The first extension is to consider sampling blocks of points instead of just individual points. Let 𝔅 be a partition of X into finite blocks B, where ∪_{B∈𝔅} B = X, B ∩ B' = ∅ for distinct blocks, and each B is finite. (Note that 𝔅 can be infinite.)
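The "indirect" variant matters in practice because graphical models typically give access only to an unnormalized βP(x). A small illustrative sketch (arbitrary toy distributions, not from the paper): the unnormalized target exp(−(x−2)²/2) is βP(x) for a N(2, 1) target, and the unknown normalizer β cancels in the ratio estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

def unnorm_p(x):
    """beta * P(x) for a N(2, 1) target; beta is never computed."""
    return np.exp(-0.5 * (x - 2.0) ** 2)

def q_pdf(x):
    """Proposal density Q = N(0, 2)."""
    return np.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * np.sqrt(2.0 * np.pi))

# Indirect weights u(x_i) = beta * P(x_i) / Q(x_i), then the
# self-normalized estimate sum(f*u) / sum(u); beta cancels.
x = rng.normal(0.0, 2.0, size=200_000)
u = unnorm_p(x) / q_pdf(x)
estimate = float(np.sum(x * u) / np.sum(u))   # ~ E_P[x] = 2
```

The numerator and denominator converge to βE_P[f] and β respectively, so their ratio converges to E_P[f], matching the argument in the text.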
The "block" sampling procedure (Figure 2) draws independent blocks of points to construct the final sample, but then weights points by their target probability P(x) divided by the total block probability Q(B(x)). For discrete spaces it is easy to verify that this procedure yields unbiased estimates, since E_{Q(x)} [Σ_{x_j∈B(x)} f(x_j) w(x_j)] = Σ_{x∈X} [Σ_{x_j∈B(x)} f(x_j) w(x_j)] Q(x) = Σ_{B∈𝔅} Σ_{x_i∈B} [Σ_{x_j∈B} f(x_j) w(x_j)] Q(x_i) = Σ_{B∈𝔅} [Σ_{x_j∈B} f(x_j) w(x_j)] Q(B) = Σ_{B∈𝔅} [Σ_{x_j∈B} f(x_j) P(x_j)/Q(B)] Q(B) = Σ_{B∈𝔅} Σ_{x_j∈B} f(x_j) P(x_j) = Σ_{x∈X} f(x) P(x).

"Block" importance sampling • Draw x_1, ..., x_n independently from Q. • For x_i, recover its block B_i = {x_{i,1}, ..., x_{i,b_i}}. • Create a large sample out of the blocks: x_{1,1}, ..., x_{1,b_1}, x_{2,1}, ..., x_{2,b_2}, ..., x_{n,1}, ..., x_{n,b_n}. • Weight each x_{i,j} by w(x_{i,j}) = P(x_{i,j}) / Σ_{j'=1}^{b_i} Q(x_{i,j'}). • For a random variable, f, estimate E_{P(x)} f(x) by f̂ = (1/n) Σ_{i=1}^{n} Σ_{j=1}^{b_i} f(x_{i,j}) w(x_{i,j}).

"Sliding window" importance sampling • Draw x_1, ..., x_n independently from Q. • For x_i, recover its block B_i, and let x_{i,1} = x_i: - Get x_{i,1}'s successors x_{i,1}, x_{i,2}, ..., x_{i,m} by climbing up m − 1 steps from x_{i,1}. - Get predecessors x_{i,−m+1}, ..., x_{i,−1}, x_{i,0} by climbing down m − 1 steps from x_{i,1}. - Weight w(x_{i,j}) = P(x_{i,j}) / Σ_{k=j−m+1}^{j} Q(x_{i,k}). • Create the final sample from the successor points x_{1,1}, ..., x_{1,m}, x_{2,1}, ..., x_{2,m}, ..., x_{n,1}, ..., x_{n,m}. • For a random variable, f, estimate E_{P(x)} f(x) by f̂ = (1/n) Σ_{i=1}^{n} Σ_{j=1}^{m} f(x_{i,j}) w(x_{i,j}).

Figure 2: "Block" and "sliding window" importance sampling procedures

Crucially, this argument does not depend on how the partition of X is chosen. In fact, we could fix any partition, even one that depended on the target distribution P, and still obtain an unbiased procedure (so long as the partition remains fixed). Intuitively, this works because blocks are drawn independently from Q and the weighting scheme still produces a "fair" representation of P.
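The unbiasedness argument for block weighting can be verified exactly on a tiny discrete space by enumerating the expectation instead of sampling (the space, distributions, and partition below are arbitrary toy choices):

```python
import numpy as np

# Toy discrete space X = {0, ..., 5} with a fixed two-block partition.
P = np.array([0.05, 0.05, 0.10, 0.30, 0.30, 0.20])   # target
Q = np.array([0.30, 0.25, 0.20, 0.10, 0.10, 0.05])   # proposal
f = np.arange(6, dtype=float) + 1.0                  # f(x) = x + 1
blocks = [np.array([0, 1, 2]), np.array([3, 4, 5])]

# One draw x ~ Q contributes every point x_j of x's block B(x),
# weighted by w(x_j) = P(x_j) / Q(B(x)).  Enumerating blocks gives
# the exact expectation of the one-draw estimator under Q.
expected = sum(Q[B].sum() * float(np.sum(f[B] * P[B] / Q[B].sum()))
               for B in blocks)
exact = float(np.sum(f * P))                         # E_P[f]
```

The block probability Q(B) cancels against the probability of landing anywhere in B, so `expected` equals `exact` for any fixed partition, mirroring the derivation above term by term.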
(Note that the results presented in this paper can all be extended to continuous spaces under mild technical restrictions. However, for the purposes of clarity we will restrict the technical presentation in this paper to the discrete case.) The second extension is to allow countably infinite blocks that each have a discrete total order ... < x_{i-1} < x_i < x_{i+1} < ... defined on their elements. This order could reflect the relative probability of x_i and x_j under P, but for now we just consider it to be an arbitrary discrete order. To cope with blocks of unbounded length, we employ a "sliding window" sampling procedure that selects a contiguous sub-block of size m from within a larger selected block (Figure 2). This procedure builds each independent subsample by choosing a random point x_l from the proposal distribution Q, determining its containing block B(x_l), and then climbing up m-1 steps to obtain the successors x_l, x_{l+1}, ..., x_{l+m-1}, and climbing down m-1 steps to obtain the predecessors x_{l-m+1}, ..., x_{l-1}. The successor points (including x_l) appear in the final sample, but the predecessors are only used to determine the weights of the sample points. Weights are determined by the target probability P(x) divided by the probability that the point x appears in a random reconstruction under Q. This too yields an unbiased estimator, since

E_{Q(x_l)}[Σ_{j=l}^{l+m-1} f(x_j)w(x_j)] = Σ_{x_l∈X} [Σ_{j=l}^{l+m-1} f(x_j)P(x_j) / Σ_{k=j-m+1}^{j} Q(x_k)] Q(x_l)
= Σ_{B∈𝔅} Σ_{x_l∈B} Σ_{j=l}^{l+m-1} f(x_j)P(x_j)Q(x_l) / Σ_{k=j-m+1}^{j} Q(x_k)
= Σ_{B∈𝔅} Σ_{x_j∈B} Σ_{l=j-m+1}^{j} f(x_j)P(x_j)Q(x_l) / Σ_{k=j-m+1}^{j} Q(x_k)
= Σ_{B∈𝔅} Σ_{x_j∈B} f(x_j)P(x_j) [Σ_{l=j-m+1}^{j} Q(x_l) / Σ_{k=j-m+1}^{j} Q(x_k)]
= Σ_{B∈𝔅} Σ_{x_j∈B} f(x_j)P(x_j) = Σ_{x∈X} f(x)P(x).

(The middle line breaks the sum into disjoint blocks and then reorders it so that instead of first choosing the start point x_l and then x_l's successors x_l, ..., x_{l+m-1}, we first choose the successor point x_j and then the start points x_{j-m+1}, ..., x_j that could have led to x_j.)
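A sketch of the sliding-window weighting on a single ordered block (here the block is the nonnegative integers, climbing a step just means moving to the next integer, and the geometric target/proposal pair is invented for the example):

```python
import random

random.seed(0)
M = 3  # window length m

def p_pdf(x):  # target: geometric with success 0.5; E_P[x] = 1
    return 0.5 * 0.5 ** x if x >= 0 else 0.0

def q_pdf(x):  # proposal: geometric with success 0.3
    return 0.3 * 0.7 ** x if x >= 0 else 0.0

def sample_q():
    x = 0
    while random.random() < 0.7:
        x += 1
    return x

def sliding_window_estimate(f, n=30000):
    total = 0.0
    for _ in range(n):
        x1 = sample_q()
        for j in range(x1, x1 + M):  # successors x1 .. x1+m-1
            # denominator: Q-mass of every start point whose window would
            # contain x_j (the m "predecessor" positions; negatives have Q = 0)
            denom = sum(q_pdf(k) for k in range(j - M + 1, j + 1))
            total += f(j) * p_pdf(j) / denom
    return total / n

est = sliding_window_estimate(lambda x: x)
```

Summing the proposal mass over the whole window of possible start points is exactly what makes the estimator unbiased, as in the derivation above.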
Note that this derivation does not depend on the particular block partition nor on the particular discrete orderings, so long as they remain fixed. This means that, again, we can use partitions and orderings that explicitly depend on P and still obtain a correct procedure.

Greedy Importance Sampling 599

"Greedy" importance sampling (1-D)
- Draw x_1, ..., x_n independently from Q.
- For each x_i, let x_{i,1} = x_i:
  - Compute successors x_{i,1}, x_{i,2}, ..., x_{i,m} by taking m-1 size-ε steps in the direction of increasing |f(x)P(x)|.
  - Compute predecessors x_{i,-m+1}, ..., x_{i,-1}, x_{i,0} by taking m-1 size-ε steps in the direction of decrease.
  - If an improper ascent or descent occurs, truncate the paths.
  - Weight w(x_{i,j}) = P(x_{i,j}) / Σ_{k=j-m+1}^{j} Q(x_{i,k}).
- Create the final sample from the successor points x_{1,1}, ..., x_{1,m}, x_{2,1}, ..., x_{2,m}, ..., x_{n,1}, ..., x_{n,m}.
- For a random variable f, estimate E_{P(x)} f(x) by f̂ = (1/n) Σ_{i=1}^n Σ_{j=1}^m f(x_{i,j}) w(x_{i,j}).

Figure 3: "Greedy" importance sampling procedure; "colliding" and "merging" paths.

3 Greedy importance sampling: 1-dimensional case

Finally, we apply the sliding window procedure to conduct an explicit search for important regions in X. It is well known that the optimal proposal distribution for importance sampling is just Q*(x) = |f(x)P(x)| / Σ_{x∈X} |f(x)P(x)| (which minimizes variance [2]). Here we apply the sliding window procedure using an order structure that is determined by the objective |f(x)P(x)|. The hope is to obtain reduced variance by sampling independent blocks of points where each block (by virtue of being constructed via an explicit search) is likely to contain at least one or two high weight points. That is, by capturing a moderate size sample of independent high weight points we intuitively expect to outperform standard methods that are unlikely to observe such points by chance. Our experiments below verify this intuition (Figure 4).
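A rough sketch of the 1-D search-and-weight procedure (the target, proposal, step size ε, and window length m are all assumptions of this illustration, and the truncation rule is simplified; a faithful implementation must additionally allocate each critical point to exactly one search path):

```python
import math
import random

EPS, M = 0.05, 8  # step size and window length (assumed values)

def p_pdf(x):     # target P = N(0, 1)
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def q_pdf(x):     # proposal Q = N(0, 2)
    return math.exp(-x * x / 8.0) / (2.0 * math.sqrt(2.0 * math.pi))

f = lambda x: x * x           # estimate E_P[x^2]
obj = lambda x: abs(f(x) * p_pdf(x))

def greedy_block(x1):
    # Climb in the direction of increasing |f(x)P(x)|; stop at a critical
    # point, i.e. when the next step would go downhill (an improper ascent).
    step = EPS if obj(x1 + EPS) > obj(x1 - EPS) else -EPS
    xs = [x1]
    while len(xs) < M and obj(xs[-1] + step) > obj(xs[-1]):
        xs.append(xs[-1] + step)
    return xs, step

def greedy_estimate(n=2000):
    total = 0.0
    for _ in range(n):
        xs, step = greedy_block(random.gauss(0.0, 2.0))
        for xj in xs:
            # weight denominator: Q-mass of the m points that could reach
            # x_j by ascending (x_j itself plus its m-1 predecessors)
            denom = sum(q_pdf(xj - k * step) for k in range(M))
            total += f(xj) * p_pdf(xj) / denom
    return total / n
```

Each search path is one "block" of the sliding-window scheme, with the descent direction supplying the predecessors needed for the weights.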
The main technical issue is maintaining unbiasedness, which is easy to establish in the 1-dimensional case. In the simple 1-d setting, the "greedy" importance sampling procedure (Figure 3) first draws an initial point x_1 from Q and then follows the direction of increasing |f(x)P(x)|, taking fixed size-ε steps, until either m-1 steps have been taken or we encounter a critical point. A single "block" in our final sample is comprised of a complete sequence captured in one ascending search. To weight the sample points we account for all possible ways each point could appear in a subsample, which, as before, entails climbing down m-1 steps in the descent direction (to calculate the denominators). The unbiasedness of the procedure then follows directly from the previous section, since greedy importance sampling is equivalent to sliding window importance sampling in this setting. The only nontrivial issue is to maintain disjoint search paths. Note that a search path must terminate whenever it steps from a point x* to a point x** with lower value; this indicates that a collision has occurred, because some other path must reach x* from the "other side" of the critical point (Figure 3). At a collision, the largest ascent point x* must be allocated to a single path. A reasonable policy is to allocate x* to the path that has the lowest weight penultimate point (but the only critical issue is ensuring that it gets assigned to a single block). By ensuring that the critical point is included in only one of the two distinct search paths, a practical estimator can be obtained that exhibits no bias (Figure 4). To test the effectiveness of the greedy approach I conducted several 1-dimensional experiments which varied the relationship between P, Q and the random variable f (Figure 4). In
these experiments greedy importance sampling strongly outperformed standard methods, including regular importance sampling and directly sampling from the target distribution P (rejection sampling and Metropolis sampling were not competitive). The results not only verify the unbiasedness of the greedy procedure, but also show that it obtains significantly smaller variances across a wide range of conditions. Note that the greedy procedure actually uses m out of 2m-1 points sampled for each block and therefore effectively uses a double sample. However, Figure 4 shows that the greedy approach often obtains variance reductions that are far greater than 2 (which corresponds to a standard deviation reduction of √2).

4 Multi-dimensional case

Of course, this technique is worthwhile only if it can be applied to multi-dimensional problems. In principle, it is straightforward to apply the greedy procedure of Section 3 to multi-dimensional sample spaces. The only new issue is that discrete search paths can now possibly "merge" as well as "collide"; see Figure 3. (Recall that paths could not merge in the previous case.) Therefore, instead of decomposing the domain into a collection of disjoint search paths, the objective |f(x)P(x)| now decomposes the domain into a forest of disjoint search trees. However, the same principle could be used to devise an unbiased estimator in this case: one could assign a weight to a sample point x that is just its target probability P(x) divided by the total Q-probability of the subtree of points that lead to x in fewer than m steps. This weighting scheme can be shown to yield an unbiased estimator as before. However, the resulting procedure is impractical because in an N-dimensional sample space a search tree will typically have a branching factor of Ω(N), yielding exponentially large trees. Avoiding the need to exhaustively examine such trees is the critical issue in applying the greedy approach to multi-dimensional spaces.
The simplest conceivable strategy is just to ignore merge events. Surprisingly, this turns out to work reasonably well in many circumstances. Note that merges will be a measure zero event in many continuous domains. In such cases one could hope to ignore merges and trust that the probability of "double counting" such points would remain near zero. I conducted simple experiments with a version of the greedy importance sampling procedure that ignored merges. This procedure searched in the gradient ascent direction of the objective |f(x)P(x)| and heuristically inverted search steps by climbing in the gradient descent direction. Figures 5 and 6 show that, despite the heuristic nature of this procedure, it nevertheless demonstrates credible performance on simple tasks. The first experiment is a simple demonstration from [12, 10] where the task is to sample from a bivariate Gaussian distribution P of two highly correlated random variables using a "weak" proposal distribution Q that is standard normal (depicted by the elliptical and circular one standard deviation contours in Figure 5, respectively). Greedy importance sampling once again performs very well (Figure 5), achieving unbiased estimates with lower variance than standard Monte Carlo estimators, including common MCMC methods. To conduct a more significant study, I applied the heuristic greedy method to an inference problem in graphical models: recovering the hidden state sequence from a dynamic probabilistic model, given a sequence of observations. Here I considered a simple Kalman filter model which had one state variable and one observation variable per time-step, and used the conditional distributions x_t | x_{t-1} ~ N(x_{t-1}, σ_x²), z_t | x_t ~ N(x_t, σ_z²) and initial distribution x_1 ~ N(0, σ_x²). The problem was to infer the value of the final state variable x_t given the observations z_1, z_2, ..., z_t.
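To make this inference setup concrete, here is a small sketch (not the paper's estimator): plain importance sampling with the prior as proposal on a short linear-Gaussian chain, checked against the exact Kalman-filter posterior mean of the final state. All parameter values and the observation sequence are invented for the example.

```python
import math
import random

SX, SZ = 1.0, 0.5  # sigma_x, sigma_z (assumed values)

def npdf(x, mu, s):
    return math.exp(-(x - mu) ** 2 / (2.0 * s * s)) / (s * math.sqrt(2.0 * math.pi))

def kalman_mean(zs):
    # Exact posterior mean of the final state (ground truth here),
    # starting from x_1 ~ N(0, SX^2) and alternating predict/update.
    mu, var = 0.0, SX * SX
    for i, z in enumerate(zs):
        if i > 0:
            var += SX * SX            # predict: x_t | x_{t-1} ~ N(x_{t-1}, SX^2)
        k = var / (var + SZ * SZ)     # gain for z_t | x_t ~ N(x_t, SZ^2)
        mu, var = mu + k * (z - mu), (1.0 - k) * var
    return mu

def prior_is_mean(zs, n=20000):
    # Importance sampling with the prior as proposal: sample whole state
    # trajectories, weight each one by its observation likelihood.
    num = den = 0.0
    for _ in range(n):
        x, w = random.gauss(0.0, SX), 1.0
        for i, z in enumerate(zs):
            if i > 0:
                x = random.gauss(x, SX)
            w *= npdf(z, x, SZ)
        num += w * x                  # weighted final state
        den += w
    return num / den

random.seed(0)
zs = [0.5, 0.8, 1.0, 1.2, 0.9]        # made-up observation sequence
is_mean, kf_mean = prior_is_mean(zs), kalman_mean(zs)
```

The prior proposal already degrades over longer chains (weight degeneracy), which is exactly the regime where a search-based proposal such as the greedy scheme is intended to help.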
Figure 6 again demonstrates that the greedy approach has a strong advantage over standard importance sampling. (In fact, the greedy approach can be applied to "condensation" [6, 8] to obtain further improvements on this task, but space bounds preclude a detailed discussion.) Overall, these preliminary results show that despite the heuristic choices made in this section, the greedy strategy still performs well relative to common Monte Carlo estimators, both in terms of bias and variance (at least on some low and moderate dimension problems). However, the heuristic nature of this procedure makes it extremely unsatisfying. In fact, merge points can easily make up a significant fraction of finite domains. It turns out that a rigorously unbiased and feasible procedure can be obtained as follows. First, take greedy fixed size steps in axis-parallel directions (which ensures the steps can be inverted). Then, rather than exhaustively explore an entire predecessor tree to calculate the weights of a sample point, use the well known technique of Knuth [9] to sample a single path from the root and obtain an unbiased estimate of the total Q-probability of the tree. This allows one to formulate an asymptotically unbiased estimator that is nevertheless feasible to implement. It remains important future work to investigate this approach and compare it to other Monte Carlo estimation methods on high-dimensional problems, in particular hybrid Monte Carlo [11, 12]. The current results already suggest that the method could have benefits.

References
[1] P. Dagum and M. Luby. Approximating probabilistic inference in Bayesian belief networks is NP-hard. Artificial Intelligence, 60:141-153, 1993.
[2] M. Evans. Chaining via annealing. Annals of Statistics, 19:382-393, 1991.
[3] B. Frey. Graphical Models for Machine Learning and Digital Communication. MIT Press, Cambridge, MA, 1998.
[4] J. Geweke. Bayesian inference in econometric models using Monte Carlo integration.
Econometrica, 57:1317-1339, 1989.
[5] W. Gilks, S. Richardson, and D. Spiegelhalter. Markov Chain Monte Carlo in Practice. Chapman and Hall, 1996.
[6] M. Isard and A. Blake. Contour tracking by stochastic propagation of conditional density. In ECCV, 1996.
[7] M. Jordan, Z. Ghahramani, T. Jaakkola, and L. Saul. An introduction to variational methods for graphical models. In Learning in Graphical Models. Kluwer, 1998.
[8] K. Kanazawa, D. Koller, and S. Russell. Stochastic simulation algorithms for dynamic probabilistic networks. In UAI, 1995.
[9] D. Knuth. Estimating the efficiency of backtrack programs. Mathematics of Computation, 29(129):121-136, 1975.
[10] D. MacKay. Introduction to Monte Carlo methods. In Learning in Graphical Models. Kluwer, 1998.
[11] R. Neal. Probabilistic inference using Markov chain Monte Carlo methods. 1993.
[12] R. Neal. Bayesian Learning for Neural Networks. Springer, New York, 1996.
[13] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[14] R. Shachter and M. Peot. Simulation approaches to general probabilistic inference in belief networks. In Uncertainty in Artificial Intelligence 5. Elsevier, 1990.
[15] M. Tanner. Tools for Statistical Inference: Methods for the Exploration of Posterior Distributions and Likelihood Functions. Springer, New York, 1993.

Figure 4: 1-dimensional experiments: 1000 repetitions on estimation samples of size 100, for four problems with varying relationships between P, Q, f and |fP|.

            Problem 1                 Problem 2                 Problem 3                 Problem 4
         Direct Greedy Import      Direct Greedy Import      Direct Greedy Import      Direct Greedy Import
mean     0.779  0.781  0.777      1.038  1.044  1.032      0.258  0.208  0.209      6.024  6.028  6.033
bias     0.001  0.001  0.003      0.002  0.003  0.008      0.049  0.000  0.001      0.001  0.004  0.009
stdev    0.071  0.038  0.065      0.088  0.049  0.475      0.838  0.010  0.095      0.069  0.037  0.094
             mean     bias     stdev
Direct       0.1884   0.0022   0.07
Greedy       0.1937   0.0075   0.1374
Importance   0.1810   0.0052   0.1762
Rejection    0.1506   0.0356   0.2868
Gibbs        0.3609   0.1747   0.5464
Metropolis   8.3609   8.1747   22.1212

Figure 5: 2-dimensional experiments: 500 repetitions on estimation samples of size 200. Pictures depict: direct, greedy importance, regular importance, and Gibbs sampling, showing 1 standard deviation contours (dots are sample points, vertical lines are weights).

             mean     bias     stdev
Importance   5.2269   2.7731   1.2107
Greedy       6.9236   1.0764   0.1079

Figure 6: A 6-dimensional experiment: 500 repetitions on estimation samples of size 200. Estimating the value of x_t given the observations z_1, ..., z_t. Pictures depict paths sampled by regular versus greedy importance sampling.
1999
Recurrent cortical competition: Strengthen or weaken?

Peter Adorjan*, Lars Schwabe, Christian Piepenbrock*, and Klaus Obermayer
Dept. of Comp. Sci., FR2-1, Technical University Berlin, Franklinstrasse 28/29, 10587 Berlin, Germany
adorjan@epigenomics.com, {schwabe, oby}@cs.tu-berlin.de, piepenbrock@epigenomics.com
http://www.ni.cs.tu-berlin.de

Abstract

We investigate the short term dynamics of the recurrent competition and neural activity in the primary visual cortex in terms of information processing and in the context of orientation selectivity. We propose that after stimulus onset, the strength of the recurrent excitation decreases due to fast synaptic depression. As a consequence, the network shifts from an initially highly nonlinear to a more linear operating regime. Sharp orientation tuning is established in the first highly competitive phase. In the second and less competitive phase, precise signaling of multiple orientations and long range modulation, e.g., by intra- and inter-areal connections, becomes possible (surround effects). Thus the network first extracts the salient features from the stimulus, and then starts to process the details. We show that this signal processing strategy is optimal if the neurons have limited bandwidth and their objective is to transmit the maximum amount of information in any time interval beginning with the stimulus onset.

1 Introduction

In the last four decades there has been a vivid and highly polarized discussion about the role of recurrent competition in the primary visual cortex (V1) (see [12] for review). The main question is whether the recurrent excitation sharpens a weakly orientation tuned feed-forward input, or the feed-forward input is already sharply tuned, hence the massive recurrent circuitry has a different function.
Strong cortical recurrency implements a highly nonlinear mapping of the feed-forward input, and obtains a robust and sharply tuned cortical response even if only a weak or no feed-forward orientation bias is present [6, 11, 2]. However, such a competitive network in most cases fails to process multiple orientations within the classical receptive field and may signal spurious orientations [7]. This motivates the concept that the primary visual cortex maps an already sharply orientation tuned feed-forward input in a less competitive (more linear) fashion [9, 13]. Although these models for orientation selectivity in V1 vary on a wide scale, they have one common feature: each of them assumes that the synaptic strength is constant on the short time scale on which the network operates. Given the phenomenon of fast synaptic dynamics this, however, does not need to be the case. Short term synaptic dynamics, e.g., of the recurrent excitatory synapses, would allow a cortical network to operate in both regimes, competitive and linear. We will show below (Section 2) that such a dynamic cortical amplifier network can establish sharp contrast invariant orientation tuning from a broadly tuned feed-forward input, while it is still able to respond correctly to multiple orientations. We then show (Section 3) that decreasing the recurrent competition with time naturally follows from functional considerations, i.e. from the requirement that the mutual information between stimuli and representations is maximal for any time interval beginning with stimulus onset. We consider a free-viewing scenario, where the cortical layer represents a series of static images that are flashed onto the retina for a fixation period (ΔT = 200-300 ms) between saccades.

*Current address: Epigenomics GmbH, Kastanienallee 24, D-10435 Berlin, Germany
We also assume that the spike count in increasing time windows after stimulus onset carries the information. The key observations are that the signal-to-noise ratio of the cortical representation increases with time (because more spikes are available) and that the optimal strength of the recurrent connections (w.r.t. information transfer) decreases with the decreasing output noise. Consequently the model predicts that the information content per spike (or the SNR for a fixed sliding time window) decreases with time for a flashed static stimulus, in accordance with recent experimental studies. The neural system thus adapts to its own internal changes by modifying its coding strategy, a phenomenon which one may refer to as "dynamic coding".

2 Cortical amplifier with fast synaptic plasticity

To investigate our first hypothesis, we set up a model for an orientation-hypercolumn in the primary visual cortex with similar structure and parameters as in [7]. The important novel feature of our model is that fast synaptic depression is present at the recurrent excitatory connections. Neurons in the cortical layer receive orientation-tuned feed-forward input from the LGN and they are connected via a Mexican-hat shaped recurrent kernel in orientation space. In addition, the recurrent and feed-forward excitatory synapses exhibit fast depression due to the activity dependent depletion of the synaptic transmitter [1, 14]. We compare the response of the cortical amplifier models with and without fast synaptic plasticity at the recurrent excitatory connections to single and multiple bars within the classical receptive field. The membrane potential V(θ, t) of a cortical cell tuned to an orientation θ decreases due to the leakage and the recurrent inhibition, and increases due to the feed-forward input and the recurrent excitation,

τ ∂V(θ, t)/∂t = -V(θ, t) + I_LGN(θ, t) + I^exc(θ, t) - I^inh(θ, t),   (1)

where τ = 15 ms is the membrane time constant and I_LGN(θ, t) is the input received from the LGN. The recurrent excitatory and inhibitory cortical inputs are given by

I^α(θ, t) = Σ_{θ'} J^α(θ, θ', t) f(θ', t),   α ∈ {exc, inh},   (2)

where Δ(θ', θ) is a π-periodic circular difference between the preferred orientations, J^α(θ, θ', t) are the excitatory and inhibitory connection strengths (Gaussian functions of Δ(θ', θ) with amplitudes J^exc_max = 0.2 mV/Hz and J^inh_max = 0.8 mV/Hz), and f is the presynaptic firing rate. The excitatory synaptic efficacy J^exc is time dependent due to the fast synaptic depression, while the efficacy of inhibitory synapses J^inh is assumed to be constant. The recurrent excitation is sharply tuned (σ_exc = 7.5°), while the inhibition has broad tuning (σ_inh = 90°). The mapping from the membrane potential to firing rate is approximated by a linear function with a threshold at 0 (f(θ) = β max(0, V(θ)), β = 15 Hz/mV).

Figure 1: The feed-forward input (a), and the response of the cortical amplifier model with static recurrent synaptic strength (b), and of a network with fast synaptic depression (c), when the stimulus is a single bar at different stimulus contrasts (40% dotted; 60% dashed; 80% solid line). The cortical response is averaged over the first 100 ms after stimulus onset.

Gaussian noise with variances
The recurrent excitatory and inhibitory cortical inputs are given by r:"(O, t) (2) where ~ (Of, 0) is a 1T periodic circular difference between the preferred orientations, JCX(O, Of , t) are the excitato~ and inhibitory connection strengths (with a E {exc, inh}, J~x~x = 0.2 m V /Hz and J:::ax = 0.8m V /Hz), and f is the presynaptic firing rate. The excitatory synaptic efficacy r xc is time dependent due to the fast synaptic depression, while the efficacy of inhibitory synapses Jinh is assumed to be constant. The recurrent excitation is sharply tuned (j exc = 7.50 , while the inhibition has broad tuning (jinh = 90 0 • The mapping from the membrane potential to firing rate is approximated by a linear function with a threshold at 0 (f(O) = ,6max(O, V(O)),,6 = 15Hz/mV). Gaussian-noise with variances Recurrent Cortical Competition: Strengthen or Weaken? 91 Feedforward Input Static Depressing 1 15 15 ,....., N n N > X X '-' '-' E Q) Q) '-' rJJ rJJ .... = , = ::l ,--,., --0 , 0 , go.. j! 0.. rJJ rJJ Q) Q) ~ \ ~ ~90 -45 0 45 90 ~90 -45 0 45 90 ~90 -45 0 45 90 Orientation [deg] Orientation [deg] Orientation [deg] (a) (b) (c) Figure 1: The feed-forward input (a), and the response ofthe cortical amplifier model with static recurrent synaptic strength (b), and a network with fast synaptic depression (c) if the stimulus is single bar with different stimulus contrasts (40%dotted; 60%dashed; 80%solid line). The cortical response is averaged over the first 100 illS after stimulus onset. of 6 Hz and 1.6 Hz is added to the input intensities and to the output of cortical neurons. The orientation tuning curves of the feed-forward input ILGN are Gaussians (O'"LGN = 18°) resting on a strong additive orientation independent component which would correspond to a geniculo-cortical connectivity pattern with an approximate aspect ratio of 1 :2. Both, the orientation dependent and independent components increase with contrast. 
Considering a free-viewing scenario, where the environment is scanned by saccading around and fixating for short periods of 200-300 ms, we model stationary stimuli present for 300 ms. The stimuli are one or more bars with different orientations. Feed-forward and recurrent excitatory synapses exhibit fast depression. Fast synaptic depression is modeled by the dynamics of the expected synaptic transmitter or "resource" R(t) for each synapse. The amount of the available transmitter decreases proportionally to the release probability p and to the presynaptic firing rate f, and it recovers exponentially (τ_rec^LGN = 120 ms, τ_rec^ctx = 850 ms, p^LGN = 0.35 and p^ctx = 0.55),

dR(t)/dt = (1 - R(t))/τ_rec - f(t) p(t) R(t),   (3)

with an effective relaxation time τ_eff(f, p) = τ_rec/(1 + f p τ_rec). The change of the membrane potential on the postsynaptic cell at time t is proportional to the released transmitter pR(t). The excitatory connectivity strength between neurons tuned to orientations θ and θ' is expressed as J^exc(θ, θ', t) = J^exc_max p R_{θθ'}(t). Similarly this applies to the feed-forward synapses. Fast synaptic plasticity at the feed-forward synapses has been investigated in more detail in previous studies [3, 4]. In the following, we compare the predictions of the cortical amplifier model with and without fast synaptic depression at the recurrent excitatory connections. In both cases fast synaptic depression is present at the feed-forward connections, limiting the duration of the effective feed-forward input to 200-400 ms. Figure 1 shows the orientation tuning curves at different stimulus contrasts. The feed-forward input is noisy and broadly tuned (Fig. 1a). Both models exhibit contrast invariant tuning (Fig. 1b, c). If fast synaptic depression is present at the recurrent excitation, the cortical network sharpens the broadly tuned feed-forward input in the initial response phase. Once sharply tuned input is established, the tuning width does not change, only the response amplitude decreases in time.
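Equation (3) is easy to integrate. A short sketch showing how a sustained presynaptic rate depresses the resource toward R = 1/(1 + f p τ_rec), using the recurrent parameter values quoted above (the simulation length and time step are assumptions of this illustration):

```python
TAU_REC, P_REL = 850.0, 0.55   # ms, release probability (recurrent values)

def depressed_resource(f_hz, t_max=500.0, dt=0.1):
    # Euler-integrate dR/dt = (1 - R)/tau_rec - f * p * R
    # for a constant presynaptic rate step, starting from a full resource.
    R, f = 1.0, f_hz / 1000.0  # rate converted to spikes per ms
    for _ in range(int(t_max / dt)):
        R += dt * ((1.0 - R) / TAU_REC - f * P_REL * R)
    return R

# Steady state is R = 1 / (1 + f * p * tau_rec): for a 40 Hz input this is
# about 0.05, reached with the fast effective time constant
# tau_eff = tau_rec / (1 + f * p * tau_rec), roughly 43 ms here.
low, high = depressed_resource(5.0), depressed_resource(40.0)
```

The strong rate dependence of both the steady state and τ_eff is what lets highly active cells depress their own synapses quickly while weakly driven cells keep most of their efficacy.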
The predictions of the two models differ substantially if multiple orientations are present (Fig. 2). At first, we test the cortical response to two bars separated by 60° with different intensities (Figs. 2a, b). If the recurrent synaptic weights are static and strong enough (Fig. 2a), then only one orientation is signaled: the cortical network selects the orientation with the highest amplitude in a winner-take-all fashion.

Figure 2: The response of the cortical amplifier model with static (a, c) and fast depressing recurrent synapses (b, d). In both models the feed-forward synapses are fast depressing. In the left column the feed-forward input is shown, which is the same for both models. Two types of stimuli were applied. The first stimulus consists of a stronger (θ = -30°) and a weaker bar (θ = +30°) (a, b); the second stimulus consists of three equal intensity bars with orientations that are separated by 60° (c, d). In the middle column the cortical response is shown, averaged over different time windows ([0..30] dotted; [0..80] dashed; [200..300] solid line). In the right column the cortical activity profile is plotted as a function of time, with bright gray values denoting high activities.

In contrast, if synaptic depression is present at the recurrent excitatory synapses, both bars are signaled in parallel (at low release probability, Fig.
2b) or after each other (at high release probability; data not shown). First, those cells fire which are tuned to the orientation of the bar with the stronger intensity, and a sharply tuned response emerges at a single orientation: the network operates in a winner-take-all regime. The synapses of these highly active cells then become strongly depressed and cortical competition decreases. As the network is shifted to a more linear operating regime, the second orientation is signaled too. Note that this phenomenon, together with the observed contrast invariant tuning, cannot be reproduced by simply decreasing the static synaptic weights in the cortical amplifier model. The recurrent synaptic efficacy changes inhomogeneously in the network depending on the activity. Only the synapses of the highly active cells depress strongly, and therefore a sharply tuned response can be evoked by a bar with weak intensity. Fast synaptic depression thus behaves as a local self-regulation that modulates competition with a certain delay. This delay, and therefore the delay of the rise of the response to the second bar, depends on the effective time constant τ_eff(f(t), p) = τ_rec/(1 + p f(t) τ_rec) of the synaptic depression at the recurrent connections. If the depression becomes faster due to an increase in the release probability p, then the delay decreases. The delay also scales with the difference between the bar intensities: the closer to each other they are, the shorter the delay will be. In Figs. 2c, d the cortical response to three bars with equal intensities is presented. Cells tuned to the three presented orientations respond in parallel if fast synaptic depression at the recurrent excitation is present (Fig. 2d). The cortical network with strong static recurrent synapses again fails to signal its feed-forward input faithfully. Additive noise on the
feed-forward input introduces a slight symmetry breaking, and the network with static recurrent weights responds strongly at the orientation of only one of the presented bars (Fig. 2c). In summary, our simulations revealed that a recurrent network with fast synaptic depression is capable of robustly sharpening its feed-forward input while still responding correctly to multiple orientations. Note that other local activity dependent adaptation mechanisms, such as a slow potassium current, would have effects similar to those of synaptic depression on the highly orientation specific excitatory connections. An experimentally testable prediction of our model is that the response to a flashed bar with lower contrast can be delayed by masking it with a second bar with higher contrast (Fig. 2b, right). We also suggest that long range integration from outside of the classical receptive field could emerge with a similar delay. In the initial phase of the cortical response, strong local features are amplified. In the longer, second phase, recurrent competition decreases and then weak modulatory recurrent or feed-forward input has a stronger relative effect. In the following, we investigate whether this strategy is favorable from the point of view of cortical encoding.

3 Dynamic coding

In the previous section we have proposed that during cortical processing a highly nonlinear phase is followed by a more linear mode, if we consider a short stimulus presentation or a fixation period. The simulations demonstrated that unless the recurrent competition is modulated in time, the network fails to account for more than one feature in its input. From a strictly functional point of view the question arises why not to use weak recurrent competition during the whole processing period.
We investigate this problem in an abstract signal-encoder framework,

y = g(x) + η,   (4)

where x is the input to the "cortical network", g(x) is a nonlinear mapping and, for the sake of simplicity, η is additive Gaussian noise. Naturally, in a real recurrent network output noise becomes input noise because of the feedback. Here we use the simplifying assumption that only output noise is present on the transformed input signal (input noise would lead to different predictions that should be further investigated). Output noise can be interpreted as a noisy channel that projects out from, e.g., the primary visual cortex. The nonlinear transformation g(x) here is considered as a functional description of a cortical amplifier network, without analyzing how it is actually "implemented". Considering orientation selectivity, the signal x can be interpreted as a vector of intensities (or contrasts) of edges with different orientations. Edges which are not present have zero intensity. The coding capacity of a realistic neural network is limited. Among several other noise sources, this limitation could arise from imprecision in spike timing and a constraint on the maximal or average firing rate. The input-output mapping g(x) of a cortical amplifier network is approximated with the soft-max function (5). The β parameter can be interpreted as the level of recurrent competition. As β → 0 the network operates in a more linear mode, while β → ∞ puts it into a highly nonlinear winner-take-all mode. In all cases the average activity in the network is constrained, which has been suggested to minimize metabolic costs [5]. Let us consider a factorizing input distribution,

p(x) = (1/Z) Π_i exp(-x_i^a / γ)  for x_i ≥ 0,   (6)
Figure 3: The optimal competition parameter β as a function of the standard deviation of the Gaussian output noise η. The optimal β is calculated for highly super-Gaussian, Gaussian, and sub-Gaussian stimulus densities. The sparsity parameter a is indicated in the legend.

where the exponent a determines the sparsity of the probability density function, Z is a normalizing constant, and γ determines the variance. If a = 2, the input density is the positive half of a multivariate Gaussian distribution. With a > 2 the signal distribution becomes sub-Gaussian, and with a < 2 it becomes super-Gaussian. For optimal processing in time one needs to gain the maximal information about the signal for any increasing time window. Let us assume that the stimulus is static and is presented for a limited time. As time goes on after stimulus onset, the time window for the encoding and the read-out mechanism increases. During a longer period more samples of the noisy network output are available, and thus the output noise level decreases with time. We suggest that the optimal competition parameter β_opt, at which the mutual information between input x and output y (Eq. 4) is maximized, depends on the noise level. As the noise decreases with time, β, i.e. the recurrent cortical competition, should also change during cortical processing. To demonstrate this idea, the mutual information is calculated numerically for a three-dimensional state space. One might expect that at higher noise levels the highest information transfer can be obtained if the typical and salient features are strongly amplified. Note that this is only true if the standard deviation of the noise scales sub-linearly with activity, which is true for an additive noise process as well as for Poisson firing.
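The effect of the competition parameter β can be illustrated with a generic soft-max encoder. This is only a sketch: the exact form of the paper's equation (5) is not legible in this scan, so the standard soft-max with a fixed total-activity budget is used here as a stand-in, and the edge intensities are made up.

```python
import math

def softmax_encoder(x, beta, total_activity=1.0):
    # beta -> 0: activity is shared almost evenly (a near-linear regime);
    # beta -> infinity: winner-take-all. Total activity is held fixed,
    # mimicking the average-activity (metabolic cost) constraint.
    exps = [math.exp(beta * xi) for xi in x]
    z = sum(exps)
    return [total_activity * e / z for e in exps]

edges = [1.0, 0.8, 0.1]                  # made-up edge intensities
wta = softmax_encoder(edges, beta=20.0)  # competitive: dominant edge only
lin = softmax_encoder(edges, beta=0.1)   # weak competition: details kept
```

At high β nearly the whole activity budget is spent on the strongest feature (robust under heavy noise), while at low β the budget is spread across features, preserving the finer structure that a low-noise readout can exploit.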
As noise decreases (e.g., with an increasing time window for estimation), the level of competition should decrease, distributing the available resources (e.g., spikes) among more units and letting the network respond to finer details of the input. Investigating the optimal level of competition β as a function of the standard deviation of the output noise (Fig. 3) confirms this intuition: the optimal β scales with the standard deviation of the additive noise process. Comparing signal distributions with the same variance but with different sparsity exponents a, we find that the sparser the signal distribution is, the higher the optimal competition becomes, because multiple features are unlikely to be present at the same time if the input distribution is sparse. By enforcing competition, the optimal encoding strategy also generates an activity distribution where only few units fire for a presented stimulus. Since edges with different orientations form a sparse distributed representation of natural scenes [8], our work suggests that a strongly competitive visual cortical network could achieve a better performance on our visual environment than a simple linear network would. We can now interpret the simulation results presented in Section 2 from a functional point of view and give a prediction for the dynamics of the recurrent cortical competition. Noting that the output noise decreases with an increasing time window for encoding, the cortical competition should also decrease, following a trajectory similar to that in Fig. 3. If competition is low and static, then the cumulative mutual information between input and output converges only slowly towards the overall information that is available in the stimulus. If the competition is high during the whole observation period, then after a fast rise the cumulative mutual information saturates well below the possible maximum.
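The premise that output noise falls as the encoding window grows can be illustrated with a toy simulation (illustrative numbers only, not from the paper): the standard deviation of the average of T independent noisy readings of a static signal shrinks roughly as σ/√T.

```python
import random
import statistics

random.seed(0)
signal, sigma = 1.0, 0.3  # illustrative static stimulus and noise level

def averaged_output(T):
    """Mean of T noisy observations of a static signal."""
    return sum(signal + random.gauss(0.0, sigma) for _ in range(T)) / T

def empirical_std(T, trials=2000):
    """Empirical std of the T-sample average over many trials."""
    vals = [averaged_output(T) for _ in range(trials)]
    return statistics.pstdev(vals)

for T in (1, 4, 16):
    print(T, empirical_std(T))  # shrinks roughly as sigma / sqrt(T)
```

This is the sense in which "the output noise level decreases with time": a longer read-out window averages over more samples of the noisy channel.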
If the level of competition is dynamic, and it decreases from an initially highly competitive state, then the network obtains maximal information transfer in time. One may argue that the valuable information about the signals mainly depends on the interest of the observer. Considering an encoding system for one variable, it has been suggested that in a highly attentive state the recurrent competition increases [10]. In the view of our results we would refine this statement by suggesting that competition increases or decreases depending on the level of visual detail the observer pays attention to. Whenever representation of small details is also required, reducing competition is the optimal strategy given enough bandwidth. In summary, using a detailed model for an orientation hypercolumn in V1 we have demonstrated that sharp contrast-invariant tuning and faithful representation of multiple features can be achieved by a recurrent network if the recurrent competition decreases in time after stimulus onset. The model predicts that the cortical response to weak details in the stimulus emerges with a delay if a second, stronger feature is also present. The modulation from, e.g., outside of the classical receptive field also has a delayed effect on cortical activity. Our study within an abstract framework revealed that weakening the recurrent cortical competition on a fast time scale is functionally advantageous, because a maximal amount of information can be transmitted in any time window after stimulus onset.

Acknowledgments

Supported by the Boehringer Ingelheim Fonds (C. P.), by the German Science Foundation (DFG grant GK 120-2) and by Wellcome Trust 0500801ZJ97.

References

[1] L. F. Abbott, J. A. Varela, K. Sen, and S. B. Nelson. Synaptic depression and cortical gain control. Science, 275:220-224, 1997.
[2] P. Adorján, J. B. Levitt, J. S. Lund, and K. Obermayer. A model for the intracortical origin of orientation preference and tuning in macaque striate cortex. Vis.
Neurosci., 16:303-318, 1999.
[3] P. Adorján, C. Piepenbrock, and K. Obermayer. Contrast adaptation and infomax in visual cortical neurons. Rev. Neurosci., 10:181-200, 1999. ftp://ftp.cs.tu-berlin.de/pub/local/ni/papers/adp99-contrast.ps.gz.
[4] O. B. Artun, H. Z. Shouval, and L. N. Cooper. The effect of dynamic synapses on spatiotemporal receptive fields in visual cortex. Proc. Natl. Acad. Sci., 95:11999-12003, 1998.
[5] R. Baddeley. An efficient code in V1? Nature, 381:560-561, 1996.
[6] R. Ben-Yishai, R. Lev Bar-Or, and H. Sompolinsky. Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci., 92:3844-3848, 1995.
[7] M. Carandini and D. L. Ringach. Predictions of a recurrent model of orientation selectivity. Vision Res., 37:3061-3071, 1997.
[8] D. J. Field. What is the goal of sensory coding? Neural Comput., 6:559-601, 1994.
[9] D. H. Hubel and T. N. Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiol., 165:559-568, 1962.
[10] D. K. Lee, L. Itti, C. Koch, and J. Braun. Attention activates winner-take-all competition among visual filters. Nat. Neurosci., 2:375-381, 1999.
[11] D. C. Somers, S. B. Nelson, and M. Sur. An emergent model of orientation selectivity in cat visual cortical simple cells. J. Neurosci., 15:5448-65, 1995.
[12] H. Sompolinsky and R. Shapley. New perspectives on the mechanisms for orientation selectivity. Curr. Op. in Neurobiol., 7:514-522, 1997.
[13] T. W. Troyer, A. E. Krukowski, N. J. Priebe, and K. D. Miller. Contrast-invariant orientation tuning in visual cortex: Feedforward tuning and correlation-based intracortical connectivity. J. Neurosci., 18:5908-5927, 1998.
[14] M. V. Tsodyks and H. Markram. The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc. Natl. Acad. Sci., 94:719-723, 1997.
Constrained Hidden Markov Models Sam Roweis roweis@gatsby.ucl.ac.uk Gatsby Unit, University College London Abstract By thinking of each state in a hidden Markov model as corresponding to some spatial region of a fictitious topology space it is possible to naturally define neighbouring states as those which are connected in that space. The transition matrix can then be constrained to allow transitions only between neighbours; this means that all valid state sequences correspond to connected paths in the topology space. I show how such constrained HMMs can learn to discover underlying structure in complex sequences of high dimensional data, and apply them to the problem of recovering mouth movements from acoustics in continuous speech. 1 Latent variable models for structured sequence data Structured time-series are generated by systems whose underlying state variables change in a continuous way but whose state to output mappings are highly nonlinear, many to one and not smooth. Probabilistic unsupervised learning for such sequences requires models with two essential features: latent (hidden) variables and topology in those variables. Hidden Markov models (HMMs) can be thought of as dynamic generalizations of discrete state static data models such as Gaussian mixtures, or as discrete state versions of linear dynamical systems (LDSs) (which are themselves dynamic generalizations of continuous latent variable models such as factor analysis). While both HMMs and LDSs provide probabilistic latent variable models for time-series, both have important limitations. Traditional HMMs have a very powerful model of the relationship between the underlying state and the associated observations because each state stores a private distribution over the output variables. This means that any change in the hidden state can cause arbitrarily complex changes in the output distribution. 
However, it is extremely difficult to capture reasonable dynamics on the discrete latent variable because in principle any state is reachable from any other state at any time step and the next state depends only on the current state. LDSs, on the other hand, have an extremely impoverished representation of the outputs as a function of the latent variables, since this transformation is restricted to be global and linear. But it is somewhat easier to capture state dynamics, since the state is a multidimensional vector of continuous variables on which a matrix "flow" is acting; this enforces some continuity of the latent variables across time. Constrained hidden Markov models address the modeling of state dynamics by building some topology into the hidden state representation. The essential idea is to constrain the transition parameters of a conventional HMM so that the discrete-valued hidden state evolves in a structured way.1 In particular, below I consider parameter restrictions which constrain the state to evolve as a discretized version of a continuous multivariate variable, i.e. so that it inscribes only connected paths in some space. This lends a physical interpretation to the discrete state trajectories in an HMM. 1 A standard trick in traditional speech applications of HMMs is to use "left-to-right" transition matrices, which are a special case of the type of constraints investigated in this paper. However, left-to-right (Bakis) HMMs force state trajectories that are inherently one-dimensional and uni-directional, whereas here I also consider higher dimensional topology and free omni-directional motion. 2 An illustrative game Consider playing the following game: divide a sheet of paper into several contiguous, nonoverlapping regions which between them cover it entirely. In each region inscribe a symbol, allowing symbols to be repeated in different regions.
Place a pencil on the sheet and move it around, reading out (in order) the symbols in the regions through which it passes. Add some noise to the observation process so that some fraction of the time incorrect symbols are reported in the list instead of the correct ones. The game is to reconstruct the configuration of regions on the sheet from only such an ordered list(s) of noisy symbols. Of course, the absolute scale, rotation and reflection of the sheet can never be recovered, but learning the essential topology may be possible.2 Figure 1 illustrates this setup. Figure 1: (left) True map which generates symbol sequences by random movement between connected cells. (centre) An example noisy output sequence with noisy symbols circled. (right) Learned map after training on 3 sequences (with 15% noise probability) each 200 symbols long. Each cell actually contains an entire distribution over all observed symbols, though in this case only the upper right cell has significant probability mass on more than one symbol (see figure 3 for display details). Without noise or repeated symbols, the game is easy (non-probabilistic methods can solve it) but in their presence it is not. One way of mitigating the noise problem is to do statistical averaging. For example, one could attempt to use the average separation in time of each pair of symbols to define a dissimilarity between them. It would then be possible to use methods like multi-dimensional scaling or a sort of Kohonen mapping through time3 to explicitly construct a configuration of points obeying those distance relations. However, such methods still cannot deal with many-to-one state to output mappings (repeated numbers in the sheet) because by their nature they assign a unique spatial location to each symbol.
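The game's generative process can be sketched directly. The grid below is a toy stand-in, not the exact map of Figure 1, and a 4-connected random walk with uniform substitution noise is assumed:

```python
import random

def generate_game_sequence(grid, steps, noise=0.15, seed=0):
    """Random walk between 4-connected cells of `grid` (a list of rows of
    symbols), reading out each visited cell's symbol; with probability
    `noise` a uniformly random incorrect symbol is reported instead."""
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    symbols = sorted({s for row in grid for s in row})
    r, c = rng.randrange(rows), rng.randrange(cols)
    out = []
    for _ in range(steps):
        s = grid[r][c]
        if rng.random() < noise:  # observation noise: substitute a wrong symbol
            s = rng.choice([x for x in symbols if x != grid[r][c]])
        out.append(s)
        moves = [(r + dr, c + dc)
                 for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0))
                 if 0 <= r + dr < rows and 0 <= c + dc < cols]
        r, c = rng.choice(moves)
    return out

# Hypothetical 3x3 map with symbols borrowed from Figure 1's example.
grid = [[1, 11, 24],
        [18, 19, 10],
        [16, 15, 2]]
seq = generate_game_sequence(grid, steps=200, noise=0.15)
print(seq[:10])
```

Such sequences are exactly what a constrained HMM with neighbour-only transitions and near-deterministic output distributions would generate, which is the key insight noted below.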
Playing this game is analogous to doing unsupervised learning on structured sequences. (The game can also be played with continuous outputs, although often high-dimensional data can be effectively clustered around a manageable number of prototypes; thus a vector time-series can be converted into a sequence of symbols.) Constrained HMMs incorporate latent variables with topology yet retain powerful nonlinear output mappings and can deal with the difficulties of noise and many-to-one mappings mentioned above; so they can "win" our game (see figs. 1 & 3). The key insight is that the game generates sequences exactly according to a hidden Markov process whose transition matrix allows only transitions between neighbouring cells and whose output distributions have most of their probability on a single symbol with a small amount on all other symbols to account for noise. 2The observed symbol sequence must be "informative enough" to reveal the map structure (this can be quantified using the idea of persistent excitation from control theory). 3Consider a network of units which compete to explain input data points. Each unit has a position in the output space as well as a position in a lower dimensional topology space. The winning unit has its position in output space updated towards the data point; but also the recent (in time) winners have their positions in topology space updated towards the topology space location of the current winner. Such a rule works well, and yields topological maps in which nearby units code for data that typically occur close together in time. However it cannot learn many-to-one maps in which more than one unit at different topology locations have the same (or very similar) outputs. 3 Model definition: state topologies from cell packings Defining a constrained HMM involves identifying each state of the underlying (hidden) Markov chain with a spatial cell in a fictitious topology space.
This requires selecting a dimensionality d for the topology space and choosing a packing (such as hexagonal or cubic) which fills the space. The number of cells in the packing is equal to the number of states M in the original Markov model. Cells are taken to be all of equal size and (since the scale of the topology space is completely arbitrary) of unit volume. Thus, the packing covers a volume M in topology space with a side length l of roughly l = M^(1/d). The dimensionality and packing together define a vector-valued function x(m), m = 1 ... M which gives the location of cell m in the packing. (For example, a cubic packing of d dimensional space defines x(m+1) to be [m, m/l, m/l^2, ..., m/l^(d-1)] mod l, with integer division.) State m in the Markov model is assigned to cell m in the packing, thus giving it a location x(m) in the topology space. Finally, we must choose a neighbourhood rule in the topology space which defines the neighbours of cell m; for example, all "connected" cells, all face neighbours, or all those within a certain radius. (For cubic packings, there are 3^d - 1 connected neighbours and 2d face neighbours in a d dimensional topology space.) The neighbourhood rule also defines the boundary conditions of the space - e.g. periodic boundary conditions would make cells on opposite extreme faces of the space neighbours with each other. The transition matrix of the HMM is now preprogrammed to only allow transitions between neighbours. All other transition probabilities are set to zero, making the transition matrix very sparse. (I have set all permitted transitions to be equally likely.) Now, all valid state sequences in the underlying Markov model represent connected ("city block") paths through the topology space. Figure 2 illustrates this for a three-dimensional model.
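A minimal sketch of this construction for a cubic packing with face neighbours (non-periodic boundaries assumed; self-transitions are excluded here, which is one possible reading of "transitions only between neighbours"):

```python
def cell_position(m, d, l):
    """Location x(m) of cell m in a cubic packing of a d-dimensional
    topology space with side length l (here m = 0 .. l**d - 1)."""
    return [(m // l**k) % l for k in range(d)]

def face_neighbours(m, d, l):
    """Indices of the (up to 2d) face neighbours of cell m."""
    pos = cell_position(m, d, l)
    nbrs = []
    for k in range(d):
        for step in (-1, 1):
            if 0 <= pos[k] + step < l:  # non-periodic boundary
                nbrs.append(m + step * l**k)
    return nbrs

def constrained_transitions(d, l):
    """Sparse transition matrix allowing only moves to face neighbours,
    all permitted transitions equally likely; kept fixed during learning."""
    M = l**d
    A = [[0.0] * M for _ in range(M)]
    for m in range(M):
        nbrs = face_neighbours(m, d, l)
        for n in nbrs:
            A[m][n] = 1.0 / len(nbrs)
    return A

A = constrained_transitions(d=2, l=4)  # 16-state model on a 4x4 sheet
print(sum(v > 0 for v in A[0]))  # corner cell: 2 permitted transitions
```

Each row of A has at most 2d nonzero entries, so valid state sequences are exactly the connected "city block" paths described above.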
Figure 2: (left) Physical depiction of the topology space for a constrained HMM with d=3, l=4 and M=64 showing an example state trajectory. (right) Corresponding transition matrix structure for the 64-state HMM computed using face-centred cubic packing. The gaps in the inner bands are due to edge effects. 4 State inference and learning The constrained HMM has exactly the same inference procedures as a regular HMM: the forward-backward algorithm for computing state occupation probabilities and the Viterbi decoder for finding the single best state sequence. Once these discrete state inferences have been performed, they can be transformed using the state position function x(m) to yield probability distributions over the topology space (in the case of forward-backward) or paths through the topology space (in the case of Viterbi decoding). This transformation makes the outputs of state decodings in constrained HMMs comparable to the outputs of inference procedures for continuous state dynamical systems such as Kalman smoothing. The learning procedure for constrained HMMs is also almost identical to that for HMMs. In particular, the EM algorithm (Baum-Welch) is used to update model parameters. The crucial difference is that the transition probabilities which are precomputed by the topology and packing are never updated during learning. In fact, this makes learning much easier in some cases. Not only do the transition probabilities not have to be learned, but their structure constrains the hidden state sequences in such a way as to make the learning of the output parameters much more efficient when the underlying data really does come from a spatially structured generative model. Figure 3 shows an example of parameter learning for the game discussed above. Notice that in this case, each part of state space had only a single output (except for noise) so the final learned output distributions became essentially minimum entropy.
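Viterbi decoding needs no modification for constrained HMMs; the sparse transition matrix simply forbids non-neighbour moves. A log-space sketch (generic HMM code, demonstrated on a tiny two-state example rather than a real packing):

```python
import math

def viterbi(obs, A, B, pi):
    """Most probable state sequence under an HMM with transition matrix A
    (for a constrained HMM, the fixed sparse neighbour matrix), output
    distributions B[state][symbol], and initial distribution pi.
    Log-space keeps long sequences from underflowing; zero transitions
    become -inf and so stay forbidden."""
    M = len(A)
    def lg(p):
        return math.log(p) if p > 0 else float('-inf')
    delta = [lg(pi[s]) + lg(B[s][obs[0]]) for s in range(M)]
    back = []
    for o in obs[1:]:
        prev = delta
        delta, ptr = [], []
        for s in range(M):
            best = max(range(M), key=lambda r: prev[r] + lg(A[r][s]))
            delta.append(prev[best] + lg(A[best][s]) + lg(B[s][o]))
            ptr.append(best)
        back.append(ptr)
    path = [max(range(M), key=lambda s: delta[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Two states that are neighbours of each other, near-deterministic outputs.
A = [[0.5, 0.5], [0.5, 0.5]]
B = [[0.9, 0.1], [0.1, 0.9]]
pi = [0.5, 0.5]
print(viterbi([0, 0, 1, 1], A, B, pi))  # -> [0, 0, 1, 1]
```

Mapping each decoded state m_t through x(m) then gives a path in the topology space, as described above.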
But constrained HMMs can in principle model stochastic or multimodal output processes since each state stores an entire private distribution over outputs. Figure 3: Snapshots of model parameters during constrained HMM learning for the game described in section 2. At every iteration each cell in the map has a complete distribution over all of the observed symbols. Only the top three symbols of each cell's histogram are shown, with font size proportional to the square root of probability (to make ink roughly proportional). The map was trained on 3 noisy sequences each 200 symbols long generated from the map on the left of figure 1 using 15% noise probability. The final map after convergence (30 iterations) is shown on the right of figure 1. 5 Recovery of mouth movements from speech audio I have applied the constrained HMM approach described above to the problem of recovering mouth movements from the acoustic waveform in human speech. Data containing simultaneous audio and articulator movement information was obtained from the University of Wisconsin X-ray microbeam database [9]. Eight separate points (four on the tongue, one on each lip and two on the jaw) located in the midsagittal plane of the speaker's head were tracked while subjects read various words, sentences, paragraphs and lists of numbers. The x and y coordinates (to within about ±1 mm) of each point were sampled at 146 Hz by an X-ray system which located gold beads attached to the feature points on the mouth, producing a 16-dimensional vector every 6.9 ms. The audio was sampled at 22 kHz with roughly 14 bits of amplitude resolution but in the presence of machine noise. These data are well suited to the constrained HMM architecture. They come from a system whose state variables are known, because of physical constraints, to move in connected paths in a low degree-of-freedom space.
In other words the (normally hidden) articulators (movable structures of the mouth), whose positions represent the underlying state of the speech production system,4 move slowly and smoothly. The observed speech signal, the system's output, can be characterized by a sequence of short-time spectral feature vectors, often known as a spectrogram. In the experiments reported here, I have characterized the audio signal using 12 line spectral frequencies (LSFs) measured every 6.9 ms (to coincide with the articulatory sampling rate) over a 25 ms window. These LSF vectors characterize only the spectral shape of the speech waveform over a short time but not its energy. Average energy (also over a 25 ms window every 6.9 ms) was measured as a separate one dimensional signal. Unlike the movements of the articulators, the audio spectrum/energy can exhibit quite abrupt changes, indicating that the mapping between articulator positions and spectral shape is not smooth. Furthermore, the mapping is many to one: different articulator configurations can produce very similar spectra (see below). The unsupervised learning task, then, is to explain the complicated sequences of observed spectral features (LSFs) and energies as the outputs of a system with a low-dimensional state vector that changes slowly and smoothly. In other words, can we learn the parameters5 of a constrained HMM such that connected paths through the topology space (state space) generate the acoustic training data with high likelihood? Once this unsupervised learning task has been performed, we can (as I show below) relate the learned trajectories in the topology space to the true (measured) articulator movements. 4 Articulator positions do not provide complete state information. For example, the excitation signal (voiced or unvoiced) is not captured by the bead locations. They do, however, provide much important information; other state information is easily accessible directly from acoustics.
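The framing arithmetic implied by these numbers (a 25 ms analysis window every 6.9 ms on 22 kHz audio, with 6.9 ms ≈ 1/146 Hz) can be made concrete. The rounding to integer sample counts is an assumption, not taken from the paper:

```python
def frame_starts(n_samples, sr=22050, hop_ms=6.9, win_ms=25.0):
    """Start indices of short-time analysis windows: a win_ms window is
    taken every hop_ms, matching the 146 Hz articulatory sampling rate."""
    hop = int(round(sr * hop_ms / 1000.0))  # 6.9 ms -> 152 samples at 22050 Hz
    win = int(round(sr * win_ms / 1000.0))  # 25 ms -> 551 samples
    starts = [s for s in range(0, n_samples - win + 1, hop)]
    return starts, win

starts, win = frame_starts(22050)  # one second of audio
print(len(starts), win)
```

Each window would then yield one 12-dimensional LSF vector plus one energy value, giving the feature sequence the constrained HMM is trained on.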
5 Model structure (dimensionality and number of states) is currently set using cross validation. While many models of the speech production process predict the many-to-one and non-smooth properties of the articulatory to acoustic mapping, it is useful to confirm these features by looking at real data. Figure 4 shows the experimentally observed distribution of articulator configurations used to produce similar sounds. It was computed as follows. All the acoustic and articulatory data for a single speaker are collected together. Starting with some sample called the key sample, I find the 1000 samples "nearest" to this key by two measures: articulatory distance, defined using the Mahalanobis norm between two position vectors under the global covariance of all positions for the appropriate speaker, and spectral shape distance, again defined using the Mahalanobis norm but now between two line spectral frequency vectors using the global LSF covariance of the speaker's audio data. In other words, I find the 1000 samples that "look most like" the key sample in mouth shape and that "sound most like" the key sample in spectral shape. I then plot the tongue bead positions of the key sample (as a thick cross), and the 1000 nearest samples by mouth shape (as a thick ellipse) and spectral shape (as dots). The points of primary interest are the dots; they show the distribution of tongue positions used to generate very similar sounds. (The thick ellipses are shown only as a control to ensure that many nearby points to the key sample do exist in the dataset.) Spread or multimodality in the dots indicates that many different articulatory configurations are used to generate the same sound.
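The nearest-sample computation can be sketched as follows. For brevity only the diagonal of the global covariance is used (the paper uses the full Mahalanobis norm), and the data are random stand-ins, not microbeam measurements:

```python
import random

random.seed(1)

def diag_mahalanobis_nearest(data, key, k):
    """Indices of the k samples nearest to `key` under a Mahalanobis-style
    norm; simplified here to a diagonal (per-coordinate variance) metric."""
    dims = len(data[0])
    n = len(data)
    means = [sum(x[j] for x in data) / n for j in range(dims)]
    var = [sum((x[j] - means[j]) ** 2 for x in data) / n for j in range(dims)]
    def d2(x):  # squared distance, each coordinate scaled by its variance
        return sum((x[j] - key[j]) ** 2 / var[j] for j in range(dims))
    return sorted(range(n), key=lambda i: d2(data[i]))[:k]

# Toy stand-ins for (x, y) tongue-bead positions with unequal spreads.
data = [(random.gauss(0, 10), random.gauss(0, 5)) for _ in range(500)]
near = diag_mahalanobis_nearest(data, key=data[0], k=50)
print(near[:5])
```

Running this once with an articulatory metric and once with an LSF metric, as in the text, yields the two sets of "nearest" samples compared in Figure 4.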
Figure 4: Inverse mapping from acoustics to articulation is ill-posed in real speech production data. Each group of four articulator-space plots shows the 1000 samples which are "nearest" to one key sample (thick cross). The dots are the 1000 nearest samples using an acoustic measure based on line spectral frequencies. Spread or multimodality in the dots indicates that many different articulatory configurations are used to generate very similar sounds. Only the positions of the four tongue beads have been plotted. Two examples (with different key samples) are shown, one in the left group of four panels and another in the right group. The thick ellipses (shown as a control) are the two-standard deviation contour of the 1000 nearest samples using an articulatory position distance metric. Why not do direct supervised learning from short-time spectral features (LSFs) to the articulator positions? The ill-posed nature of the inverse problem as shown in figure 4 makes this impossible. To illustrate this difficulty, I have attempted to recover the articulator positions from the acoustic feature vectors using Kalman smoothing on a LDS. In this case, since we have access to both the hidden states (articulator positions) and the system outputs (LSFs) we can compute the optimal parameters of the model directly. (In particular, the state transition matrix is obtained by regression from articulator positions and velocities at time t onto positions at time t + 1; the output matrix by regression from articulator positions and velocities onto LSF vectors; and the noise covariances from the residuals of these regressions.) Figure 5b shows the results of such smoothing; the recovery is quite poor.
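The parenthetical recipe for the optimal LDS parameters (regress positions and velocities at time t onto positions at t+1, and take noise variances from the residuals) can be sketched in a scalar analogue; the multivariate case replaces the 2x2 normal equations with a matrix least-squares solve:

```python
import math

def fit_lds_scalar(x):
    """Least-squares fit of x[t+1] ~ a*x[t] + b*v[t], with v the backward
    difference; a scalar analogue of regressing articulator positions and
    velocities at time t onto positions at t+1. Noise variance q comes
    from the residuals, as in the text."""
    xs = x[1:-1]
    vs = [x[t] - x[t - 1] for t in range(1, len(x) - 1)]
    ys = x[2:]
    # Normal equations for the 2x2 system.
    sxx = sum(a * a for a in xs)
    sxv = sum(a * b for a, b in zip(xs, vs))
    svv = sum(b * b for b in vs)
    sxy = sum(a * y for a, y in zip(xs, ys))
    svy = sum(b * y for b, y in zip(vs, ys))
    det = sxx * svv - sxv * sxv
    a = (sxy * svv - svy * sxv) / det
    b = (svy * sxx - sxy * sxv) / det
    resid = [y - (a * xi + b * vi) for xi, vi, y in zip(xs, vs, ys)]
    q = sum(r * r for r in resid) / len(resid)
    return a, b, q

# Noise-free sinusoidal "articulator" trace: the second-order dynamics
# x[t+1] = (2cos(w) - 1)*x[t] + 1*v[t] should be recovered exactly.
x = [math.sin(0.1 * t) for t in range(200)]
a, b, q = fit_lds_scalar(x)
print(round(a, 5), round(b, 5))  # a close to 2cos(0.1) - 1, b close to 1
```

With real data the residual variance q is large because the LSF-to-position relation is ill-posed, which is why the Kalman-smoothed recovery in Figure 5b is poor.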
Constrained HMMs can be applied to this recovery problem, as previously reported [6]. (My earlier results used a small subset of the same database that was not continuous speech and did not provide the hard experimental verification (fig. 4) of the many-to-one problem.) Figure 5: (A) Recovered articulator movements using state inference on a constrained HMM. A four-dimensional model with 4096 states was trained on data (all beads) from a single speaker but not including the test utterance shown. Dots show the actual measured articulator movements for a single bead coordinate versus time; the thin lines are estimated movements from the corresponding acoustics. (B) Unsuccessful recovery of articulator movements using Kalman smoothing on a global LDS model. All the (speaker-dependent) parameters of the underlying linear dynamical system are known; they have been set to their optimal values using the true movement information from the training data. Furthermore, for this example, the test utterance shown was included in the training data used to estimate model parameters. (C) All 16 bead coordinates; all vertical axes are the same scale. Bead names are shown on the left. Horizontal movements are plotted in the left-hand column and vertical movements in the right-hand column. The separation between the two horizontal lines near the centre of the right panel indicates the machine measurement error. [Panel titles: "Recovery of tongue tip vertical motion from acoustics" and "Kalman smoothing on optimal linear dynamical system"; horizontal axes show time in seconds.] The basic idea is to train (unsupervised) on sequences of acoustic-spectral features and then map the topology space state trajectories onto the measured articulatory movements.
Figure 5 shows movement recovery using state inference in a four-dimensional model with 4096 states (d=4, l=8, M=4096) trained on data (all beads) from a single speaker. (Naive unsupervised learning runs into severe local minima problems. To avoid these, in the simulations shown above, models were trained by slowly annealing two learning parameters6: a term ε^β was used in place of the zeros in the sparse transition matrix, and γ_t^β was used in place of γ_t = p(m_t | observations) during inference of state occupation probabilities. The inverse temperature β was raised from 0 to 1.) To infer a continuous state trajectory from an utterance after learning, I first do Viterbi decoding on the acoustics to generate a discrete state sequence m_t and then interpolate smoothly between the positions x(m_t) of each state. 6 An easier way (which I have used previously) to find good minima is to initialize the models using the articulatory data themselves. This does not provide as impressive "structure discovery" as annealing but still yields a system capable of inverting acoustics into articulatory movements on previously unseen test data. First, a constrained HMM is trained on just the articulatory movements; this works easily because of the natural geometric (physical) constraints. Next, I take the distribution of acoustic features (LSFs) over all times (in the training data) when Viterbi decoding places the model in a particular state and use those LSF distributions to initialize an equivalent acoustic constrained HMM. This new model is then retrained until convergence using Baum-Welch. After unsupervised learning, a single linear fit is performed between these continuous state trajectories and actual articulator movements on the training data. (The model cannot discover the units system or axes used to represent the articulatory data.)
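The step of mapping a Viterbi state sequence m_t through x(m) and interpolating smoothly can be sketched as follows; the moving-average smoother is an assumption, since the exact interpolation scheme is not specified here:

```python
def continuous_trajectory(states, x, window=5):
    """Turn a Viterbi state sequence into a smooth path in topology space:
    map each state m_t to its position x[m_t], then smooth each coordinate
    with a short centred moving average (one simple way to interpolate)."""
    path = [x[m] for m in states]
    d = len(path[0])
    T = len(path)
    out = []
    for t in range(T):
        lo = max(0, t - window // 2)
        hi = min(T, t + window // 2 + 1)
        out.append([sum(p[j] for p in path[lo:hi]) / (hi - lo)
                    for j in range(d)])
    return out

# 1-D toy example: 4 states on a line at positions 0..3.
x = [[0.0], [1.0], [2.0], [3.0]]
traj = continuous_trajectory([0, 0, 1, 2, 2, 3, 3], x, window=3)
print([round(p[0], 2) for p in traj])  # -> [0.0, 0.33, 1.0, 1.67, 2.33, 2.67, 3.0]
```

The resulting real-valued trajectory is what the single linear fit maps onto measured articulator coordinates.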
To recover articulator movements from a previously unseen test utterance, I infer a continuous state trajectory as above and then apply the single linear mapping (learned only once from the training data). 6 Conclusions, extensions and other work By enforcing a simple constraint on the transition parameters of a standard HMM, a link can be forged between discrete state dynamics and the motion of a real-valued state vector in a continuous space. For complex time-series generated by systems whose underlying latent variables do in fact change slowly and smoothly, such constrained HMMs provide a powerful unsupervised learning paradigm. They can model state to output mappings that are highly nonlinear, many to one and not smooth. Furthermore, they rely only on well understood learning and inference procedures that come with convergence guarantees. Results on synthetic and real data show that these models can successfully capture the low-dimensional structure present in complex vector time-series. In particular, I have shown that a speaker-dependent constrained HMM can accurately recover articulator movements from continuous speech to within the measurement error of the data. This acoustic to articulatory inversion problem has a long history in speech processing (see e.g. [7] and references therein). Many previous approaches have attempted to exploit the smoothness of articulatory movements for inversion or modeling: Hogden et al. (e.g. [4]) provided early inspiration for my ideas, but do not address the many-to-one problem; Simon Blackburn [1] has investigated a forward mapping from articulation to acoustics but does not explicitly attempt inversion; early work at Waterloo [5] suggested similar constraints for improving speech recognition systems but did not look at real articulatory data; more recent work at Rutgers [2] developed a very similar system much further with good success.
Carreira-Perpinan [3] considers a related problem in sequence learning using EPG speech data as an example. While in this note I have described only "diffusion" type dynamics (transitions to all neighbours are equally likely) it is also possible to consider directed flows which give certain neighbours of a state lower (or zero) probability. The left-to-right HMMs mentioned earlier are an example of this for one-dimensional topologies. For higher dimensions, flows can be derived from discretization of matrix (linear) dynamics or from other physical/structural constraints. It is also possible to have many connected local flow regimes (either diffusive or directed) rather than one global regime as discussed above; this gives rise to mixtures of constrained HMMs which have block-structured rather than banded transition matrices. Smyth [8] has considered such models in the case of one-dimensional topologies and directed flows; I have applied these to learning character sequences from English text. Another application I have investigated is map learning from multiple sensor readings. An explorer (robot) navigates in an unknown environment and records at each time many local measurements such as altitude, pressure, temperature, humidity, etc. We wish to reconstruct from only these sequences of readings the topographic maps (in each sensor variable) of the area as well as the trajectory of the explorer. A final application is tracking (inferring movements) of articulated bodies using video measurements of feature positions.

References

[1] S. Blackburn & S. Young. ICSLP 1996, Philadelphia, v.2 pp.969-972.
[2] S. Chennoukh et al. Eurospeech 1997, Rhodes, Greece, v.1 pp.429-432.
[3] M. Carreira-Perpinan. NIPS'12, 2000. (This volume.)
[4] D. Nix & J. Hogden. NIPS'11, 1999, pp.744-750.
[5] G. Ramsay & L. Deng. J. Acoustical Society of America, 95(5), 1994, p.2873.
[6] S. Roweis & A. Alwan. Eurospeech 1997, Rhodes, Greece, v.3 pp.1227-1230.
[7] J. Schroeter & M. Sondhi.
IEEE Trans. Speech & Audio Processing, 2(1), 1994, pp.133-150 [8] P. Smyth. NIPS'9, 1997, pp.648-654 [9] J. Westbury. X-ray microbeam speech production database user's handbook version 1.0. University of Wisconsin, Madison, June 1994.
|
1999
|
76
|
1,727
|
Approximate inference algorithms for two-layer Bayesian networks Andrew Y. Ng Computer Science Division UC Berkeley Berkeley, CA 94720 ang@cs.berkeley.edu Michael I. Jordan Computer Science Division and Department of Statistics UC Berkeley Berkeley, CA 94720 jordan@cs.berkeley.edu Abstract We present a class of approximate inference algorithms for graphical models of the QMR-DT type. We give convergence rates for these algorithms and for the Jaakkola and Jordan (1999) algorithm, and verify these theoretical predictions empirically. We also present empirical results on the difficult QMR-DT network problem, obtaining performance of the new algorithms roughly comparable to the Jaakkola and Jordan algorithm. 1 Introduction The graphical models formalism provides an appealing framework for the design and analysis of network-based learning and inference systems. The formalism endows graphs with a joint probability distribution and interprets most queries of interest as marginal or conditional probabilities under this joint. For a fixed model one is generally interested in the conditional probability of an output given an input (for prediction), or an input conditional on the output (for diagnosis or control). During learning the focus is usually on the likelihood (a marginal probability), on the conditional probability of unobserved nodes given observed nodes (e.g., for an EM or gradient-based algorithm), or on the conditional probability of the parameters given the observed data (in a Bayesian setting). In all of these cases the key computational operation is that of marginalization. There are several methods available for computing marginal probabilities in graphical models, most of which involve some form of message-passing on the graph. Exact methods, while viable in many interesting cases (involving sparse graphs), are infeasible in the dense graphs that we consider in the current paper.
A number of approximation methods have evolved to treat such cases; these include search-based methods, loopy propagation, stochastic sampling, and variational methods. Variational methods, the focus of the current paper, have been applied successfully to a number of large-scale inference problems. In particular, Jaakkola and Jordan (1999) developed a variational inference method for the QMR-DT network, a benchmark network involving over 4,000 nodes (see below). The variational method provided an accurate approximation to posterior probabilities within a second of computer time. For this difficult inference problem exact methods are entirely infeasible (see below), loopy propagation does not converge to correct posteriors (Murphy, Weiss, & Jordan, 1999), and stochastic sampling methods are slow and unreliable (Jaakkola & Jordan, 1999). A significant step forward in the understanding of variational inference was made by Kearns and Saul (1998), who used large deviation techniques to analyze the convergence rate of a simplified variational inference algorithm. Imposing conditions on the magnitude of the weights in the network, they established an O(√(log N / N)) rate of convergence for the error of their algorithm, where N is the fan-in. In the current paper we utilize techniques similar to those of Kearns and Saul to derive a new set of variational inference algorithms with rates that are faster than O(√(log N / N)). Our techniques also allow us to analyze the convergence rate of the Jaakkola and Jordan (1999) algorithm. We test these algorithms on an idealized problem and verify that our analysis correctly predicts their rates of convergence. We then apply these algorithms to the difficult QMR-DT network problem.
2 Background 2.1 The QMR-DT network The QMR-DT (Quick Medical Reference, Decision-Theoretic) network is a bipartite graph with approximately 600 top-level nodes d_i representing diseases and approximately 4000 lower-level nodes f_i representing findings (observed symptoms). All nodes are binary-valued. Each disease is given a prior probability P(d_i = 1), obtained from archival data, and each finding is parameterized as a "noisy-OR" model: P(f_i = 1 | d) = 1 - exp(-θ_{i0} - Σ_{j∈π_i} θ_{ij} d_j), where π_i is the set of parent diseases for finding f_i and where the parameters θ_{ij} are obtained from assessments by medical experts (see Shwe et al., 1991). Letting z_i = θ_{i0} + Σ_{j∈π_i} θ_{ij} d_j, we have the following expression for the likelihood:¹

P(f) = Σ_d [ ∏_{i: f_i=1} (1 - e^{-z_i}) ] [ ∏_{i: f_i=0} e^{-z_i} ] [ ∏_j P(d_j) ],   (1)

where the sum is a sum across the approximately 2^600 configurations of the diseases. Note that the second product, a product over the negative findings, factorizes across the diseases d_j; these factors can be absorbed into the priors P(d_j) and have no significant effect on the complexity of inference. It is the positive findings which couple the diseases and prevent the sum from being distributed across the product. Generic exact algorithms such as the junction tree algorithm scale exponentially in the size of the maximal clique in a moralized, triangulated graph. Jaakkola and Jordan (1999) found cliques of more than 150 nodes in QMR-DT; this rules out the junction tree algorithm. Heckerman (1989) discovered a factorization specific to QMR-DT that reduces the complexity substantially; however the resulting algorithm still scales exponentially in the number of positive findings and is only feasible for a small subset of the benchmark cases. ¹In this expression, the factors P(d_j) are the probabilities associated with the (parent-less) disease nodes, the factors (1 - e^{-z_i}) are the probabilities of the (child) finding nodes that are observed to be in their positive state, and the factors e^{-z_i} are the probabilities of the negative findings.
The resulting product is the joint probability P(f, d), which is marginalized to obtain the likelihood P(f). 2.2 The Jaakkola and Jordan (JJ) algorithm Jaakkola and Jordan (1999) proposed a variational algorithm for approximate inference in the QMR-DT setting. Briefly, their approach is to make use of the following variational inequality:

1 - e^{-z_i} ≤ exp(λ_i z_i - c_i),

where c_i is a deterministic function of λ_i. This inequality holds for arbitrary values of the free "variational parameter" λ_i. Substituting these variational upper bounds for the probabilities of positive findings in Eq. (1), one obtains a factorizable upper bound on the likelihood. Because of the factorizability, the sum across diseases can be distributed across the joint probability, yielding a product of sums rather than a sum of products. One then minimizes the resulting expression with respect to the variational parameters to obtain the tightest possible variational bound. 2.3 The Kearns and Saul (KS) algorithm A simplified variational algorithm was proposed by Kearns and Saul (1998), whose main goal was the theoretical analysis of the rates of convergence for variational algorithms. In their approach, the local conditional probability for the finding f_i is approximated by its value at a point a small distance ε_i above or below (depending on whether upper or lower bounds are desired) the mean input E[z_i]. This yields a variational algorithm in which the values ε_i are the variational parameters to be optimized. Under the assumption that the weights θ_{ij} are bounded in magnitude by τ/N, where τ is a constant and N is the number of parent ("disease") nodes, Kearns and Saul showed that the error in likelihood for their algorithm converges at a rate of O(√(log N / N)).
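For toy instances, the likelihood in Eq. (1), while exponential in the number of diseases, can be evaluated by brute-force enumeration over disease configurations. A minimal sketch (function and variable names are mine, not from the paper):

```python
import itertools
import math

def noisy_or_likelihood(prior, theta0, theta, findings):
    """Exact likelihood P(f) of a bipartite noisy-OR network by enumerating
    all 2^N disease configurations, as in Eq. (1).

    prior    -- length-N list of disease priors P(d_j = 1)
    theta0   -- length-K list of leak weights theta_{i0}
    theta    -- K x N weight matrix theta_{ij}
    findings -- length-K list of observed f_i in {0, 1}
    """
    N, K = len(prior), len(findings)
    total = 0.0
    for d in itertools.product([0, 1], repeat=N):
        # Prior probability of this disease configuration.
        p_d = math.prod(prior[j] if d[j] else 1.0 - prior[j] for j in range(N))
        # Probability of the observed findings given the diseases.
        p_f_given_d = 1.0
        for i in range(K):
            z = theta0[i] + sum(theta[i][j] * d[j] for j in range(N))
            p_pos = 1.0 - math.exp(-z)          # noisy-OR: P(f_i = 1 | d)
            p_f_given_d *= p_pos if findings[i] else 1.0 - p_pos
        total += p_d * p_f_given_d
    return total
```

This is only feasible for small N, of course; the whole point of the paper is to avoid the exponential sum for networks of QMR-DT scale.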
3 Algorithms based on local expansions Inspired by Kearns and Saul (1998), we describe the design of approximation algorithms for QMR-DT obtained by expansions around the mean input to the finding nodes. Rather than using point approximations as in the Kearns-Saul (KS) algorithm, we make use of Taylor expansions. (See also Plefka (1982), and Barber and van de Laar (1999) for other perturbational techniques.) Consider a generalized QMR-DT architecture in which the noisy-OR model is replaced by a general function ψ(z): R → [0, 1] having uniformly bounded derivatives, i.e., |ψ^(i)(z)| ≤ B_i. Define

F(z_1, ..., z_K) = ∏_{i=1}^K (ψ(z_i))^{f_i} ∏_{i=1}^K (1 - ψ(z_i))^{1-f_i}

so that the likelihood can be written as

P(f) = E_{z_i}[F(z_1, ..., z_K)].   (2)

Also define μ_i = E[z_i] = θ_{i0} + Σ_{j=1}^N θ_{ij} P(d_j = 1). A simple mean-field-like approximation can be obtained by evaluating F at the mean values:

P(f) ≈ F(μ_1, ..., μ_K).   (3)

We refer to this approximation as "MF(0)." Expanding the function F to second order, and defining ε_i = z_i - μ_i, we have:

P(f) = E_{ε_i}[ F(μ) + Σ_{i_1=1}^K F_{i_1}(μ) ε_{i_1} + (1/2!) Σ_{i_1=1}^K Σ_{i_2=1}^K F_{i_1 i_2}(μ) ε_{i_1} ε_{i_2} + R ],   (4)

where the subscripts on F represent derivatives and R is the remainder term. Dropping the remainder term and bringing the expectation inside, we have the "MF(2)" approximation:

P(f) ≈ F(μ) + (1/2) Σ_{i_1=1}^K Σ_{i_2=1}^K F_{i_1 i_2}(μ) E[ε_{i_1} ε_{i_2}].

More generally, we obtain a "MF(i)" approximation by carrying out a Taylor expansion to i-th order. 3.1 Analysis In this section, we give two theorems establishing convergence rates for the MF(i) family of algorithms and for the Jaakkola and Jordan algorithm. As in Kearns and Saul (1998), our results are obtained under the assumption that the weights are of magnitude at most O(1/N) (recall that N is the number of disease nodes).
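For a single positive noisy-OR finding with independent parents, the exact likelihood has a closed form, which makes MF(0) and MF(2) easy to check numerically. A small sketch (the toy setup is my own; the paper's experiments use full random networks):

```python
import math

# One positive noisy-OR finding: F(z) = 1 - e^{-z}.
# Exact: P(f=1) = 1 - E[e^{-z}] = 1 - e^{-theta0} * prod_j (1 - p_j + p_j e^{-theta_j}).
# MF(0): F(mu).
# MF(2): F(mu) + (1/2) F''(mu) Var(z), with F''(z) = -e^{-z} and, for
#        independent d_j, Var(z) = sum_j theta_j^2 p_j (1 - p_j).
theta0 = 0.1
N = 50
theta = [2.0 / N] * N          # weights of magnitude O(1/N), as in the analysis
p = [0.3] * N                  # disease priors P(d_j = 1)

exact = 1.0 - math.exp(-theta0) * math.prod(
    1.0 - pj + pj * math.exp(-t) for pj, t in zip(p, theta))

mu = theta0 + sum(t * pj for t, pj in zip(theta, p))
var = sum(t * t * pj * (1.0 - pj) for t, pj in zip(theta, p))

mf0 = 1.0 - math.exp(-mu)                    # zeroth-order (mean-field) estimate
mf2 = mf0 - 0.5 * math.exp(-mu) * var        # adds the second-order correction
```

With these settings the MF(2) estimate lands within about 1e-5 of the exact value, while MF(0) is off by a few times 1e-3, in line with the O(1/N^2) versus O(1/N) rates derived below.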
For large N, this assumption of "weak interactions" implies that each z_i will be close to its mean value with high probability (by the law of large numbers), and thereby gives justification to the use of local expansions for the probabilities of the findings. Due to space constraints, the detailed proofs of the theorems given in this section are deferred to the long version of this paper, and we will instead only sketch the intuitions for the proofs here. Theorem 1 Let K (the number of findings) be fixed, and suppose |θ_{ij}| ≤ τ/N for all i, j for some fixed constant τ. Then the absolute error of the MF(k) approximation is O(1/N^((k+1)/2)) for k odd and O(1/N^((k/2)+1)) for k even. Proof intuition. First consider the case of odd k. Since |θ_{ij}| ≤ τ/N, the quantity ε_i = z_i - μ_i = Σ_j θ_{ij}(d_j - E[d_j]) is like an average of N random variables, and hence has standard deviation on the order 1/√N. Since MF(k) matches F up to the k-th order derivatives, we find that when we take a Taylor expansion of MF(k)'s error, the leading non-zero term is the (k+1)-st order term, which contains quantities such as ε_i^(k+1). Now because ε_i has standard deviation on the order 1/√N, it is unsurprising that E[ε_i^(k+1)] is on the order 1/N^((k+1)/2), which gives the error of MF(k) for odd k. For k even, the leading non-zero term in the Taylor expansion of the error is a (k+1)-st order term with quantities such as ε_i^(k+1). But if we think of ε_i as converging (via a central limit theorem effect) to a symmetric distribution, then since symmetric distributions have small odd central moments, E[ε_i^(k+1)] would be small. This means that for k even, we may look to the order k+2 term for the error, which leads to MF(k) having the same big-O error as MF(k+1). Note this is also consistent with how MF(0) and MF(1) always give the same estimates and hence have the same absolute error. □ A theorem may also be proved for the convergence rate of the Jaakkola and Jordan (JJ) algorithm.
For simplicity, we state it here only for noisy-OR networks.² A closely related result also holds for sigmoid networks with suitably modified assumptions; see the full paper. Theorem 2 Let K be fixed, and suppose ψ(z) = 1 - e^{-z} is the noisy-OR function. Suppose further that 0 ≤ θ_{ij} ≤ τ/N for all i, j for some fixed constant τ, and that μ_i ≥ μ_min for all i, for some fixed μ_min > 0. Then the absolute error of the JJ approximation is O(1/N). ²Note in any case that JJ can be applied only when ψ is log-concave, such as in noisy-OR networks (where incidentally all weights are non-negative). The condition of some μ_min lower-bounding the μ_i's ensures that the findings are not too unlikely; for it to hold, it is sufficient that there be bias ("leak") nodes in the network with weights bounded away from zero. Proof intuition. Neglecting negative findings (which as discussed do not need to be handled variationally), this result is proved for a "simplified" version of the JJ algorithm that always chooses the variational parameters so that for each i, the exponential upper bound on ψ(z_i) is tangent to ψ at z_i = μ_i. (The "normal" version of JJ can have error no worse than this simplified one.) Taking a Taylor expansion again of the approximation's error, we find that since the upper bound has matched zeroth and first derivatives with F, the error is a second-order term with quantities such as ε_i². As discussed in the MF(k) proof outline, this quantity has expectation on the order 1/N, and hence JJ's error is O(1/N). □ To summarize our results in the most useful cases, we find that MF(0) has a convergence rate of O(1/N), both MF(2) and MF(3) have rates of O(1/N²), and JJ has a convergence rate of O(1/N). 4 Simulation results 4.1 Artificial networks We carried out a set of simulations that were intended to verify the theoretical results presented in the previous section.
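The tangent construction in the proof intuition can be made concrete for the noisy-OR function. This sketch assumes the standard conjugate-based bound from Jaakkola and Jordan (1999), 1 - e^{-z} ≤ exp(λz - c(λ)) with c(λ) = -λ log λ + (λ+1) log(λ+1), which is not reproduced explicitly in the excerpt above:

```python
import math

def jj_upper_bound(z, lam):
    """Variational upper bound on the positive-finding probability 1 - e^{-z}.
    c(lam) is the conjugate of log(1 - e^{-z}); valid for any lam > 0."""
    c = -lam * math.log(lam) + (lam + 1.0) * math.log(lam + 1.0)
    return math.exp(lam * z - c)

def optimal_lambda(z):
    """The bound is tight (tangent in the log domain) when lam = 1/(e^z - 1),
    i.e. when lam equals the derivative of log(1 - e^{-z}) at z."""
    return 1.0 / (math.exp(z) - 1.0)
```

At lam = optimal_lambda(z) the bound coincides with 1 - e^{-z}; for any other lam it lies strictly above, which is what lets the JJ algorithm minimize over lam safely.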
We used bipartite noisy-OR networks, with full connectivity between layers and with the weights θ_{ij} chosen uniformly in (0, 2/N). The number N of top-level ("disease") nodes ranged from 10 to 1000. Priors on the disease nodes were chosen uniformly in (0, 1). The results are shown in Figure 1 for one and five positive findings (similar results were obtained for additional positive findings). Figure 1: Absolute error in likelihood (averaged over many randomly generated networks) as a function of the number of disease nodes for various algorithms. The short-dashed lines are the KS upper and lower bounds (these curves overlap in the left panel), the long-dashed line is the JJ algorithm and the solid lines are MF(0), MF(2) and MF(3) (the latter two curves overlap in the right panel). The results are entirely consistent with the theoretical analysis, showing nearly exactly the expected slopes of -1/2, -1 and -2 on a log-log plot.³ Moreover, the asymptotic results are also predictive of overall performance: the MF(2) and MF(3) algorithms perform best in all cases, MF(0) and JJ are roughly equivalent, and KS is the least accurate. ³The anomalous behavior of the KS lower bound in the second panel is due to the fact that the algorithm generally finds a vacuous lower bound of 0 in this case, which yields an error which is essentially constant as a function of the number of diseases. 4.2 QMR-DT network We now present results for the QMR-DT network, in particular for the four benchmark CPC cases studied by Jaakkola and Jordan (1999). These cases all have fewer than 20 positive findings; thus it is possible to run the Heckerman (1989) "Quickscore" algorithm to obtain the true likelihood.
Figure 2: Results for CPC cases 16 and 32, for different numbers of exactly treated findings. The horizontal line is the true likelihood, the dashed line is JJ's estimate, and the lower solid line is MF(3)'s estimate. Figure 3: Results for CPC cases 34 and 46. Same legend as above. In Jaakkola and Jordan (1999), a hybrid methodology was proposed in which only a portion of the findings were treated approximately; exact methods were used to treat the remaining findings. Using this hybrid methodology, Figures 2 and 3 show the results of running JJ and MF(3) on these four cases.⁴ ⁴These experiments were run using a version of the JJ algorithm that optimizes the variational parameters just once without any findings treated exactly, and then uses these fixed values of the parameters thereafter. The order in which findings are chosen to be treated exactly is based on JJ's estimates, as described in Jaakkola and Jordan (1999). Missing points in the graphs for cases 16 and
The results show the MF algorithm yielding results that are comparable with the JJ algorithm. 5 Conclusions and extension to multilayer networks This paper has presented a class of approximate inference algorithms for graphical models of the QMR-DT type, supplied a theoretical analysis of convergence rates, verified the rates empirically, and presented promising empirical results for the difficult QMR-DT problem.
Although the focus of this paper has been two-layer networks, the MF(k) family of algorithms can also be extended to multilayer networks. For example, consider a 3-layer network with nodes b_i being parents of nodes d_i being parents of nodes f_i. To approximate Pr[f] using (say) MF(2), we first write Pr[f] as an expectation of a function (F) of the z_i's, and approximate this function via a second-order Taylor expansion. To calculate the expectation of the Taylor approximation, we need to calculate terms in the expansion such as E[d_i], E[d_i d_j] and E[d_i²]. When d_i had no parents, these quantities were easily derived in terms of the disease prior probabilities. Now, they instead depend on the joint distribution of d_i and d_j, which we use our two-layer version of MF(k), applied to the first two (b_i and d_i) layers of the network, to approximate. It is important future work to carefully study the performance of this algorithm in the multilayer setting. Acknowledgments We wish to acknowledge the helpful advice of Tommi Jaakkola, Michael Kearns, Kevin Murphy, and Larry Saul. References [1] Barber, D., & van de Laar, P. (1999). Variational cumulant expansions for intractable distributions. Journal of Artificial Intelligence Research, 10, 435-455. [2] Heckerman, D. (1989). A tractable inference algorithm for diagnosing multiple diseases. In Proceedings of the Fifth Conference on Uncertainty in Artificial Intelligence. [3] Jaakkola, T. S., & Jordan, M. I. (1999). Variational probabilistic inference and the QMR-DT network. Journal of Artificial Intelligence Research, 10, 291-322. [4] Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., & Saul, L. K. (1998). An introduction to variational methods for graphical models. In Learning in Graphical Models. Cambridge: MIT Press. [5] Kearns, M. J., & Saul, L. K. (1998). Large deviation methods for approximate probabilistic inference, with rates of convergence. In G. F. Cooper & S.
Moral (Eds.), Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence. San Mateo, CA: Morgan Kaufmann. [6] Murphy, K. P., Weiss, Y., & Jordan, M. I. (1999). Loopy belief propagation for approximate inference: An empirical study. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence. [7] Plefka, T. (1982). Convergence condition of the TAP equation for the infinite-ranged Ising spin glass model. J. Phys. A: Math. Gen., 15(6). [8] Shwe, M., Middleton, B., Heckerman, D., Henrion, M., Horvitz, E., Lehmann, H., & Cooper, G. (1991). Probabilistic diagnosis using a reformulation of the INTERNIST-1/QMR knowledge base I. The probabilistic model and inference algorithms. Methods of Information in Medicine, 30, 241-255. 34 correspond to runs where our implementation of the Quickscore algorithm encountered numerical problems.
|
1999
|
77
|
1,728
|
Perceptual Organization Based on Temporal Dynamics Xiuwen Liu and DeLiang L. Wang Department of Computer and Information Science Center for Cognitive Science The Ohio State University, Columbus, OH 43210-1277 Email: {liux, dwang}@cis.ohio-state.edu Abstract A figure-ground segregation network is proposed based on a novel boundary-pair representation. Nodes in the network are boundary segments obtained through local grouping. Each node is excitatorily coupled with the neighboring nodes that belong to the same region, and inhibitorily coupled with the corresponding paired node. Gestalt grouping rules are incorporated by modulating connections. The status of a node represents its probability of being figural and is updated according to a differential equation. The system solves the figure-ground segregation problem through temporal evolution. Different perceptual phenomena, such as modal and amodal completion, virtual contours, grouping and shape decomposition are then explained through local diffusion. The system eliminates combinatorial optimization and accounts for many psychophysical results with a fixed set of parameters. 1 Introduction Perceptual organization refers to the ability of grouping similar features in sensory data. This, at a minimum, includes the operations of grouping and figure-ground segregation, which refers to the process of determining relative depths of adjacent regions in input data and thus proper occlusion hierarchy. Perceptual organization has been studied extensively and many of the existing approaches [5] [4] [8] [10] [3] start from detecting discontinuities, i.e. edges in the input; one or several configurations are then selected according to certain criteria, for example, non-accidentalness [5]. Those approaches have several disadvantages for perceptual organization.
Edges should be localized between regions and an additional ambiguity, the ownership of a boundary segment, is introduced, which is equivalent to figure-ground segregation [7]. Due to that, regional attributes cannot be associated with boundary segments. Furthermore, because each boundary segment can belong to different regions, the potential search space is combinatorial. To overcome some of these problems, we propose a laterally-coupled network based on a boundary-pair representation to resolve figure-ground segregation. An occluding boundary is represented by a pair of boundaries of the two associated regions, and initiates a competition between the regions. Figure 1: On- and off-center cell responses. (a) On- and off-center cells. (b) Input image. (c) On-center cell responses. (d) Off-center cell responses. (e) Binarized on- and off-center cell responses, where white regions represent on-center response regions and black off-center regions. Each node in the network represents a boundary segment. Regions compete to be figural through boundary-pair competition and figure-ground segregation is resolved through temporal evolution. Gestalt grouping rules are incorporated by modulating coupling strengths between different nodes within a region, which influences the temporal dynamics and determines the percept of the system. Shape decomposition and grouping are then implemented through local diffusion using the results from figure-ground segregation. 2 Figure-Ground Segregation Network The central problem in perceptual organization is to determine relative depths among regions. As figure reversal occurs in certain circumstances, figure-ground segregation cannot be resolved only based on local attributes. 2.1 The Network Architecture The boundary-pair representation is motivated by on- and off-center cells, shown in Fig. 1(a). Fig. 1(b) shows an input image and Fig.
1(c) and (d) show the on- and off-center responses. Without zero-crossing, we naturally obtain double responses for each occluding boundary, as shown in Fig. 1(e). In our boundary-pair representation, each boundary is uniquely associated with a region. In this paper, we obtain closed region boundaries from segmentation and form boundary segments using corners and junctions, which are detected through local corner and junction detectors. A node i in the figure-ground segregation network represents a boundary segment, and P_i represents its probability of being figural, which is set to 0.5 initially. Each node is laterally coupled with neighboring nodes on the closed boundary. The connection weight from node i to j, w_ij, is 1 and can be modified by T-junctions and local shape information. Each occluding boundary is represented by a pair of boundary segments of the involved regions. For example, in Fig. 2(a), nodes 1 and 5 form a boundary pair, where node 1 belongs to the white region and node 5 belongs to the black region. Figure 2: (a) The figure-ground segregation network for Fig. 1(b). Nodes 1, 2, 3 and 4 belong to the white region; nodes 5, 6, 7, and 8 belong to the black region; and nodes 9 and 10, and nodes 11 and 12 belong to the left and right gray regions respectively. Solid lines represent excitatory coupling while dashed lines represent inhibitory connections. (b) Result after surface completion. Left and right gray regions are grouped together. Node i updates its status by:

τ dP_i/dt = μ_L Σ_{k∈N(i)} w_ki (P_k - P_i) + μ_J (1 - P_i) Σ_{l∈J(i)} H(Q_li) + μ_B (1 - P_i) exp(-B_i / K_B).   (1)

Here N(i) is the set of neighboring nodes of i, and μ_L, μ_J, and μ_B are parameters to determine the influences from lateral connections, junctions, and bias. J(i) is
H(x) is given by H(x) = tanh(j3(x - OJ )), where j3 controls the steepness and OJ is a threshold. In (1), the first term on the right reflects the lateral influences. When nodes are strongly coupled, they are more likely to be in the same status, either figure or background. The second term incorporates junction information. In other words, at a T-junction, segments that vary more smoothly are more likely to be figural. The third term is a bias, where Bi is the bias introduced to simulate human perception. The competition between paired nodes i and j is through normalization based on the assumption that only one of the paired nodes should be figural at a given time: p(Hl) = pt/(P~ + pt) and p(tH) = P~/(pt + P~) t t t J J J t J' 2.2 Incorporation of Gestalt Rules To generate behavior that is consistent with human perception, we incorporate grouping cues and some Gestalt grouping principles. As the network provides a generic model, additional grouping rules can also be incorporated. T-junctions T-junctions provide important cues for determining relative depths [7] [10]. In Williams and Hanson's model [10], T-junctions are imposed as topological constraints. Given aT-junction l, the initial strength for node i that is associated with lis: Q exp( -Ci(i,C(i»/ KT) Ii = 1/2 LkENJ(I) exp( -Ci(k,c(k»)/ K T ) , where K T is a parameter, N J (l) is a set of all the nodes associated with junction l, c( i) is the other node in N J (l) that belongs to the same region as node i, and Ci(ij) is the angle between segments i and j. Non-accidentalness Non-accidentalness tries to capture the intrinsic relationships among segments [5]. In our system, an additional connection is introduced to node i if it is aligned well with a node j from the same region and j rf. N(i) initially. The connection weight Wij is a function of distance and angle between the involved ending points. 
This can be viewed as virtual junctions, resulting in virtual contours and conversion of a corner into a T-junction if the involved nodes become figural. This corresponds to an organization criterion proposed by Geiger et al. [3]. Figure 3: Temporal behavior of each node in the network shown in Fig. 2(a). Each plot shows the status of the corresponding node with respect to time. The dashed line is 0.5. Shape information Shape information plays a central role in Gestalt principles and is incorporated through enhancing lateral connections. In this paper, we consider local symmetry. Let j and k be two neighboring nodes of i:

w_ij = 1 + C exp(-|α_ij - α_ki| / K_α) exp(-(L_j/L_k + L_k/L_j - 2) / K_L),

where C, K_α, and K_L are parameters and L_j is the length of segment j. Essentially the lateral connections are strengthened when the two neighboring segments of i are symmetric. Preferences Human perceptual systems often prefer some organizations over others. Here we incorporated a well-known figure-ground segregation principle, called closeness. In other words, the system prefers filled regions over holes. In the current implementation, we set B_i = 1.0 if node i is part of a hole and B_i = 0 otherwise. 2.3 Temporal Properties of the Network After we construct the figure-ground segregation network, each node is updated according to (1). Fig. 3 shows the temporal behavior of the network shown in Fig. 2(a). The system approaches a stable solution. For figure-ground segregation, we can binarize the status of each node using threshold 0.5. Thus the system generates the desired percept in a few iterations. The black region occludes other regions while gray regions occlude the white region. For example, P_5 is close to 1 and thus segment 5 is figural, and P_1 is close to 0 and thus segment 1 is in the background.
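The symmetry-modulated lateral weight from Section 2.2 is a simple closed-form expression. A small sketch (the parameter values C, K_α, K_L are placeholders of mine; the paper's fixed parameter set is not listed in this excerpt):

```python
import math

def lateral_weight(alpha_ij, alpha_ki, L_j, L_k, C=1.0, K_alpha=0.5, K_L=0.5):
    """Lateral connection strength for node i with neighbours j and k:
       w_ij = 1 + C * exp(-|alpha_ij - alpha_ki| / K_alpha)
                * exp(-(L_j/L_k + L_k/L_j - 2) / K_L)
    The weight peaks at 1 + C when the two neighbouring segments are
    locally symmetric (equal angles and equal lengths)."""
    angle_term = math.exp(-abs(alpha_ij - alpha_ki) / K_alpha)
    length_term = math.exp(-(L_j / L_k + L_k / L_j - 2.0) / K_L)
    return 1.0 + C * angle_term * length_term
```

Both factors equal 1 only in the fully symmetric case, so any angle or length mismatch strictly reduces the coupling toward the baseline weight of 1.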
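The temporal evolution of Section 2.3 can be illustrated on a toy network: Euler-integrate Eq. (1) (bias term omitted) and apply the pairwise normalization after each step. The four-node setup and all parameter values below are hypothetical, chosen only to show the boundary-pair competition converging:

```python
def step(P, neighbors, w, junction_H, pairs, mu_L=1.0, mu_J=1.0, dt=0.2):
    """One Euler step of Eq. (1) without the bias term, followed by the
    pairwise normalization P_i <- P_i / (P_i + P_j)."""
    dP = []
    for i in range(len(P)):
        lateral = mu_L * sum(w[i][k] * (P[k] - P[i]) for k in neighbors[i])
        junction = mu_J * (1.0 - P[i]) * junction_H[i]
        dP.append(lateral + junction)
    P = [P[i] + dt * dP[i] for i in range(len(P))]
    for i, j in pairs:               # boundary-pair competition
        s = P[i] + P[j]
        P[i], P[j] = P[i] / s, P[j] / s
    return P

# Two-segment regions A = {0, 1} and B = {2, 3}; boundary pairs (0,2), (1,3).
# A T-junction supports node 0 (positive summed H value); B gets no support.
P = [0.5, 0.5, 0.5, 0.5]
neighbors = {0: [1], 1: [0], 2: [3], 3: [2]}
w = [[1.0] * 4 for _ in range(4)]
junction_H = [0.5, 0.0, 0.0, 0.0]    # hypothetical junction term for node 0
pairs = [(0, 2), (1, 3)]
for _ in range(100):
    P = step(P, neighbors, w, junction_H, pairs)
# Region A's segments win the competition (P > 0.5); region B's lose.
```

The junction term drives node 0 toward figural status, the lateral coupling drags its region-mate node 1 along, and the normalization forces the paired nodes 2 and 3 into the background, mirroring the convergence plotted in Fig. 3.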
2.4 Surface Completion After figure-ground segregation is resolved, surface completion and shape decomposition are implemented through diffusion [3]. Each boundary segment is associated with regional attributes, such as the average intensity value, because its ownership is known. Boundary segments are then grouped into diffusion groups based on similarities of their regional attributes and on whether they are occluded by common regions. In Fig. 1(b), three diffusion groups are formed, namely, the black region, the two gray regions, and the white region. Segments in one diffusion group are diffused simultaneously. For a figural segment, a buffer with a given radius is generated. Within the buffer, the values are fixed to 1 for pixels belonging to the region and 0 otherwise. Now the problem becomes a well-defined mathematical problem: we need to solve the heat equation with given boundary conditions. Currently, the heat equation is solved through local diffusion. The results from diffusion are then binarized using threshold 0.5. Figure 4: Images with virtual contours. In each column, the top shows the input image and the bottom the surface completion result, where completed surfaces are shown according to their relative depths and the bottom one is the projection of all the completed surfaces. (a) Alternate pacman. (b) Reverse-contrast pacman. (c) Kanizsa triangle. (d) Woven square. (e) Double pacman. Fig. 2(b) shows the results for Fig. 1(b) after surface completion. Here the two gray regions are grouped together through surface completion because occluded boundaries allow diffusion. The white region becomes the background, which is the entire image. 3 Experimental Results Given an image, the system automatically constructs the network and establishes the connections based on the rules discussed in Section 2.2. For all the experiments shown here, a fixed set of parameters is used.
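The local diffusion used for surface completion in Section 2.4 amounts to relaxing the heat equation with buffer cells clamped to 1 (figure) or 0 (ground). A 1-D Jacobi sketch (the paper diffuses over 2-D images; this strip version is only illustrative):

```python
def diffuse(u, fixed, iters=500):
    """Jacobi relaxation of the 1-D heat equation: clamped cells keep
    their values, free cells relax to the average of their neighbours."""
    for _ in range(iters):
        new = u[:]
        for i in range(1, len(u) - 1):
            if not fixed[i]:
                new[i] = 0.5 * (u[i - 1] + u[i + 1])
        u = new
    return u

# Buffer cells clamped to 1 (inside the figural region) on the left and
# to 0 (outside) on the right; the free gap in between is filled in.
u     = [1.0, 1.0, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.0, 0.0]
fixed = [True, True] + [False] * 7 + [True, True]
u = diffuse(u, fixed)
binar = [1 if v >= 0.5 else 0 for v in u]   # threshold at 0.5, as in the paper
```

The relaxed solution is the linear ramp between the clamped values, and thresholding at 0.5 assigns each free cell to the nearer surface, which is exactly the completion behavior used to group the occluded gray regions in Fig. 2(b).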
3.1 Modal and Amodal Completion We first demonstrate that the system can simulate virtual contours and modal completion. Fig. 4 shows the input images and surface completion results. The system correctly solves the figure-ground segregation problem and generates the most probable percept. Fig. 4(a) and (b) show two variations of pacman images [9] [4]. Even though the edges have opposite contrast, the virtual rectangle is vivid. Through the boundary-pair representation, our system can handle both cases using the same network. Fig. 4(c) shows a typical virtual image [6] and the system correctly simulates the percept. In Fig. 4(d) [6], the rectangular-like frame is tilted, making the order between the frame and virtual square not well-defined. Our system handles that in the temporal domain. At any given time, the system outputs one of the completed surfaces. Due to this, the system can also handle the case in Fig. 4(e) [2], where the percept is bistable, as the order between the two virtual squares is not well defined. Figure 5: Surface completion results. (a) and (b) Bregman figures [1]. (c) and (d) Surface completion results for (a) and (b). (e) and (f) An image of some groceries and surface completion result. Fig. 5(a) and (b) show the well-known Bregman figures [1]. In Fig. 5(a), there is no perceptual grouping and parts of B's remain fragmented. However, when occlusion is introduced as in Fig. 5(b), perceptual grouping is evident and fragments of B's are grouped together. Our results, shown in Fig. 5(c) and (d), are consistent with the percepts. Fig. 5(e) shows an image of groceries, which is used extensively in [8]. Even though the T-junction at the bottom is locally confusing, our system gives the most plausible result through lateral influences of the other two strong T-junctions. Without search and parameter tuning, our system gives the optimal solution shown in Fig. 5(f).
3.2 Comparison with Existing Approaches As mentioned earlier, at a minimum, figure-ground segregation and grouping need to be addressed for perceptual organization. Edge-based approaches [4] [10] attempt to solve both problems simultaneously by preferring some configurations over combinatorially many others according to certain criteria. Several difficulties are common to those approaches. First, they cannot account for the different human percepts of cases where the edge elements are similar; Fig. 5(a) and (b) are well-known examples in this regard. Another example is that the edge-only version of Fig. 4(c) does not give rise to a vivid virtual contour as Fig. 4(c) does [6]. To reduce the potential search space, the contrast signs of edges are often used as additional constraints [10]. However, both Fig. 4(a) and (b) give rise to virtual contours despite the opposite edge contrast signs. Essentially based on Fig. 4(b), Grossberg and Mingolla [4] claimed that illusory contours can join edges with different directions of contrast, which does not hold in general. As demonstrated through our experiments, our approach offers a common principle underlying these examples. Our approach shares some similarities with the one by Geiger et al. [3]. In both approaches, perceptual organization is solved in two steps. In [3], figure-ground segregation is encoded implicitly in hypotheses which are defined at junction points. Because the potential hypotheses are combinatorial, only a few manually chosen ones are tested in their experiments, which is not sufficient for a general computational model. In our approach, by resolving figure-ground segregation first, there is no need to define hypotheses explicitly. In both methods, grouping is implemented through diffusion. In [3], "heat" sources for diffusion are given manually for each hypothesis, whereas our approach generates "heat" sources automatically using the figure-ground segregation results.
Finally, in our approach, local ambiguities can be resolved through lateral connections using temporal dynamics, resulting in robust behavior. To obtain good results for Fig. 5(e), Nitzberg et al. [8] need to tune parameters and substantially increase their search space due to the misleading T-junction at the bottom of Fig. 5(e).

4 Conclusion In this paper we have proposed a network for perceptual organization using temporal dynamics. The pair-wise boundary representation resolves the ownership ambiguity inherent in an edge-based representation and is equivalent to a surface representation through diffusion, providing a unified edge- and surface-based representation. Through temporal dynamics, our model allows for interactions among different modules, and top-down influences can be incorporated.

Acknowledgments The authors would like to thank S. C. Zhu and M. Wu for their valuable discussions. This research is partially supported by an NSF grant (IRI-9423312) and an ONR Young Investigator Award (N00014-96-1-0676) to DLW.

References [1] A. S. Bregman, "Asking the 'what for' question in auditory perception," in Perceptual Organization, M. Kubovy and J. R. Pomerantz, eds., Lawrence Erlbaum Associates, Hillsdale, New Jersey, pp. 99-118, 1981. [2] M. Fahle and G. Palm, "Perceptual rivalry between illusory and real contours," Biological Cybernetics, vol. 66, pp. 1-8, 1991. [3] D. Geiger, H. Pao, and N. Rubin, "Salient and multiple illusory surfaces," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 118-124, 1998. [4] S. Grossberg and E. Mingolla, "Neural dynamics of perceptual grouping: textures, boundaries, and emergent segmentations," Perception & Psychophysics, vol. 38, pp. 141-170, 1985. [5] D. G. Lowe, Perceptual Organization and Visual Recognition, Kluwer Academic Publishers, Boston, 1985. [6] G. Kanizsa, Organization in Vision, Praeger, New York, 1979. [7] K. Nakayama, Z. J. He, and S.
Shimojo, "Visual surface representation: a critical link between lower-level and higher-level vision," in Visual Cognition, S. M. Kosslyn and D. N. Osherson, eds., The MIT Press, Cambridge, Massachusetts, vol. 2, pp. 1-70, 1995. [8] M. Nitzberg, D. Mumford, and T. Shiota, Filtering, Segmentation and Depth, Springer-Verlag, New York, 1993. [9] R. Shapley and J. Gordon, "The existence of interpolated illusory contours depends on contrast and spatial separation," in The Perception of Illusory Contours, S. Petry and G. E. Meyer, eds., Springer-Verlag, New York, pp. 109-115, 1987. [10] L. R. Williams and A. R. Hanson, "Perceptual completion of occluded surfaces," Computer Vision and Image Understanding, vol. 64, pp. 1-20, 1996.
Learning Informative Statistics: A Nonparametric Approach John W. Fisher III, Alexander T. Ihler, and Paul A. Viola Massachusetts Institute of Technology 77 Massachusetts Ave., 35-421 Cambridge, MA 02139 {fisher,ihler,viola}@ai.mit.edu Abstract We discuss an information theoretic approach for categorizing and modeling dynamic processes. The approach can learn a compact and informative statistic which summarizes past states to predict future observations. Furthermore, the uncertainty of the prediction is characterized nonparametrically by a joint density over the learned statistic and the present observation. We discuss the application of the technique to both noise-driven dynamical systems and random processes sampled from a density which is conditioned on the past. In the first case we show results in which both the dynamics of a random walk and the statistics of the driving noise are captured. In the second case we present results in which a summarizing statistic is learned on noisy random telegraph waves with differing dependencies on past states. In both cases the algorithm yields a principled approach for discriminating processes with differing dynamics and/or dependencies. The method is grounded in ideas from information theory and nonparametric statistics. 1 Introduction Noisy dynamical processes abound in the world: human speech, the frequency of sunspots, and the stock market are common examples. These processes can be difficult to model and categorize because current observations depend on the past in complex ways. Classical models come in two sorts: those that assume that the dynamics are linear and the noise is Gaussian (e.g., Wiener filters); and those that assume that the dynamics are discrete (e.g., HMMs). These approaches are widely popular because they are tractable and well understood. Unfortunately there are many processes for which the underlying theoretical assumptions of these models are false.
For example we may wish to analyze a system with linear dynamics and non-Gaussian noise, or we may wish to model a system with an unknown number of discrete states. We present an information-theoretic approach for analyzing stochastic dynamic processes which can model simple processes like those mentioned above, while retaining the flexibility to model a wider range of more complex processes. The key insight is that we can often learn a simplifying informative statistic of the past from samples using nonparametric estimates of both entropy and mutual information. Within this framework we can predict future states and, of equal importance, characterize the uncertainty accompanying those predictions. This nonparametric model is flexible enough to describe uncertainty which is more complex than second-order statistics. In contrast, techniques which use squared prediction error to drive learning are focused on the mode of the distribution. Taking an example from financial forecasting: while the most likely sequence of pricing events is of interest, one would also like to know the accompanying distribution of price values (i.e., even if the most likely outcome is appreciation in the price of an asset, knowledge of a lower, but not insignificant, probability of depreciation is also valuable). Towards that end we describe an approach that allows us to simultaneously learn the dependencies of the process on the past as well as the uncertainty of future states. Our approach is novel in that we fold in concepts from information theory, nonparametric statistics, and learning. In the two types of stochastic processes we will consider, the challenge is to summarize the past in an efficient way. In the absence of a known dynamical or probabilistic model, can we learn an informative statistic (ideally a sufficient statistic) of the past which minimizes our uncertainty about future states?
In the classical linear state-space approach, uncertainty is characterized by mean squared error (MSE), which implicitly assumes Gaussian statistics. There are, however, linear systems with interesting behavior due to non-Gaussian statistics which violate the assumption underlying MSE. There are also nonlinear systems and purely probabilistic processes which exhibit complex behavior and are poorly characterized by mean squared error and/or the assumption of Gaussian noise. Our approach is applicable to both types of processes. Because it is based on nonparametric statistics, we characterize the uncertainty of predictions in a very general way: by a density of possible future states. Consequently the resulting system captures both the dynamics of the system (through a parameterization) and the statistics of the driving noise (through nonparametric modeling). The model can then be used to classify new signals and make predictions about the future.

2 Learning from Stationary Processes In this paper we will consider two related types of stochastic processes, depicted in figure 1. These processes differ in how current observations are related to the past. The first type of process, described by the following equations, is a discrete-time dynamical (possibly nonlinear) system:

x_k = G({x_{k-1}}_N; w_g) + η_k ;  {x_k}_N = {x_k, ..., x_{k-(N-1)}}   (1)

where x_k, the state of the process at time k, is a function of the N previous states and the present value of η. In general the sequence {x_k} is not stationary (in the strict sense); however, under fairly mild conditions on {η_k}, namely that {η_k} is a sequence of i.i.d. random variables (which we will always assume to be true), the sequence

ε_k = x_k - G({x_{k-1}}_N; w_g)   (2)

is stationary. Often termed an innovation sequence, for our purposes the stationarity of (2) will suffice.
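The stationarity of the innovation sequence can be made concrete with a small sketch. This is an illustration, not the paper's code; it uses the random-walk case G({x_{k-1}}_1) = x_{k-1} discussed later in Section 4.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random walk: x_k = x_{k-1} + eta_k, with i.i.d. driving noise eta_k.
eta = rng.normal(0.0, 1.0, size=10_000)
x = np.cumsum(eta)

# x_k itself is non-stationary (its variance grows with k), but the
# innovations eps_k = x_k - G({x_{k-1}}_1) = x_k - x_{k-1} are stationary
# and recover the driving noise exactly.
eps = np.diff(x)

print(round(eps.mean(), 2), round(eps.std(), 2))  # moments of eta: ~0 and ~1
```

In the general case G must be estimated, but the same subtraction yields the stationary residual sequence on which the entropy criterion below operates.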
This leads to a prediction framework for estimating the dynamical parameters, w_g, of the system, to which we will adjoin a nonparametric characterization of uncertainty. The second type of process we consider is described by a conditional probability density:

x_k ~ p(x_k | {x_{k-1}}_N)   (3)

In this case it is only the conditional statistics of {x_k} that we are concerned with, and they are, by definition, constant.

Figure 1: Two related systems: (a) a dynamical system driven by stationary noise and (b) a probabilistic system dependent on the finite past. The dotted box indicates the source of the stochastic process, while the solid box indicates the learning algorithm.

3 Learning Informative Statistics with Nonparametric Estimators We propose to determine the system parameters by minimizing the entropy of the error residuals for systems of type (a). Parametric entropy optimization approaches have been proposed (e.g., [4]); the novelty of our approach, however, is that we estimate entropy nonparametrically: the differential entropy integral is approximated using a function of the Parzen kernel density estimator [5] (in all experiments we use the Gaussian kernel). It can be shown that minimizing the entropy of the error residuals is equivalent to maximizing their likelihood [1]. In this light, the proposed criterion is seeking the maximum likelihood estimate of the system parameters using a nonparametric description of the noise density. Consequently, we solve for the system parameters and the noise density jointly. While there is no explicit dynamical system in the second system type, we do assume that the conditional statistics of the observed sequence are constant (or at worst slowly changing, for an on-line learning algorithm). In this case we desire to minimize the uncertainty of predictions of future samples by summarizing information from the past.
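A nonparametric entropy estimate of the kind described above can be sketched as follows. This is a hedged reconstruction, not the paper's exact estimator: it assumes the common resubstitution form H ≈ -(1/K) Σ_j log p̂(ε_j), where p̂ is a Parzen window with a Gaussian kernel of width sigma (a free parameter here).

```python
import numpy as np

def parzen_entropy(samples, sigma=0.25):
    """Resubstitution estimate of differential entropy with a Gaussian
    Parzen window:  H ~ -(1/N) sum_j log p_hat(x_j),
    where p_hat(x) = (1/N) sum_i K(x - x_i; sigma)."""
    x = np.asarray(samples, dtype=float)
    diffs = x[:, None] - x[None, :]
    kernel = np.exp(-0.5 * (diffs / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    p_hat = kernel.mean(axis=1)            # Parzen density at each sample
    return -np.mean(np.log(p_hat))
```

Minimizing this quantity over the residuals ε_k, as a function of the system parameters w_g, is one way to realize the entropy criterion; a gradient of this expression with respect to w_g can be computed by the chain rule.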
The challenge is to do so efficiently via a function of recent samples. Ideally we would like to find a sufficient statistic of the past; however, without an explicit description of the density we opt instead for an informative statistic. By informative statistic we simply mean one which reduces the conditional entropy of future samples. If the statistic were sufficient, then the mutual information would have reached a maximum [1]. As in the previous case, we propose to find such a statistic by maximizing the nonparametric mutual information, as defined by

w_f* = argmax_{w_f} Î(x_k, F({x_{k-1}}_N; w_f))   (5)
     = argmax_{w_f} [Ĥ(x_k) + Ĥ(F({x_{k-1}}_N; w_f)) - Ĥ(x_k, F({x_{k-1}}_N; w_f))]   (6)
     = argmin_{w_f} Ĥ(x_k | F({x_{k-1}}_N; w_f))   (7)

By equation (6) this is equivalent to optimizing the joint and marginal entropies (which we do in practice) or, by equation (7), to minimizing the conditional entropy. We have previously presented two related methods for incorporating kernel-based density estimators into an information theoretic learning framework [2, 3]. We chose the method of [3] because it provides an exact gradient of an approximation to entropy and, more importantly, can be converted into an implicit error function, thereby reducing the computational cost.

4 Distinguishing Random Walks: An Example In a random walk the feedback function is G({x_{k-1}}_1) = x_{k-1}. The noise is assumed to be independent and identically distributed (i.i.d.). Although the sequence x_k is non-stationary, the increments (x_k - x_{k-1}) are stationary. In this context, estimating the statistics of the residuals allows for discrimination between two random walk processes with differing noise densities. Furthermore, as we will demonstrate empirically, even when one of the processes is driven by Gaussian noise (an implicit assumption of the MMSE criterion), such knowledge may not be sufficient to distinguish one process from another. Figure 2 shows two random walk realizations and their associated noise densities (solid lines).
One is driven by Gaussian noise (η_k ~ N(0, 1)), while the other is driven by a bimodal mixture of Gaussians (η_k ~ ½N(0.95, 0.3) + ½N(-0.95, 0.3); note that both densities are zero-mean and unit-variance). During learning, the process was modeled as fifth-order auto-regressive (AR5). One hundred samples were drawn from a realization of each type and the AR parameters were estimated using the standard MMSE approach and the approach described above. With regard to parameter estimation, both methods (as expected) yield essentially the same parameters, with the first coefficient being near unity and the remaining coefficients being near zero. We are interested in the ability to distinguish one process from another. As mentioned, the current approach jointly estimates the parameters of the system as well as the density of the noise. The nonparametric estimates are shown in figure 2 (dotted lines). These estimates are then used to compute the accumulated average log-likelihood

L(ε_k) = (1/k) Σ_{i=1}^{k} log p(ε_i)

of the residual sequence (ε_k ≈ η_k) under the known and learned densities (figure 3). It is striking (but not surprising) that L(ε_k) of the bimodal mixture under the Gaussian model (dashed lines, top) does not differ significantly from that of the Gaussian-driven increments process (solid lines, top). The explanation follows from the fact that

E[L(ε)] = -(H(p_f(ε)) + D(p_f(ε) || p(ε)))   (8)

where p_f(ε) is the true density of ε (bimodal), p(ε) is the assumed density of the likelihood test (unit-variance Gaussian), and D(·||·) is the Kullback-Leibler divergence [1]. In this case, D(p(ε)||p_f(ε)) is relatively small (not true for D(p_f(ε)||p(ε))) and H(p_f(ε)) is less than the entropy of the unit-variance Gaussian (for fixed variance, the Gaussian density has maximum entropy). The consequence is that the likelihood test under the Gaussian assumption does not reliably distinguish the two processes.
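The qualitative claim can be checked with a short simulation: the average log-likelihood under the Gaussian model barely separates the two increment processes, while the bimodal model separates them clearly. This is an illustrative sketch, not the paper's experiment; the mixture parameters follow the values quoted above, with 0.3 read as a standard deviation and equal mixing weights assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

def log_gauss(x, mu=0.0, sigma=1.0):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def log_bimodal(x, mu=0.95, sigma=0.3):
    # equal-weight mixture of two Gaussians (zero mean, ~unit variance)
    return np.log(0.5 * np.exp(log_gauss(x, +mu, sigma))
                  + 0.5 * np.exp(log_gauss(x, -mu, sigma)))

# increments (innovations) of the two random walks
eps_gauss = rng.normal(0.0, 1.0, n)
eps_bimod = np.where(rng.random(n) < 0.5, 1.0, -1.0) * 0.95 + rng.normal(0.0, 0.3, n)

# average log-likelihood L(eps) under each model:
# under the Gaussian model the two processes score almost identically,
# but under the bimodal model they are far apart.
print(log_gauss(eps_gauss).mean(), log_gauss(eps_bimod).mean())
print(log_bimodal(eps_gauss).mean(), log_bimodal(eps_bimod).mean())
```

This is exactly the effect of Eq. (8): the lower entropy of the bimodal density compensates its divergence from the Gaussian, so the Gaussian scores are indistinguishable.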
The likelihood test under the bimodal density or its nonparametric estimate (figure 3, bottom) does distinguish the two. The method described is not limited to linear dynamic models. It can certainly be used for nonlinear models, so long as the dynamics can be well approximated by differentiable functions. Examples for multi-layer perceptrons are described in [3].

Figure 2: Random walk examples (left); comparison of known to learned densities (right).

Figure 3: L(ε_k) under known models (left) as compared to learned models (right).

5 Learning the Structure of a Noisy Random Telegraph Wave A noisy random telegraph wave (RTW) can be described by figure 1(b). Our goal is not to demonstrate that we can analyze random telegraph waves, but rather that we can robustly learn an informative statistic of the past for such a process. We define a noisy random telegraph wave as a sequence x_k ~ N(μ_k, σ), where μ_k ∈ {±μ} is binomially distributed with switching probability

P{μ_k = -μ_{k-1}} = α^{(1/(μN)) |Σ_{i=1}^{N} x_{k-i}|}   (9)

where N(μ_k, σ) is Gaussian and α < 1. This process is interesting because the parameters are random functions of a nonlinear combination of the set {x_k}_N. Depending on the value of N, we observe different switching dynamics. Figure 4 shows examples of such signals for N = 20 (left) and N = 4 (right). Rapid switching dynamics are possible for both signals, while N = 20 has periods with longer duration than N = 4.

Figure 4: Noisy random telegraph wave: N = 20 (left), N = 4 (right).

In our experiments we learn a sufficient statistic of the form

F({x_k}_past) = g(Σ_{i=1}^{M} w_i x_{k-i})   (10)

where g(·) is the hyperbolic tangent function (i.e., F is a one-layer perceptron). Note that a multi-layer perceptron could also be used [3]. In our experiments we train on 100 samples of noisy RTW(N=20) and RTW(N=4).
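A noisy RTW generator can be sketched as below. Note this rests on one plausible reading of the garbled Eq. (9): the switching probability is taken to be α raised to the normalized absolute sum of the last N samples, so that runs of same-sign samples suppress switching. Treat the exact exponent as an assumption.

```python
import numpy as np

def noisy_rtw(T, N, mu=1.0, sigma=0.2, a=0.5, seed=0):
    """Simulate a noisy random telegraph wave: x_k ~ N(mu_k, sigma),
    mu_k in {+mu, -mu}.  The switching rule below is one plausible
    reading of Eq. (9):
        P{mu_k = -mu_{k-1}} = a ** ((1/(mu*N)) * |sum of the last N x's|)
    With a < 1, agreement among recent samples lowers the switch probability,
    so larger N produces longer-lived states."""
    rng = np.random.default_rng(seed)
    x = np.zeros(T)
    mu_k = mu
    for k in range(T):
        past = x[max(0, k - N):k]
        p_switch = a ** (abs(past.sum()) / (mu * N)) if k else a
        if rng.random() < p_switch:
            mu_k = -mu_k
        x[k] = rng.normal(mu_k, sigma)
    return x
```

Samples drawn this way reproduce the qualitative behavior described in the text: both N = 4 and N = 20 switch rapidly at times, but N = 20 sustains longer same-sign periods.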
We then learn statistics for each type of process using M = {4, 5, 15, 20, 25}. This tests situations in which the memory depth is under-specified, over-specified, and perfectly specified. We denote by F_N({x_k}_M) the statistic trained on an RTW(N) process with a memory depth of M. Since we implicitly learn a joint density over (x_k, F_N({x_k}_M)), synthesis is possible by sampling from that density. Figure 5 compares synthesis using the described method (bottom) to a Wiener filter (top) estimated over the same data. The results using the information theoretic approach (bottom) preserve the structure of the RTW, while the Wiener filter results do not. This was achieved by collapsing the information of past samples into a single statistic (avoiding high-dimensional density estimation).

Figure 5: Comparison of the Wiener filter (top) and the nonparametric approach (bottom) for synthesis.

Figure 6: Informative statistics for noisy random telegraph waves. M = 25, trained on N equal to 4 (left) and 20 (right).

Figure 6 shows the joint density over (x_k, F_N({x_k}_M)) for N = {4, 20} and M = 25. We see that the estimated densities are not separable, and by virtue of this fact the learned statistic conveys information about the future. Figure 7 shows results from 100 Monte Carlo trials. In this case the depth of the statistic is matched to the process. Each plot shows the accumulated conditional log-likelihood

L(ε_k) = (1/k) Σ_{i=1}^{k} log p(x_i | F_N({x_{i-1}}_M))

under the learned statistic, with error bars. Figure 8 shows similar results after varying the memory depth M = {4, 5, 15, 20, 25} of the statistic. The figures illustrate robustness to the choice of memory depth M. This is not to say that memory depth doesn't matter; that is, there must be some information to exploit, but the empirical results indicate that useful information was extracted.
6 Conclusions We have described a nonparametric approach for finding informative statistics. The approach is novel in that learning is derived from nonparametric estimators of entropy and mutual information. This provides a means by which to (1) efficiently summarize the past, (2) predict the future, and (3) characterize the uncertainty of those predictions beyond second-order statistics. Furthermore, this was accomplished without the strong assumptions accompanying parametric approaches.

Figure 7: Conditional L(ε_k). The solid line indicates RTW(N=20) while the dashed line indicates RTW(N=4). Thick lines indicate the average over all Monte Carlo runs while the thin lines indicate ±1 standard deviation. The left plot uses a statistic trained on RTW(N=20) while the right plot uses a statistic trained on RTW(N=4).

Figure 8: Repeat of figure 7 for the cases M = {4, 5, 15, 20, 25}. Obvious breaks indicate a new set of trials.

We also presented empirical results which illustrate the utility of our approach. The example of random walk served as a simple illustration of learning a dynamic system in spite of the over-specification of the AR model. More importantly, we demonstrated the ability to learn both the dynamics and the statistics of the underlying noise process. This information was later used to distinguish realizations by their nonparametric densities, something not possible using MMSE error prediction. Even more compelling were the experiments with noisy random telegraph waves. We demonstrated the algorithm's ability to learn a compact statistic which efficiently summarizes the past for process identification. The method exhibited robustness to the number of parameters of the learned statistic. For example, despite over-specifying the dependence of the memory-4 process in three of the cases, a useful statistic was still found.
Conversely, despite the memory-20 statistic being under-specified in three of the experiments, useful information from the available past was extracted. It is our opinion that this method provides an alternative to some of the traditional and connectionist approaches to time-series analysis. The use of nonparametric estimators adds flexibility to the class of densities which can be modeled and places less of a constraint on the exact form of the summarizing statistic.

References [1] T. Cover and J. Thomas. Elements of Information Theory. John Wiley & Sons, New York, 1991. [2] P. Viola et al. Empirical entropy manipulation for real world problems. In Mozer, Touretsky, and Hasselmo, editors, Advances in Neural Information Processing Systems, pages ?-?, 1996. [3] J. W. Fisher and J. C. Principe. A methodology for information theoretic feature extraction. In A. Stuberud, editor, Proc. of the IEEE Int. Joint Conf. on Neural Networks, pages ?-?, 1998. [4] J. Kapur and H. Kesavan. Entropy Optimization Principles with Applications. Academic Press, New York, 1992. [5] E. Parzen. On estimation of a probability density function and mode. Ann. of Math. Stats., 33:1065-1076, 1962.
Emergence of Topography and Complex Cell Properties from Natural Images using Extensions of ICA Aapo Hyvärinen and Patrik Hoyer Neural Networks Research Center Helsinki University of Technology P.O. Box 5400, FIN-02015 HUT, Finland aapo.hyvarinen@hut.fi, patrik.hoyer@hut.fi http://www.cis.hut.fi/projects/ica/ Abstract Independent component analysis of natural images leads to emergence of simple cell properties, i.e. linear filters that resemble wavelets or Gabor functions. In this paper, we extend ICA to explain further properties of V1 cells. First, we decompose natural images into independent subspaces instead of scalar components. This model leads to emergence of phase- and shift-invariant features, similar to those of V1 complex cells. Second, we define a topography between the linear components obtained by ICA. The topographic distance between two components is defined by their higher-order correlations, so that two components are close to each other in the topography if they are strongly dependent on each other. This leads to simultaneous emergence of both topography and invariances similar to complex cell properties. 1 Introduction A fundamental approach in signal processing is to design a statistical generative model of the observed signals. Such an approach is also useful for modeling the properties of neurons in primary sensory areas. The basic models that we consider here express a static monochrome image I(x, y) as a linear superposition of some features or basis functions b_i(x, y):

I(x, y) = Σ_{i=1}^{n} b_i(x, y) s_i   (1)

where the s_i are stochastic coefficients, different for each image I(x, y). Estimation of the model in Eq. (1) consists of determining the values of s_i and b_i(x, y) for all i and (x, y), given a sufficient number of observations of images, or in practice, image patches I(x, y). We restrict ourselves here to the basic case where the b_i(x, y) form an invertible linear system.
Then we can invert the system: s_i = <w_i, I>, where the w_i denote the inverse filters and <w_i, I> = Σ_{x,y} w_i(x, y) I(x, y) denotes the dot-product. The w_i(x, y) can then be identified as the receptive fields of the model simple cells, and the s_i are their activities when presented with a given image patch I(x, y). In the basic case, we assume that the s_i are nongaussian and mutually independent. This type of decomposition is called independent component analysis (ICA) [3, 9, 1, 8], or sparse coding [13]. Olshausen and Field [13] showed that when this model is estimated with input data consisting of patches of natural scenes, the obtained filters w_i(x, y) have the three principal properties of simple cells in V1: they are localized, oriented, and bandpass (selective to scale/frequency). Van Hateren and van der Schaaf [15] compared quantitatively the obtained filters w_i(x, y) with those measured by single-cell recordings in the macaque cortex, and found a good match for most of the parameters. We show in this paper that simple extensions of the basic ICA model explain the emergence of further properties of V1 cells: topography and the invariances of complex cells. Due to space limitations, we can only give the basic ideas in this paper; more details can be found in [6, 5, 7]. First, using the method of feature subspaces [11], we model the response of a complex cell as the norm of the projection of the input vector (image patch) onto a linear subspace, which is equivalent to the classical energy models. Then we maximize the independence between the norms of such projections, or energies. Thus we obtain features that are localized in space, oriented, and bandpass, like those given by simple cells, or Gabor analysis. In contrast to simple linear filters, however, the obtained feature subspaces also show emergence of phase invariance and (limited) shift or translation invariance.
Maximizing the independence, or equivalently, the sparseness of the norms of the projections onto feature subspaces thus allows for the emergence of exactly those invariances that are encountered in complex cells. Second, we extend this model of independent subspaces so that we have overlapping subspaces, and every subspace corresponds to a neighborhood on a topographic grid. This is called topographic ICA, since it defines a topographic organization between components. Components that are far from each other on the grid are independent, as in ICA. In contrast, components that are near to each other are not independent: they have strong higher-order correlations. This model shows the emergence of both complex cell properties and topography from image data. 2 Independent subspaces as complex cells In addition to the simple cells that can be modelled by basic ICA, another important class of cells in V1 is complex cells. The two principal properties that distinguish complex cells from simple cells are phase invariance and (limited) shift invariance. The purpose of the first model in this paper is to explain the emergence of such phase- and shift-invariant features using a modification of the ICA model. The modification is based on combining the principle of invariant-feature subspaces [11] and the model of multidimensional independent component analysis [2]. Invariant feature subspaces. The principle of invariant-feature subspaces states that one may consider an invariant feature as a linear subspace in a feature space. The value of the invariant, higher-order feature is given by (the square of) the norm of the projection of the given data point on that subspace, which is typically spanned by lower-order features. A feature subspace, as any linear subspace, can always be represented by a set of orthogonal basis vectors, say w_i(x, y), i = 1, ..., m, where m is the dimension of the subspace.
Then the value F(I) of the feature F with input vector I(x, y) is given by F(I) = Σ_{i=1}^{m} <w_i, I>², where a square root might be taken. In fact, this is equivalent to computing the distance between the input vector I(x, y) and a general linear combination of the basis vectors (filters) w_i(x, y) of the feature subspace [11]. In [11], it was shown that this principle, when combined with competitive learning techniques, can lead to emergence of invariant image features. Multidimensional independent component analysis. In multidimensional independent component analysis [2] (see also [12]), a linear generative model as in Eq. (1) is assumed. In contrast to ordinary ICA, however, the components (responses) s_i are not assumed to be all mutually independent. Instead, it is assumed that the s_i can be divided into couples, triplets, or in general m-tuples, such that the s_i inside a given m-tuple may be dependent on each other, but dependencies between different m-tuples are not allowed. Every m-tuple of s_i corresponds to m basis vectors b_i(x, y). The m-dimensional probability densities inside the m-tuples of s_i are not specified in advance in the general definition of multidimensional ICA [2]. In the following, let us denote by J the number of independent feature subspaces, and by S_j, j = 1, ..., J the sets of indices of the s_i belonging to the subspace of index j. Independent feature subspaces. Invariant-feature subspaces can be embedded in multidimensional independent component analysis by considering probability distributions for the m-tuples of s_i that are spherically symmetric, i.e. depend only on the norm. In other words, the probability density p_j(·) of the m-tuple with index j ∈ {1, ..., J} can be expressed as a function of the sum of the squares of the s_i, i ∈ S_j, only. For simplicity, we assume further that the p_j(·) are equal for all j, i.e. for all subspaces.
Assume that the data consists of K observed image patches I_k(x, y), k = 1, ..., K. Then the logarithm of the likelihood L of the data given the model can be expressed as

log L(w_i(x, y), i = 1, ..., n) = Σ_{k=1}^{K} Σ_{j=1}^{J} log p(Σ_{i∈S_j} <w_i, I_k>²) + K log |det W|   (2)

where p(Σ_{i∈S_j} s_i²) = p_j(s_i, i ∈ S_j) gives the probability density inside the j-th m-tuple of s_i, and W is a matrix containing the filters w_i(x, y) as its columns. As in basic ICA, prewhitening of the data allows us to consider the w_i(x, y) to be orthonormal, and this implies that log |det W| is zero [6]. Thus we see that the likelihood in Eq. (2) is a function of the norms of the projections of I_k(x, y) on the subspaces indexed by j, which are spanned by the orthonormal basis sets given by w_i(x, y), i ∈ S_j. Since the norm of the projection of visual data on practically any subspace has a supergaussian distribution, we need to choose the probability density p in the model to be sparse [13], i.e. supergaussian [8]. For example, we could use the following probability distribution:

log p(Σ_{i∈S_j} s_i²) = -α [Σ_{i∈S_j} s_i²]^{1/2} + β   (3)

which could be considered a multi-dimensional version of the exponential distribution. Now we see that the estimation of the model consists of finding subspaces such that the norms of the projections of the (whitened) data on those subspaces have maximally sparse distributions. The introduced "independent (feature) subspace analysis" is a natural generalization of ordinary ICA. In fact, if the projections on the subspaces are reduced to dot-products, i.e. projections on 1-D subspaces, the model reduces to ordinary ICA (provided that, in addition, the independent components are assumed to have non-skewed distributions). It is to be expected that the norms of the projections on the subspaces represent some higher-order, invariant features.
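The likelihood of Eq. (2) with the sparse density of Eq. (3) can be evaluated directly. The sketch below is an illustration under the stated assumptions (whitened data, orthonormal filters), so the K log|det W| term drops out; it is not the authors' estimation code.

```python
import numpy as np

def subspace_loglik(W, patches, subspaces, alpha=1.0, beta=0.0):
    """Log-likelihood of Eq. (2) with the density of Eq. (3).

    W:         (n, d) array; rows are orthonormal filters w_i
               (data assumed whitened, so K*log|det W| = 0).
    patches:   (K, d) array of vectorized image patches I_k.
    subspaces: list of index lists S_j, one per feature subspace.
    """
    s = patches @ W.T                                  # responses s_i = <w_i, I_k>
    loglik = 0.0
    for S in subspaces:
        energy = (s[:, S] ** 2).sum(axis=1)            # sum of s_i^2, i in S_j
        loglik += np.sum(-alpha * np.sqrt(energy) + beta)  # log p from Eq. (3)
    return loglik
```

Maximizing this quantity over orthonormal W (e.g., by projected gradient ascent) searches for subspaces whose projection norms are maximally sparse, which is exactly the estimation principle stated in the text.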
The exact nature of the invariances has not been specified in the model but will emerge from the input data, using only the prior information on their independence. When independent subspace analysis is applied to natural image data, we can identify the norms of the projections (Σ_{i∈S_j} s_i²)^{1/2} as the responses of the complex cells. If the individual filter vectors w_i(x, y) are identified with the receptive fields of simple cells, this can be interpreted as a hierarchical model where the complex cell response is computed from simple cell responses s_i, in a manner similar to the classical energy models for complex cells. Experiments (see below and [6]) show that the model does lead to emergence of those invariances that are encountered in complex cells. 3 Topographic ICA The independent subspace analysis model introduces a certain dependence structure for the components s_i. Let us assume that the distribution in the subspace is sparse, which means that the norm of the projection is most of the time very near to zero. This is the case, for example, if the densities inside the subspaces are specified as in (3). Then the model implies that two components s_i and s_j that belong to the same subspace tend to be nonzero simultaneously. In other words, s_i² and s_j² are positively correlated. This seems to be a preponderant structure of dependency in most natural data. For image data, this has also been noted by Simoncelli [14]. Now we generalize the model defined by (2) so that it models this kind of dependence not only inside the m-tuples, but among all "neighboring" components. A neighborhood relation defines a topographic order [10]. (A different generalization based on an explicit generative model is given in [5].) We define the model by the following likelihood:

log L(w_i(x, y), i = 1, ..., n) = Σ_{k=1}^K Σ_{j=1}^n G(Σ_{i=1}^n h(i, j) ⟨w_i, I_k⟩²) + K log |det W|   (4)

Here, h(i, j) is a neighborhood function, which expresses the strength of the connection between the i-th and j-th units.
The neighborhood function can be defined in the same way as with the self-organizing map [10]. Neighborhoods can thus be defined as one-dimensional or two-dimensional; 2-D neighborhoods can be square or hexagonal. A simple example is to define a 1-D neighborhood relation by

h(i, j) = 1 if |i - j| ≤ m, and 0 otherwise.   (5)

The constant m defines here the width of the neighborhood. The function G has a similar role as the log-density of the independent components in classic ICA. For image data, or other data with a sparse structure, G should be chosen as in independent subspace analysis, see Eq. (3). Properties of the topographic ICA model. Here, we consider for simplicity only the case of sparse data. The first basic property is that all the components s_i are uncorrelated, as can be easily proven by symmetry arguments [5]. Moreover, their variances can be defined to be equal to unity, as in classic ICA. Second, components s_i and s_j that are near to each other, i.e. such that h(i, j) is significantly non-zero, tend to be active (non-zero) at the same time. In other words, their energies s_i² and s_j² are positively correlated. Third, latent variables that are far from each other are practically independent. Higher-order correlation decreases as a function of distance, assuming that the neighborhood is defined in a way similar to that in (5). For details, see [5]. Let us note that our definition of topography by higher-order correlations is very different from the one used in practically all existing topographic mapping methods. Usually, the distance is defined by basic geometrical relations like Euclidean distance or correlation. Interestingly, our principle makes it possible to define a topography even among a set of orthogonal vectors whose Euclidean distances are all equal.
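As a concrete illustration, the 1-D neighborhood of Eq. (5) and one patch's contribution to the likelihood of Eq. (4) can be sketched as follows, taking G(u) = -α√u as suggested by Eq. (3). This is a toy computation for intuition, not the authors' estimation code:

```python
import numpy as np

def h(i, j, m=1):
    # Eq. (5): h(i, j) = 1 if |i - j| <= m, else 0
    return 1.0 if abs(i - j) <= m else 0.0

def topographic_term(s, m=1, alpha=1.0):
    # One patch's contribution sum_j G(sum_i h(i, j) * s_i^2) from Eq. (4),
    # with G(u) = -alpha * sqrt(u) as in the sparse density of Eq. (3).
    n = len(s)
    local_energies = [sum(h(i, j, m) * s[i] ** 2 for i in range(n))
                      for j in range(n)]
    return float(np.sum([-alpha * np.sqrt(e) for e in local_energies]))

# a single active component spreads its energy to neighbors on the 1-D lattice
s = np.array([0.0, 2.0, 0.0, 0.0])
val = topographic_term(s, m=1)
```

Because h spreads each s_i² to its neighbors, nearby units share local energy, which is exactly the higher-order correlation structure the model imposes.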
Such orthogonal vectors are actually encountered in ICA, where the basis vectors and filters can be constrained to be orthogonal in the whitened space. 4 Experiments with natural image data We applied our methods on natural image data. The data was obtained by taking 16 x 16 pixel image patches at random locations from monochrome photographs depicting wild-life scenes (animals, meadows, forests, etc.). Preprocessing consisted of removing the DC component and reducing the dimension of the data to 160 by PCA. For details on the experiments, see [6, 5]. Fig. 1 shows the basis vectors of the 40 feature subspaces (complex cells), when subspace dimension was chosen to be 4. It can be seen that the basis vectors associated with a single complex cell all have approximately the same orientation and frequency. Their locations are not identical, but close to each other. The phases differ considerably. Every feature subspace can thus be considered a generalization of a quadrature-phase filter pair as found in the classical energy models, enabling the cell to be selective to some given orientation and frequency, but invariant to phase and somewhat invariant to shifts. Using 4 dimensions instead of 2 greatly enhances the shift invariance of the feature subspace. In topographic ICA, the neighborhood function was defined so that every neighborhood consisted of a 3 x 3 square of 9 units on a 2-D torus lattice [10]. The obtained basis vectors are shown in Fig. 2. The basis vectors are similar to those obtained by ordinary ICA of image data [13, 1]. In addition, they have a clear topographic organization. Moreover, the connection to independent subspace analysis is clear from Fig. 2. Two neighboring basis vectors in Fig. 2 tend to be of the same orientation and frequency. Their locations are near to each other as well. In contrast, their phases are very different. This means that a neighborhood of such basis vectors, i.e. simple cells, is similar to an independent subspace.
Thus it functions as a complex cell. This was demonstrated in detail in [5]. 5 Discussion We introduced here two extensions of ICA that are especially useful for image modelling. The first model uses a subspace representation to model invariant features. It turns out that the independent subspaces of natural images are similar to complex cells. The second model is a further extension of the independent subspace model. This topographic ICA model is a generative model that combines topographic mapping with ICA. As in all topographic mappings, the distance in the representation space (on the topographic "grid") is related to some measure of distance between represented components. In topographic ICA, the distance between represented components is defined by higher-order correlations, which gives the natural distance measure in the context of ICA. An approach closely related to ours is given by Kohonen's Adaptive-Subspace Self-Organizing Map [11]. However, the emergence of shift invariance in [11] was conditional on restricting consecutive patches to come from nearby locations in the image, giving the input data a temporal structure like in a smoothly changing image sequence. Similar developments were given by Földiák [4]. In contrast to these two theories, we formulated an explicit image model. This independent subspace analysis model shows that emergence of complex cell properties is possible using patches at random, independently selected locations, which proves that there is enough information in static images to explain the properties of complex cells. Moreover, by extending this subspace model to model topography, we showed that the emergence of both topography and complex cell properties can be explained by a single principle: neighboring cells should have strong higher-order correlations. References [1] A.J. Bell and T.J. Sejnowski. The 'independent components' of natural scenes are edge filters. Vision Research, 37:3327-3338, 1997.
[2] J.-F. Cardoso. Multidimensional independent component analysis. In Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP'98), Seattle, WA, 1998. [3] P. Comon. Independent component analysis - a new concept? Signal Processing, 36:287-314, 1994. [4] P. Földiák. Learning invariance from transformation sequences. Neural Computation, 3:194-200, 1991. [5] A. Hyvärinen and P. O. Hoyer. Topographic independent component analysis. 1999. Submitted, available at http://www.cis.hut.fi/~aapo/. [6] A. Hyvärinen and P. O. Hoyer. Emergence of phase and shift invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 2000. (in press). [7] A. Hyvärinen, P. O. Hoyer, and M. Inki. The independence assumption: Analyzing the independence of the components by topography. In M. Girolami, editor, Advances in Independent Component Analysis. Springer-Verlag, 2000. (in press). [8] A. Hyvärinen and E. Oja. A fast fixed-point algorithm for independent component analysis. Neural Computation, 9(7):1483-1492, 1997. [9] C. Jutten and J. Hérault. Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture. Signal Processing, 24:1-10, 1991. [10] T. Kohonen. Self-Organizing Maps. Springer-Verlag, Berlin, Heidelberg, New York, 1995. [11] T. Kohonen. Emergence of invariant-feature detectors in the adaptive-subspace self-organizing map. Biological Cybernetics, 75:281-291, 1996. [12] J. K. Lin. Factorizing multivariate function classes. In Advances in Neural Information Processing Systems, volume 10, pages 563-569. The MIT Press, 1998. [13] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996. [14] E. P. Simoncelli and O. Schwartz. Modeling surround suppression in V1 neurons with a statistically-derived normalization model. In Advances in Neural Information Processing Systems 11, pages 153-159.
MIT Press, 1999. [15] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. Royal Society ser. B, 265:359-366, 1998. Figure 1: Independent subspaces of natural image data. The model gives Gabor-like basis vectors for image windows. Every group of four basis vectors corresponds to one independent feature subspace, or complex cell. Basis vectors in a subspace are similar in orientation, location and frequency. In contrast, their phases are very different. Figure 2: Topographic ICA of natural image data. This gives Gabor-like basis vectors as well. Basis vectors that are similar in orientation, location and/or frequency are close to each other. The phases of nearby basis vectors are very different, giving each neighborhood properties similar to a complex cell.
|
1999
|
8
|
1,731
|
Rules and Similarity in Concept Learning Joshua B. Tenenbaum Department of Psychology Stanford University, Stanford, CA 94305 jbt@psych.stanford.edu Abstract This paper argues that two apparently distinct modes of generalizing concepts - abstracting rules and computing similarity to exemplars - should both be seen as special cases of a more general Bayesian learning framework. Bayes explains the specific workings of these two modes - which rules are abstracted, how similarity is measured - as well as why generalization should appear rule- or similarity-based in different situations. This analysis also suggests why the rules/similarity distinction, even if not computationally fundamental, may still be useful at the algorithmic level as part of a principled approximation to fully Bayesian learning. 1 Introduction In domains ranging from reasoning to language acquisition, a broad view is emerging of cognition as a hybrid of two distinct modes of computation, one based on applying abstract rules and the other based on assessing similarity to stored exemplars [7]. Much support for this view comes from the study of concepts and categorization. In generalizing concepts, people's judgments often seem to reflect both rule-based and similarity-based computations [9], and different brain systems are thought to be involved in each case [8]. Recent psychological models of classification typically incorporate some combination of rule-based and similarity-based modules [1,4]. In contrast to this currently popular modularity position, I will argue here that rules and similarity are best seen as two ends of a continuum of possible concept representations. In [11,12], I introduced a general theoretical framework to account for how people can learn concepts from just a few positive examples based on the principles of Bayesian inference. Here I explore how this framework provides a unifying explanation for these two apparently distinct modes of generalization. 
The Bayesian framework not only includes both rules and similarity as special cases but also addresses several questions that conventional modular accounts do not. People employ particular algorithms for selecting rules and measuring similarity. Why these algorithms as opposed to any others? People's generalizations appear to shift from similarity-like patterns to rule-like patterns in systematic ways, e.g., as the number of examples observed increases. Why these shifts? This short paper focuses on a simple learning game involving number concepts, in which both rule-like and similarity-like generalizations clearly emerge in the judgments of human subjects. Imagine that I have written some short computer programs which take as input a natural number and return as output either "yes" or "no" according to whether that number satisfies some simple concept. Some possible concepts might be "x is odd", "x is between 30 and 45", "x is a power of 3", or "x is less than 10". For simplicity, we assume that only numbers under 100 are under consideration. The learner is shown a few randomly chosen positive examples - numbers that the program says "yes" to - and must then identify the other numbers that the program would accept. This task, admittedly artificial, nonetheless draws on people's rich knowledge of number while remaining amenable to theoretical analysis. Its structure is meant to parallel more natural tasks, such as word learning, that often require meaningful generalizations from only a few positive examples of a concept. Section 2 presents representative experimental data for this task. Section 3 describes a Bayesian model and contrasts its predictions with those of models based purely on rules or similarity. Section 4 summarizes and discusses the model's applicability to other domains.
2 The number concept game Eight subjects participated in an experimental study of number concept learning, under essentially the same instructions as those given above [11]. On each trial, subjects were shown one or more random positive examples of a concept and asked to rate the probability that each of 30 test numbers would belong to the same concept as the examples observed. X denotes the set of examples observed on a particular trial, and n the number of examples. Trials were designed to fall into one of three classes. Figure 1a presents data for two representative trials of each class. Bar heights represent the average judged probabilities that particular test numbers fall under the concept given one or more positive examples X, marked by "*"s. Bars are shown only for those test numbers rated by subjects; missing bars do not denote zero probability of generalization, merely missing data. On class I trials, subjects saw only one example of each concept: e.g., X = {16} and X = {60}. To minimize bias, these trials preceded all others on which multiple examples were given. Given only one example, people gave most test numbers fairly similar probabilities of acceptance. Numbers that were intuitively more similar to the example received slightly higher ratings: e.g., for X = {16}, 8 was more acceptable than 9 or 6, and 17 more than 87; for X = {60}, 50 was more acceptable than 51, and 63 more than 43. The remaining trials each presented four examples and occurred in pseudorandom order. On class II trials, the examples were consistent with a simple mathematical rule: X = {16, 8, 2, 64} or X = {60, 80, 10, 30}. Note that the obvious rules, "powers of two" and "multiples of ten", are in no way logically implied by the data.
"Multiples of five" is a possibility in the second case, and "even numbers" or "all numbers under 80" are possibilities in both, not to mention other logically possible but psychologically implausible candidates, such as "all powers of two, except 32 or 4". Nonetheless, subjects overwhelmingly followed an all-or-none pattern of generalization, with all test numbers rated near 0 or 1 according to whether they satisfied the single intuitively "correct" rule. These preferred rules can be loosely characterized as the most specific rules (i.e., with smallest extension) that include all the examples and that also meet some criterion of psychological simplicity. On class III trials, the examples satisfied no simple mathematical rule but did have similar magnitudes: X = {16, 23, 19, 20} and X = {60, 52, 57, 55}. Generalization now followed a similarity gradient along the dimension of magnitude. Probability ratings fell below 0.5 for numbers more than a characteristic distance ε beyond the largest or smallest observed examples - roughly the typical distance between neighboring examples (≈ 2 or 3). Logically, there is no reason why participants could not have generalized according to various complex rules that happened to pick out the given examples, or according to very different values of ε, yet all subjects displayed more or less the same similarity gradients. To summarize these data, generalization from a single example followed a weak similarity gradient based on both mathematical and magnitude properties of numbers. When several more examples were observed, generalization evolved into either an all-or-none pattern determined by the most specific simple rule, or, when no simple rule applied, a more articulated magnitude-based similarity gradient falling off with characteristic distance ε roughly equal to the typical separation between neighboring examples.
Similar patterns were observed on several trials not shown (including one with a different value of ε) and on two other experiments in quite different domains (described briefly in Section 4). 3 The Bayesian model In [12], I introduced a Bayesian framework for concept learning in the context of learning axis-parallel rectangles in a multidimensional feature space. Here I show that the same framework can be adapted to the more complex situation of learning number concepts and can explain all of the phenomena of rules and similarity documented above. Formally, we observe n positive examples X = {x^(1), ..., x^(n)} of concept C and want to compute p(y ∈ C|X), the probability that some new object y belongs to C given the observations X. Inductive leverage is provided by a hypothesis space H of possible concepts and a probabilistic model relating hypotheses h to data X. The hypothesis space. Elements of H correspond to subsets of the universe of objects that are psychologically plausible candidates for the extensions of concepts. Here the universe consists of numbers between 1 and 100, and the hypotheses correspond to subsets such as the even numbers, the numbers between 1 and 10, etc. The hypotheses can be thought of in terms of either rules or similarity, i.e., as potential rules to be abstracted or as features entering into a similarity computation, but Bayes does not distinguish these interpretations. Because we can capture only a fraction of the hypotheses people might bring to this task, we would like an objective way to focus on the most relevant parts of people's hypothesis space. One such method is additive clustering (ADCLUS) [6,10], which extracts a set of features that best accounts for subjects' similarity judgments on a given set of objects. These features simply correspond to subsets of objects and are thus naturally identified with hypotheses for concept learning.
Applications of ADCLUS to similarity judgments for the numbers 0-9 reveal two kinds of subsets [6,10]: numbers sharing a common mathematical property, such as {2, 4, 8} and {3, 6, 9}, and consecutive numbers of similar magnitude, such as {1, 2, 3, 4} and {2, 3, 4, 5, 6}. Applying ADCLUS to the full set of numbers from 1 to 100 is impractical, but we can construct an analogous hypothesis space for this domain based on the two kinds of hypotheses found in the ADCLUS solution for 0-9. One group of hypotheses captures salient mathematical properties: odd, even, square, cube, and prime numbers, multiples and powers of small numbers (≤ 12), and sets of numbers ending in the same digit. A second group of hypotheses, representing the dimension of numerical magnitude, includes all intervals of consecutive numbers with endpoints between 1 and 100. Priors and likelihoods. The probabilistic model consists of a prior p(h) over H and a likelihood p(X|h) for each hypothesis h ∈ H. Rather than assigning prior probabilities to each of the 5083 hypotheses individually, I adopted a hierarchical approach based on the intuitive division of H into mathematical properties and magnitude intervals. A fraction λ of the total probability was allocated to the mathematical hypotheses as a group, leaving (1 - λ) for the magnitude hypotheses. The λ probability was distributed uniformly across the mathematical hypotheses. The (1 - λ) probability was distributed across the magnitude intervals as a function of interval size according to an Erlang distribution, p(h) ∝ (|h|/σ²) e^{-|h|/σ}, to capture the intuition that intervals of some intermediate size are more likely than those of very large or small size. λ and σ are treated as free parameters of the model. The likelihood is determined by the assumption of randomly sampled positive examples. In the simplest case, each example in X is assumed to be independently sampled from a uniform density over the concept C.
For n examples we then have:

p(X|h) = 1/|h|ⁿ if ∀j, x^(j) ∈ h, and 0 otherwise,   (1)

where |h| denotes the size of the subset h. For example, if h denotes the even numbers, then |h| = 50, because there are 50 even numbers between 1 and 100. Equation 1 embodies the size principle for scoring hypotheses: smaller hypotheses assign greater likelihood than do larger hypotheses to the same data, and they assign exponentially greater likelihood as the number of consistent examples increases. The size principle plays a key role in learning concepts from only positive examples [12], and, as we will see below, in determining the appearance of rule-like or similarity-like modes of generalization. Given these priors and likelihoods, the posterior p(h|X) follows directly from Bayes' rule. Finally, we compute the probability of generalization to a new object y by averaging the predictions of all hypotheses weighted by their posterior probabilities p(h|X):

p(y ∈ C|X) = Σ_{h∈H} p(y ∈ C|h) p(h|X).   (2)

Equation 2 follows from the conditional independence of X and the membership of y in C, given h. To evaluate Equation 2, note that p(y ∈ C|h) is simply 1 if y ∈ h, and 0 otherwise. Model results. Figure 1b shows the predictions of this Bayesian model (with λ = 1/2, σ = 10). The model captures the main features of the data, including convergence to the most specific rule on Class II trials and to appropriately shaped similarity gradients on Class III trials. We can understand the transitions between graded, similarity-like and all-or-none, rule-like regimes of generalization as arising from the interaction of the size principle (Equation 1) with hypothesis averaging (Equation 2). Because each hypothesis h contributes to the average in Equation 2 in proportion to its posterior probability p(h|X), the degree of uncertainty in p(h|X) determines whether generalization will be sharp or graded.
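To make the size principle and hypothesis averaging concrete, here is a small sketch of the number game with a deliberately reduced hypothesis space (a handful of mathematical properties plus all intervals). The hypothesis list and the λ and σ settings are illustrative stand-ins for the paper's full 5083-hypothesis space, not a reproduction of it:

```python
import numpy as np

hypotheses = {
    "even": set(range(2, 101, 2)),
    "odd": set(range(1, 101, 2)),
    "powers of two": {2, 4, 8, 16, 32, 64},
    "multiples of ten": set(range(10, 101, 10)),
    "multiples of five": set(range(5, 101, 5)),
}
for a in range(1, 101):            # all magnitude intervals [a, b]
    for b in range(a, 101):
        hypotheses[f"[{a},{b}]"] = set(range(a, b + 1))

def prior(name, h, lam=0.5, sigma=10.0):
    if name.startswith("["):       # Erlang prior over interval sizes
        return (1 - lam) * (len(h) / sigma**2) * np.exp(-len(h) / sigma)
    return lam / 5.0               # uniform over the 5 mathematical hypotheses

def posterior(X):
    # size principle (Eq. 1): p(X|h) = 1/|h|^n for consistent h, else 0
    scores = {name: prior(name, h) / len(h) ** len(X)
              for name, h in hypotheses.items() if set(X) <= h}
    Z = sum(scores.values())
    return {name: s / Z for name, s in scores.items()}

def p_in_concept(y, X):
    # hypothesis averaging (Eq. 2)
    return sum(p for name, p in posterior(X).items() if y in hypotheses[name])
```

With X = {16, 8, 2, 64} the posterior concentrates on "powers of two" and generalization is nearly all-or-none, while with X = {16, 23, 19, 20} many overlapping intervals survive and `p_in_concept` falls off gradually with distance from the examples.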
When p(h|X) is very spread out, many distinct hypotheses contribute significantly, resulting in a broad gradient of generalization. When p(h|X) is concentrated on a single hypothesis h*, only h* contributes significantly and generalization appears all-or-none. The degree of uncertainty in p(h|X) is in turn a consequence of the size principle. Given a few examples consistent with one hypothesis that is significantly smaller than the next-best competitor - such as X = {16, 8, 2, 64}, where "powers of two" is significantly smaller than "even numbers" - then the smallest hypothesis becomes exponentially more likely than any other and generalization appears to follow this most specific rule. However, given only one example (such as X = {16}), or given several examples consistent with many similarly sized hypotheses - such as X = {16, 23, 19, 20}, where the top candidates are all very similar intervals: "numbers between 16 and 23", "numbers between 15 and 24", etc. - the size-based likelihood favors the smaller hypotheses only slightly, p(h|X) is spread out over many overlapping hypotheses and generalization appears to follow a gradient of similarity. That the Bayesian model predicts the right shape for the magnitude-based similarity gradients on Class III trials is no accident. The characteristic distance ε of the Bayesian generalization gradient varies with the uncertainty in p(h|X), which (for interval hypotheses) can be shown to covary with the intuitively relevant factor of average separation between neighboring examples. Bayes vs. rules or similarity alone. It is instructive to consider two special cases of the Bayesian model that are equivalent to conventional similarity-based and rule-based algorithms from the concept learning literature. What I call the SIM algorithm was pioneered by [5] and also described in [2,3] as a Bayesian approach to learning concepts from both positive and negative evidence.
SIM replaces the size-based likelihood with a binary likelihood that measures only whether a hypothesis is consistent with the examples: p(X|h) = 1 if ∀j, x^(j) ∈ h, and 0 otherwise. Generalization under SIM is just a count of the features shared by y and all the examples in X, independent of the frequency of those features or the number of examples seen. As Figure 1c shows, SIM successfully models generalization from a single example (Class I) but fails to capture how generalization sharpens up after multiple examples, to either the most specific rule (Class II) or a magnitude-based similarity gradient with appropriate characteristic distance ε (Class III). What I call the MIN algorithm preserves the size principle but replaces the step of hypothesis averaging with maximization: p(y ∈ C|X) = 1 if y ∈ arg max_h p(X|h), and 0 otherwise. MIN is perhaps the oldest algorithm for concept learning [3] and, as a maximum likelihood algorithm, is asymptotically equivalent to Bayes. Its success for finite amounts of data depends on how peaked p(h|X) is (Figure 1d). MIN always selects the most specific consistent rule, which is reasonable when that hypothesis is much more probable than any other (Class II), but too conservative in other cases (Classes I and III). In quantitative terms, the predictions of Bayes correlate much more highly with the observed data (R² = 0.91) than do the predictions of either SIM (R² = 0.74) or MIN (R² = 0.47). In sum, only the full Bayesian framework can explain the full range of rule-like and similarity-like generalization patterns observed on this task. 4 Discussion Experiments in two other domains provide further support for Bayes as a unifying framework for concept learning. In the context of multidimensional continuous feature spaces, similarity gradients are the default mode of generalization [5].
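The two special cases can be seen directly in code: SIM flattens the likelihood to bare consistency, while MIN collapses the average onto the single smallest consistent hypothesis. The tiny hypothesis list below is illustrative only, and SIM is rendered here as a normalized count over consistent hypotheses:

```python
def sim(y, X, hypotheses):
    # binary likelihood: fraction of consistent hypotheses that contain y
    consistent = [h for h in hypotheses if set(X) <= h]
    return sum(y in h for h in consistent) / len(consistent)

def min_rule(y, X, hypotheses):
    # size principle + maximization: generalize only inside the smallest
    # (most specific) hypothesis consistent with the examples
    smallest = min((h for h in hypotheses if set(X) <= h), key=len)
    return 1.0 if y in smallest else 0.0

toy = [{2, 4, 8, 16, 32, 64},      # powers of two
       set(range(2, 101, 2)),      # even numbers
       set(range(1, 80))]          # numbers under 80
```

For X = {16, 8, 2, 64}, `min_rule` gives an all-or-none pattern (1 for 32, 0 for 10), while `sim` gives 10 a graded score of 2/3 because two of the three consistent toy hypotheses contain it.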
Bayes successfully models how the shape of those gradients depends on the distribution and number of examples; SIM and MIN do not [12]. Bayes also successfully predicts how fast these similarity gradients converge to the most specific consistent rule. Convergence is quite slow in this domain (n ≈ 50) because the hypothesis space consists of densely overlapping subsets - axis-parallel rectangles - much like the interval hypotheses in the Class III number tasks. Another experiment engaged a word-learning task, using photographs of real objects as stimuli and a cover story of learning a new language [11]. On each trial, subjects saw either one example of a novel word (e.g., a toy animal labeled with "Here is a blicket."), or three examples at one of three different levels of specificity: subordinate (e.g., 3 dalmatians labeled with "Here are three blickets."), basic (e.g., 3 dogs), or superordinate (e.g., 3 animals). They then were asked to pick the other instances of that concept from a set of 24 test objects, containing matches to the example(s) at all levels (e.g., other dalmatians, dogs, animals) as well as many non-matching objects. Figure 2 shows data and predictions for all three models. Similarity-like generalization given one example rapidly converged to the most specific rule after only three examples were observed, just as in the number task (Classes I and II) but in contrast to the axis-parallel rectangle task or the Class III number tasks, where similarity-like responding was still the norm after three or four examples. For modeling purposes, a hypothesis space was constructed from a hierarchical clustering of subjects' similarity judgments (augmented by an a priori preference for basic-level concepts) [11].
The Bayesian model successfully predicts rapid convergence from a similarity gradient to the minimal rule, because the smallest hypothesis consistent with each example set is significantly smaller than the next-best competitor (e.g., "dogs" is significantly smaller than "dogs and cats", just as with "multiples of ten" vs. "multiples of five"). Bayes fits the full data extremely well (R² = 0.98); by comparison, SIM (R² = 0.83) successfully accounts for only the n = 1 trials and MIN (R² = 0.76), the n = 3 trials. In conclusion, a Bayesian framework is able to account for both rule- and similarity-like modes of generalization, as well as the dynamics of transitions between these modes, across several quite different domains of concept learning. The key features of the Bayesian model are hypothesis averaging and the size principle. The former allows either rule-like or similarity-like behavior depending on the uncertainty in the posterior probability. The latter determines this uncertainty as a function of the number and distribution of examples and the structure of the learner's hypothesis space. With sparsely overlapping hypotheses - i.e., the most specific hypothesis consistent with the examples is much smaller than its nearest competitors - convergence to a single rule occurs rapidly, after just a few examples. With densely overlapping hypotheses - i.e., many consistent hypotheses of comparable size - convergence to a single rule occurs much more slowly, and a gradient of similarity is the norm after just a few examples. Importantly, the Bayesian framework does not so much obviate the distinction between rules and similarity as explain why it might be useful in understanding the brain. As Figures 1 and 2 show, special cases of Bayes corresponding to the SIM and MIN algorithms consistently account for distinct and complementary regimes of generalization.
SIM, without the size principle, works best given only one example or densely overlapping hypotheses, when Equation 1 does not generate large differences in likelihood. MIN, without hypothesis averaging, works best given many examples or sparsely overlapping hypotheses, when the most specific hypothesis dominates the sum over H in Equation 2. In light of recent brain-imaging studies dissociating rule- and exemplar-based processing [8], the Bayesian theory may best be thought of as a computational-level account of concept learning, with multiple subprocesses - perhaps subserving SIM and MIN - implemented in distinct neural circuits. I hope to explore this possibility in future work. References [1] M. Erickson & J. Kruschke (1998). Rules and exemplars in category learning. JEP: General 127, 107-140. [2] D. Haussler, M. Kearns, & R. Schapire (1994). Bounds on the sample complexity of Bayesian learning using information theory and the VC-dimension. Machine Learning 14, 83-113. [3] T. Mitchell (1997). Machine Learning. McGraw-Hill. [4] R. Nosofsky & T. Palmeri (1998). A rule-plus-exception model for classifying objects in continuous-dimension spaces. Psychonomic Bull. & Rev. 5, 345-369. [5] R. Shepard (1987). Towards a universal law of generalization for psychological science. Science 237, 1317-1323. [6] R. Shepard & P. Arabie (1979). Additive clustering: Representation of similarities as combinations of discrete overlapping properties. Psych. Rev. 86, 87-123. [7] S. Sloman & L. Rips (1998). Similarity and Symbols in Human Thinking. MIT Press. [8] E. Smith, A. Patalano & J. Jonides (1998). Alternative strategies of categorization. In [7]. [9] E. Smith & S. Sloman (1994). Similarity- vs. rule-based categorization. Mem. & Cog. 22, 377. [10] J. Tenenbaum (1996). Learning the structure of similarity. NIPS 8. [11] J. Tenenbaum (1999). A Bayesian Framework for Concept Learning. Ph.D. Thesis, MIT. [12] J. Tenenbaum (1999). Bayesian modeling of human concept learning. NIPS 11.
Rules and Similarity in Concept Learning

[Figure 1: Data and model predictions for the number concept task. Panels: (a) average generalization judgments; (b) Bayesian model; (c) pure similarity model (SIM); (d) pure rule model (MIN); each shown over the numbers 1-100 for the example sets X = 16; X = 16, 8, 2, 64; X = 16, 23, 19, 20; X = 60; X = 60, 80, 10, 30; and X = 60, 52, 57, 55.]

[Figure 2: Data and model predictions for the word learning task. Panels (a)-(d) as in Figure 1, for training sets of 1 example and of 3 subordinate, 3 basic, or 3 superordinate examples.]
| 1999 | 80 | 1,732 |
Support Vector Method for Novelty Detection
Bernhard Schölkopf*, Robert Williamson§, Alex Smola§, John Shawe-Taylor†, John Platt*
* Microsoft Research Ltd., 1 Guildhall Street, Cambridge, UK; § Department of Engineering, Australian National University, Canberra 0200; † Royal Holloway, University of London, Egham, UK; * Microsoft, 1 Microsoft Way, Redmond, WA, USA
bsc/jplatt@microsoft.com, Bob.Williamson/Alex.Smola@anu.edu.au, john@dcs.rhbnc.ac.uk

Abstract: Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified ν between 0 and 1. We propose a method to approach this problem by trying to estimate a function f which is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. We provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabelled data.

1 INTRODUCTION

During recent years, a new set of kernel techniques for supervised learning has been developed [8]. Specifically, support vector (SV) algorithms for pattern recognition, regression estimation and solution of inverse problems have received considerable attention. There have been a few attempts to transfer the idea of using kernels to compute inner products in feature spaces to the domain of unsupervised learning. The problems in that domain are, however, less precisely specified. Generally, they can be characterized as estimating functions of the data which tell you something interesting about the underlying distributions.
For instance, kernel PCA can be characterized as computing functions which on the training data produce unit variance outputs while having minimum norm in feature space [4]. Another kernel-based unsupervised learning technique, regularized principal manifolds [6], computes functions which give a mapping onto a lower-dimensional manifold minimizing a regularized quantization error. Clustering algorithms are further examples of unsupervised learning techniques which can be kernelized [4]. An extreme point of view is that unsupervised learning is about estimating densities. Clearly, knowledge of the density of P would then allow us to solve whatever problem can be solved on the basis of the data. The present work addresses an easier problem: it proposes an algorithm which computes a binary function which is supposed to capture regions in input space where the probability density lives (its support), i.e. a function such that most of the data will live in the region where the function is nonzero [5]. In doing so, it is in line with Vapnik's principle never to solve a problem which is more general than the one we actually need to solve. Moreover, it is applicable also in cases where the density of the data's distribution is not even well-defined, e.g. if there are singular components. Part of the motivation for the present work was the paper [1]. It turns out that there is a considerable amount of prior work in the statistical literature; for a discussion, cf. the full version of the present paper [3].

2 ALGORITHMS

We first introduce terminology and notation conventions. We consider training data $x_1, \ldots, x_\ell \in X$, where $\ell \in \mathbb{N}$ is the number of observations, and $X$ is some set. For simplicity, we think of it as a compact subset of $\mathbb{R}^N$. Let $\Phi$ be a feature map $X \to F$, i.e. a map into a dot product space $F$ such that the dot product in the image of $\Phi$ can be computed by evaluating some simple kernel [8]

$$k(x, y) = (\Phi(x) \cdot \Phi(y)), \qquad (1)$$

such as the Gaussian kernel

$$k(x, y) = \exp(-\|x - y\|^2 / c). \qquad (2)$$

Indices $i$ and $j$ are understood to range over $1, \ldots, \ell$ (in compact notation: $i, j \in [\ell]$). Bold face Greek letters denote $\ell$-dimensional vectors whose components are labelled using normal face typeset. In the remainder of this section, we shall develop an algorithm which returns a function $f$ that takes the value $+1$ in a "small" region capturing most of the data points, and $-1$ elsewhere. Our strategy is to map the data into the feature space corresponding to the kernel, and to separate them from the origin with maximum margin. For a new point $x$, the value $f(x)$ is determined by evaluating which side of the hyperplane it falls on, in feature space. Via the freedom to utilize different types of kernel functions, this simple geometric picture corresponds to a variety of nonlinear estimators in input space. To separate the data set from the origin, we solve the following quadratic program:

$$\min_{w \in F,\; \xi \in \mathbb{R}^\ell,\; \rho \in \mathbb{R}} \;\; \tfrac{1}{2}\|w\|^2 + \tfrac{1}{\nu\ell} \sum_i \xi_i - \rho \qquad (3)$$
$$\text{subject to } (w \cdot \Phi(x_i)) \geq \rho - \xi_i, \quad \xi_i \geq 0. \qquad (4)$$

Here, $\nu \in (0, 1)$ is a parameter whose meaning will become clear later. Since nonzero slack variables $\xi_i$ are penalized in the objective function, we can expect that if $w$ and $\rho$ solve this problem, then the decision function $f(x) = \mathrm{sgn}((w \cdot \Phi(x)) - \rho)$ will be positive for most examples $x_i$ contained in the training set, while the SV-type regularization term $\|w\|$ will still be small. The actual trade-off between these two goals is controlled by $\nu$. Deriving the dual problem, and using (1), the solution can be shown to have an SV expansion

$$f(x) = \mathrm{sgn}\Big( \sum_i \alpha_i k(x_i, x) - \rho \Big) \qquad (5)$$

(patterns $x_i$ with nonzero $\alpha_i$ are called SVs), where the coefficients are found as the solution of the dual problem:

$$\min_{\alpha} \; \tfrac{1}{2} \sum_{ij} \alpha_i \alpha_j k(x_i, x_j) \quad \text{subject to } 0 \leq \alpha_i \leq \tfrac{1}{\nu\ell}, \;\; \sum_i \alpha_i = 1. \qquad (6)$$

This problem can be solved with standard QP routines.
It does, however, possess features that set it apart from generic QPs, most notably the simplicity of the constraints. This can be exploited by applying a variant of SMO developed for this purpose [3]. The offset $\rho$ can be recovered by exploiting that for any $\alpha_i$ which is not at the upper or lower bound, the corresponding pattern $x_i$ satisfies $\rho = (w \cdot \Phi(x_i)) = \sum_j \alpha_j k(x_j, x_i)$. Note that if $\nu$ approaches 0, the upper boundaries on the Lagrange multipliers tend to infinity, i.e. the second inequality constraint in (6) becomes void. The problem then resembles the corresponding hard margin algorithm, since the penalization of errors becomes infinite, as can be seen from the primal objective function (3). It can be shown that if the data set is separable from the origin, then this algorithm will find the unique supporting hyperplane with the properties that it separates all data from the origin, and its distance to the origin is maximal among all such hyperplanes [3]. If, on the other hand, $\nu$ approaches 1, then the constraints alone only allow one solution: that where all $\alpha_i$ are at the upper bound $1/(\nu\ell)$. In this case, for kernels with integral 1, such as normalized versions of (2), the decision function corresponds to a thresholded Parzen windows estimator. To conclude this section, we note that one can also use balls to describe the data in feature space, close in spirit to the algorithms of [2], with hard boundaries, and [7], with "soft margins." For certain classes of kernels, such as Gaussian RBF ones, the corresponding algorithm can be shown to be equivalent to the above one [3].

3 THEORY

In this section, we show that the parameter $\nu$ characterizes the fractions of SVs and outliers (Proposition 1). Following that, we state a robustness result for the soft margin (Proposition 2) and error bounds (Theorem 5). Further results and proofs are reported in the full version of the present paper [3].
We will use italic letters to denote the feature space images of the corresponding patterns in input space, i.e. $x_i := \Phi(x_i)$.

Proposition 1 Assume the solution of (4) satisfies $\rho \neq 0$. The following statements hold: (i) $\nu$ is an upper bound on the fraction of outliers. (ii) $\nu$ is a lower bound on the fraction of SVs. (iii) Suppose the data were generated independently from a distribution $P(x)$ which does not contain discrete components. Suppose, moreover, that the kernel is analytic and non-constant. With probability 1, asymptotically, $\nu$ equals both the fraction of SVs and the fraction of outliers.

The proof is based on the constraints of the dual problem, using the fact that outliers must have Lagrange multipliers at the upper bound.

Proposition 2 Local movements of outliers parallel to $w$ do not change the hyperplane.

We now move on to the subject of generalization. Our goal is to bound the probability that a novel point drawn from the same underlying distribution lies outside of the estimated region by a certain margin. We start by introducing a common tool for measuring the capacity of a class $\mathcal{F}$ of functions that map $X$ to $\mathbb{R}$.

Definition 3 Let $(X, d)$ be a pseudo-metric space (i.e. with a distance function that differs from a metric in that it is only semidefinite), let $A$ be a subset of $X$ and $\epsilon > 0$. A set $B \subseteq X$ is an $\epsilon$-cover for $A$ if, for every $a \in A$, there exists $b \in B$ such that $d(a, b) \leq \epsilon$. The $\epsilon$-covering number of $A$, $N_d(\epsilon, A)$, is the minimal cardinality of an $\epsilon$-cover for $A$ (if there is no such finite cover then it is defined to be $\infty$).

The idea is that $B$ should be finite but approximate all of $A$ with respect to the pseudo-metric $d$. We will use the $l_\infty$ distance over a finite sample $X = (x_1, \ldots, x_\ell)$ for the pseudo-metric in the space of functions,

$$d_X(f, g) = \max_{i \in [\ell]} |f(x_i) - g(x_i)|.$$

Let $\mathcal{N}(\epsilon, \mathcal{F}, \ell) = \sup_{X \in X^\ell} N_{d_X}(\epsilon, \mathcal{F})$. Below, logarithms are to base 2.
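On a finite set, an ε-cover in the sense of Definition 3 can be built greedily. The sketch below (a standard construction, not from the paper) keeps a point only when it is more than ε from every kept point; the kept set is then a maximal ε-packing and therefore also an ε-cover, so its size upper-bounds the covering number:

```python
def linf(a, b):
    # l-infinity distance between two points given as tuples.
    return max(abs(x - y) for x, y in zip(a, b))

def greedy_cover(points, eps, dist=linf):
    # Keep a point only if every previously kept point is farther than eps.
    # The kept set is a maximal eps-packing of `points`, hence an eps-cover,
    # so len(cover) >= N_d(eps, points).
    cover = []
    for p in points:
        if all(dist(p, b) > eps for b in cover):
            cover.append(p)
    return cover

pts = [(0.0,), (0.4,), (1.0,), (5.0,), (5.3,)]
print(len(greedy_cover(pts, 1.0)))  # -> 2
```

Every input point ends up within ε of some kept point, which is exactly the covering property required by Definition 3.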
Theorem 4 Consider any distribution $P$ on $X$ and any $\theta \in \mathbb{R}$. Suppose $x_1, \ldots, x_\ell$ are generated i.i.d. from $P$. Then with probability $1 - \delta$ over such an $\ell$-sample, if we find $f \in \mathcal{F}$ such that $f(x_i) \geq \theta + \gamma$ for all $i \in [\ell]$,

$$P\{x : f(x) < \theta - \gamma\} \leq \tfrac{2}{\ell}\Big(k + \log \tfrac{2\ell}{\delta}\Big), \quad \text{where } k = \big\lceil \log \mathcal{N}(\gamma, \mathcal{F}, 2\ell) \big\rceil.$$

We now consider the possibility that for a small number of points $f(x_i)$ fails to exceed $\theta + \gamma$. This corresponds to having a non-zero slack variable $\xi_i$ in the algorithm, where we take $\theta + \gamma = \rho / \|w\|$ and use the class of linear functions in feature space in the application of the theorem. There are well-known bounds for the log covering numbers of this class. Let $f$ be a real valued function on a space $X$. Fix $\theta \in \mathbb{R}$. For $x \in X$, define $d(x, f, \gamma) = \max\{0,\, \theta + \gamma - f(x)\}$. Similarly for a training sequence $X$, we define $\mathcal{D}(X, f, \gamma) = \sum_{x \in X} d(x, f, \gamma)$.

Theorem 5 Fix $\theta \in \mathbb{R}$. Consider a fixed but unknown probability distribution $P$ on the input space $X$ and a class of real valued functions $\mathcal{F}$ with range $[a, b]$. Then with probability $1 - \delta$ over randomly drawn training sequences $X$ of size $\ell$, for all $\gamma > 0$ and any $f \in \mathcal{F}$,

$$P\{x : f(x) < \theta - \gamma \text{ and } x \notin X\} \leq \tfrac{2}{\ell}\Big(k + \log \tfrac{2\ell}{\delta}\Big),$$

where

$$k = \Big\lceil \log \mathcal{N}(\gamma/2, \mathcal{F}, 2\ell) + \tfrac{64(b-a)\,\mathcal{D}(X,f,\gamma)}{\gamma^2} \log\Big(\tfrac{e\ell}{8\,\mathcal{D}(X,f,\gamma)}\Big) \log\Big(\tfrac{32\ell(b-a)^2}{\gamma^2}\Big) \Big\rceil.$$

The theorem bounds the probability of a new point falling in the region for which $f(x)$ has value less than $\theta - \gamma$, this being the complement of the estimate for the support of the distribution. The choice of $\gamma$ gives a trade-off between the size of the region over which the bound holds (increasing $\gamma$ increases the size of the region) and the size of the probability with which it holds (increasing $\gamma$ decreases the size of the log covering numbers).
The result shows that we can bound the probability of points falling outside the region of estimated support by a quantity involving the ratio of the log covering numbers (which can be bounded by the fat shattering dimension at scale proportional to $\gamma$) and the number of training examples, plus a factor involving the 1-norm of the slack variables. It is stronger than related results given by [1], since their bound involves the square root of the ratio of the Pollard dimension (the fat shattering dimension when $\gamma$ tends to 0) and the number of training examples. The output of the algorithm described in Sec. 2 is a function $f(x) = \sum_i \alpha_i k(x_i, x)$ which is greater than or equal to $\rho - \xi_i$ on example $x_i$. Though non-linear in the input space, this function is in fact linear in the feature space defined by the kernel $k$. At the same time the 2-norm of the weight vector is given by $B = \sqrt{\alpha^T K \alpha}$, and so we can apply the theorem with the function class $\mathcal{F}$ being those linear functions in the feature space with 2-norm bounded by $B$. If we assume that $\theta$ is fixed, then $\gamma = \rho - \theta$; hence the support of the distribution is the set $\{x : f(x) \geq \theta - \gamma = 2\theta - \rho\}$, and the bound gives the probability of a randomly generated point falling outside this set, in terms of the log covering numbers of the function class $\mathcal{F}$ and the sum of the slack variables $\xi_i$. Since the log covering numbers at scale $\gamma/2$ of the class $\mathcal{F}$ can be bounded by $O(\frac{B^2}{\gamma^2} \log^2 \ell)$, this gives a bound in terms of the 2-norm of the weight vector. Ideally, one would like to allow $\theta$ to be chosen after the value of $\rho$ has been determined, perhaps as a fixed fraction of that value. This could be obtained by another level of structural risk minimisation over the possible values of $\rho$, or at least a mesh of some possible values.
This result is beyond the scope of the current preliminary paper, but the form of the result would be similar to Theorem 5, with larger constants and log factors. Whilst it is premature to give specific theoretical recommendations for practical use yet, one thing is clear from the above bound: to generalize to novel data, the decision function to be used should employ a threshold $\eta \cdot \rho$, where $\eta < 1$ (this corresponds to a nonzero $\gamma$).

4 EXPERIMENTS

We apply the method to artificial and real-world data. Figure 1 displays 2-D toy examples, and shows how the parameter settings influence the solution. Next, we describe an experiment on the USPS dataset of handwritten digits. The database contains 9298 digit images of size 16 x 16 = 256; the last 2007 constitute the test set. We trained the algorithm, using a Gaussian kernel (2) of width $c = 0.5 \cdot 256$ (a common value for SVM classifiers on that data set, cf. [2]), on the test set and used it to identify outliers; it is folklore in the community that the USPS test set contains a number of patterns which are hard or impossible to classify, due to segmentation errors or mislabelling. In the experiment, we augmented the input patterns by ten extra dimensions corresponding to the class labels of the digits. The rationale for this is that if we disregarded the labels, there would be no hope to identify mislabelled patterns as outliers. Fig. 2 shows the 20 worst outliers for the USPS test set. Note that the algorithm indeed extracts patterns which are very hard to assign to their respective classes. In the experiment, which took 36 seconds on a Pentium II running at 450 MHz, we used a $\nu$ value of 5%.

Figure 1: First two pictures: a single-class SVM applied to two toy problems; $\nu = c = 0.5$, domain $[-1, 1]^2$. Note how in both cases, at least a fraction of $\nu$ of all examples is in the estimated region (cf. table).
The large value of $\nu$ causes the additional data points in the upper left corner to have almost no influence on the decision function. For smaller values of $\nu$, such as 0.1 (third picture), the points cannot be ignored anymore. Alternatively, one can force the algorithm to take these 'outliers' into account by changing the kernel width (2): in the fourth picture, using $c = 0.1$, $\nu = 0.5$, the data is effectively analyzed on a different length scale, which leads the algorithm to consider the outliers as meaningful points.

Figure 2: Outliers identified by the proposed algorithm, ranked by the negative output of the SVM (the argument of the sgn in the decision function). The outputs (for convenience in units of $10^{-5}$) are written underneath each image in italics, the (alleged) class labels are given in bold face. Note that most of the examples are "difficult" in that they are either atypical or even mislabelled.

5 DISCUSSION

One could view the present work as an attempt to provide an algorithm which is in line with Vapnik's principle never to solve a problem which is more general than the one that one is actually interested in. E.g., in situations where one is only interested in detecting novelty, it is not always necessary to estimate a full density model of the data. Indeed, density estimation is more difficult than what we are doing, in several respects. Mathematically speaking, a density will only exist if the underlying probability measure possesses an absolutely continuous distribution function. The general problem of estimating the measure for a large class of sets, say the sets measurable in Borel's sense, is not solvable (for a discussion, see e.g. [8]). Therefore we need to restrict ourselves to making a statement about the measure of some sets.
Given a small class of sets, the simplest estimator accomplishing this task is the empirical measure, which simply looks at how many training points fall into the region of interest. Our algorithm does the opposite. It starts with the number of training points that are supposed to fall into the region, and then estimates a region with the desired property. Often, there will be many such regions; the solution becomes unique only by applying a regularizer, which in our case enforces that the region be small in a feature space associated to the kernel. This, of course, implies that the measure of smallness in this sense depends on the kernel used, in a way that is no different to any other method that regularizes in a feature space. A similar problem, however, appears in density estimation already when done in input space. Let $p$ denote a density on $X$. If we perform a (nonlinear) coordinate transformation in the input domain $X$, then the density values will change; loosely speaking, what remains constant is $p(x) \cdot dx$, while $dx$ is transformed, too. When directly estimating the probability measure of regions, we are not faced with this problem, as the regions automatically change accordingly. An attractive property of the measure of smallness that we chose to use is that it can also be placed in the context of regularization theory, leading to an interpretation of the solution as maximally smooth in a sense which depends on the specific kernel used [3]. The main inspiration for our approach stems from the earliest work of Vapnik and collaborators. They proposed an algorithm for characterizing a set of unlabelled data points by separating it from the origin using a hyperplane [9]. However, they quickly moved on to two-class classification problems, both in terms of algorithms and in the theoretical development of statistical learning theory which originated in those days.
From an algorithmic point of view, we can identify two shortcomings of the original approach which may have caused research in this direction to stop for more than three decades. Firstly, the original algorithm was limited to linear decision rules in input space; secondly, there was no way of dealing with outliers. In conjunction, these restrictions are indeed severe: a generic dataset need not be separable from the origin by a hyperplane in input space. The two modifications that we have incorporated dispose of these shortcomings. Firstly, the kernel trick allows for a much larger class of functions by nonlinearly mapping into a high-dimensional feature space, and thereby increases the chances of separability from the origin. In particular, using a Gaussian kernel (2), such a separation exists for any data set $x_1, \ldots, x_\ell$: to see this, note that $k(x_i, x_j) > 0$ for all $i, j$; thus all dot products are positive, implying that all mapped patterns lie inside the same orthant. Moreover, since $k(x_i, x_i) = 1$ for all $i$, they have unit length. Hence they are separable from the origin. The second modification allows for the possibility of outliers. We have incorporated this 'softness' of the decision rule using the $\nu$-trick and thus obtained a direct handle on the fraction of outliers. We believe that our approach, proposing a concrete algorithm with well-behaved computational complexity (convex quadratic programming) for a problem that so far has mainly been studied from a theoretical point of view, has abundant practical applications. To turn the algorithm into an easy-to-use black-box method for practitioners, questions like the selection of kernel parameters (such as the width of a Gaussian kernel) have to be tackled. It is our expectation that the theoretical results which we have briefly outlined in this paper will provide a foundation for this formidable task.

Acknowledgement.
Part of this work was supported by the ARC and the DFG (# Ja 379/9-1), and done while BS was at the Australian National University and GMD FIRST. AS is supported by a grant of the Deutsche Forschungsgemeinschaft (Sm 62/1-1). Thanks to S. Ben-David, C. Bishop, C. Schnörr, and M. Tipping for helpful discussions.

References
[1] S. Ben-David and M. Lindenbaum. Learning distributions by their density levels: A paradigm for learning without a teacher. Journal of Computer and System Sciences, 55:171-182, 1997.
[2] B. Schölkopf, C. Burges, and V. Vapnik. Extracting support data for a given task. In U. M. Fayyad and R. Uthurusamy, editors, Proceedings, First International Conference on Knowledge Discovery & Data Mining. AAAI Press, Menlo Park, CA, 1995.
[3] B. Schölkopf, J. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. Estimating the support of a high-dimensional distribution. TR MSR 99-87, Microsoft Research, Redmond, WA, 1999.
[4] B. Schölkopf, A. Smola, and K.-R. Müller. Kernel principal component analysis. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 327-352. MIT Press, Cambridge, MA, 1999.
[5] B. Schölkopf, R. Williamson, A. Smola, and J. Shawe-Taylor. Single-class support vector machines. In J. Buhmann, W. Maass, H. Ritter, and N. Tishby, editors, Unsupervised Learning, Dagstuhl-Seminar-Report 235, pages 19-20, 1999.
[6] A. Smola, R. C. Williamson, S. Mika, and B. Schölkopf. Regularized principal manifolds. In Computational Learning Theory: 4th European Conference, volume 1572 of Lecture Notes in Artificial Intelligence, pages 214-229. Springer, 1999.
[7] D. M. J. Tax and R. P. W. Duin. Data domain description by support vectors. In M. Verleysen, editor, Proceedings ESANN, pages 251-256, Brussels, 1999. D Facto.
[8] V. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
[9] V. Vapnik and A. Lerner. Pattern recognition using generalized portraits.
Avtomatika i Telemekhanika, 24:774-780, 1963.
| 1999 | 81 | 1,733 |
Generalized Model Selection For Unsupervised Learning In High Dimensions
Shivakumar Vaithyanathan, IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95136, Shiv@almaden.ibm.com
Byron Dom, IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95136, dom@almaden.ibm.com

Abstract: We describe a Bayesian approach to model selection in unsupervised learning that determines both the feature set and the number of clusters. We then evaluate this scheme (based on marginal likelihood) and one based on cross-validated likelihood. For the Bayesian scheme we derive a closed-form solution of the marginal likelihood by assuming appropriate forms of the likelihood function and prior. Extensive experiments compare these approaches, and all results are verified by comparison against ground truth. In these experiments the Bayesian scheme using our objective function gave better results than cross-validation.

1 Introduction

Recent efforts define the model selection problem as one of estimating the number of clusters [10, 17]. It is easy to see, particularly in applications with a large number of features, that various choices of feature subsets will reveal different structures underlying the data. It is our contention that this interplay between the feature subset and the number of clusters is essential to provide appropriate views of the data. We thus define the problem of model selection in clustering as selecting both the number of clusters and the feature subset. Towards this end we propose a unified objective function whose arguments include both the feature space and the number of clusters. We then describe two approaches to model selection using this objective function. The first approach is based on a Bayesian scheme using the marginal likelihood for model selection. The second approach is based on a scheme using cross-validated likelihood. In section 3 we apply these approaches to document clustering by making assumptions about the document generation model.
Further, for the Bayesian approach we derive a closed-form solution for the marginal likelihood using this document generation model. We also describe a heuristic for initial feature selection based on the distributional clustering of terms. Section 5 describes the experiments and our approach to validate the proposed models and algorithms. Section 6 reports and discusses the results of our experiments, and finally section 7 provides directions for future work.

2 Model selection in clustering

Model selection approaches in clustering have primarily concentrated on determining the number of components/clusters. These attempts include Bayesian approaches [7, 10], MDL approaches [15] and cross-validation techniques [17]. As noticed in [17], however, the optimal number of clusters is dependent on the feature space in which the clustering is performed. Related work has been described in [7].

2.1 A generalized model for clustering

Let $D$ be a data-set consisting of "patterns" $\{d_1, \ldots, d_V\}$, which we assume to be represented in some feature space $T$ with dimension $M$. The particular problem we address is that of clustering $D$ into groups such that its likelihood, described by a probability model $p(D_T \mid \Omega)$, is maximized, where $D_T$ indicates the representation of $D$ in feature space $T$ and $\Omega$ is the structure of the model, which consists of the number of clusters, the partitioning of the feature set (explained below) and the assignment of patterns to clusters. This model is a weighted sum of models $\{p(D_T \mid \Omega, \Theta) \mid \Theta \in \mathbb{R}^m\}$, where $\Theta$ is the set of all parameters associated with $\Omega$. To define our model we begin by assuming that the feature space $T$ consists of two sets: $U$ (useful features) and $N$ (noise features). Our feature-selection problem will thus consist of partitioning $T$ (into $U$ and $N$) for a given number of clusters.
Assumption 1 The feature sets represented by $U$ and $N$ are conditionally independent:

$$p(D_T \mid \Omega, \Theta) = p(D_N \mid \Omega, \Theta)\, p(D_U \mid \Omega, \Theta) \qquad (1)$$

where $D_N$ indicates data represented in the noise feature space and $D_U$ indicates data represented in the useful feature space. Using assumption 1 and assuming that the data is independently drawn, we can rewrite equation (1) as

$$p(D_T \mid \Omega, \Theta) = \prod_{i=1}^{V} p(d_i^N \mid \Theta_N) \cdot \prod_{k=1}^{K} \prod_{i \in D_k} p(d_i^U \mid \Theta_k^U) \qquad (2)$$

where $V$ is the number of patterns in $D$, $p(d_i^U \mid \Theta_k^U)$ is the probability of $d_i^U$ given the parameter vector $\Theta_k^U$, and $p(d_i^N \mid \Theta_N)$ is the probability of $d_i^N$ given the parameter vector $\Theta_N$. Note that while the explicit dependence on $\Omega$ has been removed in this notation, it is implicit in the number of clusters $K$ and the partition of $T$ into $N$ and $U$.

2.2 Bayesian approach to model selection

The objective function represented in equation (2) is not regularized, and attempts to optimize it directly may result in the set $N$ becoming empty, resulting in overfitting. To overcome this problem we use the marginal likelihood [2].

Assumption 2 All parameter vectors are independent:

$$\pi(\Theta) = \pi(\Theta_N) \cdot \prod_{k=1}^{K} \pi(\Theta_k^U)$$

where $\pi(\cdot)$ denotes a Bayesian prior distribution. The marginal likelihood, using assumption 2, can be written as

$$P(D_T \mid \Omega) = \int_{S_N} \Big[\prod_{i=1}^{V} p(d_i^N \mid \Theta_N)\Big] \pi(\Theta_N)\, d\Theta_N \cdot \prod_{k=1}^{K} \int_{S_U} \Big[\prod_{i \in D_k} p(d_i^U \mid \Theta_k^U)\Big] \pi(\Theta_k^U)\, d\Theta_k^U \qquad (3)$$

where $S_N$, $S_U$ are integral limits appropriate to the particular parameter spaces. These will be omitted to simplify the notation.

3.0 Document clustering

Document clustering algorithms typically start by representing the document as a "bag-of-words", in which the features can number $10^4$ to $10^5$. Ad-hoc dimensionality reduction techniques such as stop-word removal, frequency-based truncations [16] and techniques such as LSI [5] are available. Once the dimensionality has been reduced, the documents are usually clustered into an arbitrary number of clusters.

3.1 Multinomial models

Several models of text generation have been studied [3].
Our choice is multinomial models using term counts as the features. This choice introduces another parameter indicating the probability of the $N$ and $U$ split. This is equivalent to assuming a generation model where for each document the number of noise and useful terms is determined by a probability $\theta_S$, and then the terms in a document are "drawn" with a probability ($\theta_n$ or $\theta_u^k$).

3.2 Marginal likelihood / stochastic complexity

To apply our Bayesian objective function we begin by substituting multinomial models into (3) and simplifying to obtain

$$P(D \mid \Omega) = \binom{t^N + t^U}{t^N} \int \big[(\theta_S)^{t^N} (1 - \theta_S)^{t^U}\big] \pi(\theta_S)\, d\theta_S \cdot \prod_{k=1}^{K} \Big[\prod_{i \in D_k} \binom{t_i^U}{\{t_{i,u} \mid u \in U\}}\Big] \int \Big[\prod_{u \in U} (\theta_u^k)^{t_u^{(k)}}\Big] \pi(\Theta_k^U)\, d\Theta_k^U \cdot \Big[\prod_{j=1}^{V} \binom{t_j^N}{\{t_{j,n} \mid n \in N\}}\Big] \int \Big[\prod_{n \in N} (\theta_n)^{t_n}\Big] \pi(\Theta_N)\, d\Theta_N \qquad (4)$$

where $\binom{t_i^U}{\{t_{i,u}\}} = t_i^U! \,/\, \prod_{u \in U} t_{i,u}!$ is the multinomial coefficient, $t_{i,u}$ is the number of occurrences of the feature (term) $u$ in document $i$, $t_i^U = \sum_{u \in U} t_{i,u}$ is the total number of useful features (terms) in document $i$ ($t_j^N$ and $t_{j,n}$ are to be interpreted similarly but for noise features), $t^N$ is the total number of noise features in all patterns and $t^U$ is the total number of useful features in all patterns. To solve (4) we still need a form for the priors $\pi(\cdot)$. The Beta family is conjugate to the Binomial family [2], and we choose the Dirichlet distribution (multiple Beta) as the form for both $\pi(\Theta_k^U)$ and $\pi(\Theta_N)$, and the Beta distribution for $\pi(\theta_S)$. Substituting these into equation (4) and simplifying yields

$$P(D \mid \Omega) = \Big[\frac{\Gamma(\gamma_a + \gamma_b)}{\Gamma(\gamma_a)\,\Gamma(\gamma_b)} \cdot \frac{\Gamma(t^N + \gamma_a)\,\Gamma(t^U + \gamma_b)}{\Gamma(t^U + t^N + \gamma_a + \gamma_b)}\Big] \cdot \Big[\frac{\Gamma(\beta)}{\Gamma(\beta + t^N)} \prod_{n \in N} \frac{\Gamma(\beta_n + t_n)}{\Gamma(\beta_n)}\Big] \cdot \Big[\frac{\Gamma(\sigma)}{\Gamma(\sigma + V)} \prod_{k=1}^{K} \frac{\Gamma(\sigma_k + |D_k|)}{\Gamma(\sigma_k)}\Big] \cdot \Big[\prod_{k=1}^{K} \frac{\Gamma(\alpha)}{\Gamma(\alpha + t^{U(k)})} \prod_{u \in U} \frac{\Gamma(\alpha_u + t_u^{(k)})}{\Gamma(\alpha_u)}\Big] \qquad (5)$$

where $\beta_n$ and $\alpha_u$ are the hyper-parameters of the Dirichlet priors for noise and useful features respectively, $\beta = \sum_{n \in N} \beta_n$, $\alpha = \sum_{u \in U} \alpha_u$, $\sigma = \sum_{k=1}^{K} \sigma_k$, and $\Gamma(\cdot)$ is the gamma function.
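Each bracketed Gamma-function group in (5) is the closed form of one Dirichlet integral from (4). The helper below (its name and the uniform-prior test case are mine) computes such a factor in log space, which is how one would evaluate it in practice to avoid overflow:

```python
import math

def log_dirichlet_marginal(counts, alphas):
    # log of  Gamma(a0)/Gamma(a0 + t0) * prod_u Gamma(alpha_u + t_u)/Gamma(alpha_u),
    # i.e. the closed form of  integral of prod_u theta_u^{t_u} pi(theta) dtheta
    # under a Dirichlet(alpha) prior: one bracketed factor of equation (5).
    a0, t0 = sum(alphas), sum(counts)
    out = math.lgamma(a0) - math.lgamma(a0 + t0)
    for t, a in zip(counts, alphas):
        out += math.lgamma(a + t) - math.lgamma(a)
    return out

# Sanity check against a hand-computed Beta integral: with a uniform prior
# (alpha = (1, 1)) and counts (1, 1), integral of theta*(1-theta) dtheta = 1/6.
assert abs(log_dirichlet_marginal([1, 1], [1, 1]) - math.log(1 / 6)) < 1e-12
```

With all hyper-parameters set to 1, as in the paper's experiments, the priors are uniform and each factor reduces to ratios of factorials.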
Further, γ_a and γ_b are the hyper-parameters of the Beta prior for the split probability, |D_k| is the number of documents in cluster k, and t_U(k) is computed as Σ_{i ∈ D_k} t_i^U. The results reported for our evaluation will be the negative of the log of equation (5), which (following Rissanen [14]) we refer to as Stochastic Complexity (SC). In our experiments all values of the hyper-parameters β_n, α_u, α'_k, γ_a and γ_b are set equal to 1, yielding uniform priors.

3.3 Cross-validated likelihood

To compute the cross-validated likelihood using multinomial models, we first substitute the multinomial functional forms, using the MLE found on the training set. This results in the following equation:

P(CV_T | Ω̂) = [ (θ̂_S)^{t_N} (1 − θ̂_S)^{t_U} ] · Π_{i=1}^{V} p(cv_i^N | Θ̂_N) · Π_{k=1}^{K} Π_{i ∈ D_k} p(cv_i^U | Θ̂_k^U)   (6)

where θ̂_S, Θ̂_N and Θ̂_k^U are the MLEs of the appropriate parameter vectors. For our implementation of MCCV, following the suggestion in [17], we have used a 50% split of the training and test set. For the vCV criterion, although a value of v = 10 was suggested therein, for computational reasons we have used a value of v = 5.

3.4 Feature subset selection algorithm for document clustering

As noted in section 2.1, for a feature set of size M there are a total of 2^M partitions, and for large M it would be computationally intractable to search through all possible partitions to find the optimal subset. In this section we propose a heuristic method to obtain a subset of tokens that are topical (indicative of underlying topics) and can be used as features in the bag-of-words model to cluster documents.

3.4.1 Distributional clustering for feature subset selection

Identifying content-bearing and topical terms is an active research area [9]. We are less concerned with modeling the exact distributions of individual terms than with simply identifying groups of terms that are topical.
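Each bracketed factor in equation (5) is a Dirichlet-multinomial marginal likelihood. A minimal sketch of one such factor, with the uniform (all-ones) hyper-parameters used in the experiments, could look as follows; the function name is ours, and the constant multinomial coefficient of equation (4) is deliberately omitted:

```python
from math import lgamma

def log_marginal_multinomial(counts, alpha=1.0):
    """Log Dirichlet-multinomial marginal likelihood of a count vector,
    one factor of equation (5). alpha=1.0 gives the uniform prior used in
    the paper's experiments. The multinomial coefficient is omitted; it is
    constant for a fixed data set and cancels when comparing models on it."""
    hyper = [alpha] * len(counts)
    n, a_total = sum(counts), sum(hyper)
    # Gamma(a) / Gamma(a + n) * prod_w Gamma(a_w + n_w) / Gamma(a_w), in logs
    out = lgamma(a_total) - lgamma(a_total + n)
    for c, a in zip(counts, hyper):
        out += lgamma(a + c) - lgamma(a)
    return out
```

Negating such log factors and summing them over the terms of equation (5) gives the Stochastic Complexity score used in the evaluation.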
Distributional clustering (DC), apparently first proposed by Pereira et al. [13], has been used for feature selection in supervised text classification [1] and for clustering images in video sequences [9]. We hypothesize that function, content-bearing and topical terms have different distributions over the documents. DC helps reduce the size of the search space for feature selection from 2^M to 2^C, where C is the number of clusters produced by the DC algorithm. Following the suggestions in [9], we compute the following histogram for each token: the first bin is the number of documents with zero occurrences of the token, the second bin is the number of documents with a single occurrence of the token, and the third bin is the number of documents that contain two or more occurrences of the token. The histograms are clustered using relative entropy Δ(· || ·) as

974 S. Vaithyanathan and B. Dom

a distance measure. For two terms with probability distributions p_1(·) and p_2(·), this is given by [4]:

Δ(p_1(t) || p_2(t)) = Σ_t p_1(t) log [ p_1(t) / p_2(t) ]   (7)

We use a k-means-style algorithm in which the histograms are normalized to sum to one and the sum in equation (7) is taken over the three bins corresponding to counts of 0, 1, and ≥ 2. During the assignment-to-clusters step of k-means we compute Δ(p_w || p_{C_k}) (where p_w is the normalized histogram for term w and p_{C_k} is the centroid of cluster k), and the term w is assigned to the cluster for which this is minimum [13, 8].

4.0 Experimental setup

Our evaluation experiments compared the clustering results against human-labeled ground truth. The corpus used was the AP Reuters Newswire articles from the TREC-6 collection. A total of 8235 documents from the routing track, existing in 25 classes, were analyzed in our experiments. To simplify matters we disregarded multiple assignments and retained each document as a member of a single class.
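The assignment step of the distributional clustering in section 3.4.1 (normalized three-bin histograms compared by the relative entropy of equation (7)) can be sketched as follows; the smoothing constant `eps` is an assumption we add to keep zero-count bins finite, not something the paper specifies:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Relative entropy Delta(p || q) of equation (7), taken over the
    three bins for counts of 0, 1, and >= 2. Histograms are normalized
    to sum to one; eps avoids log(0) for empty bins (our assumption)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def assign(histograms, centroids):
    """k-means-style assignment: each term's histogram goes to the
    centroid with minimum relative entropy."""
    return [min(range(len(centroids)), key=lambda k: kl(h, centroids[k]))
            for h in histograms]
```

A term whose histogram is dominated by the zero-occurrence bin would, for example, be assigned to the centroid that also concentrates mass there.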
4.1 Mutual information as an evaluation measure of clustering

We verify our models by comparing our clustering results against pre-classified text. We force all clustering algorithms to produce exactly as many clusters as there are classes in the pre-classified text, and we report the mutual information [4] (MI) between the cluster labels and the pre-classified class labels.

5.0 Results and discussion

After tokenizing the documents and discarding terms that appeared in fewer than 3 documents, we were left with 32450 unique terms. We experimented with several numbers of clusters for DC but report only the best (lowest SC) for lack of space. For each of these we chose the best of 20 runs corresponding to different random starting clusters. Each of these sets includes one cluster that consists of high-frequency words, which upon examination were found to contain primarily function words; this cluster was eliminated from further consideration. The remaining non-function-word clusters were used as feature sets for the clustering algorithm. Only combinations of feature sets that produced good results were used for further document clustering runs. We initialized the EM algorithm using the k-means algorithm; other initialization schemes are discussed in [11]. The feature vectors used in this k-means initialization were generated using the pivoted normal weighting suggested in [16]. All parameter vectors Θ_k^U and Θ_N were estimated using Laplace's Rule of Succession [2]. Table 1 shows the best results of the SC criterion, the vCV and the MCCV using the feature subsets selected by the different combinations of distributional clusters. The feature subsets are coded as FSXP, where X indicates the number of clusters in the distributional clustering and P indicates the cluster number(s) used as U. For SC and MI all results reported are averages over 3 runs of the k-means+EM combination with different initializations of k-means.
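A sketch of the MI evaluation of section 4.1 is given below. The paper does not spell out its normalizer; dividing by log K (with K the common number of clusters and classes) is one common choice and is assumed here:

```python
import numpy as np

def normalized_mi(labels_a, labels_b):
    """Mutual information between two labelings, normalized so that a
    perfect (one-to-one) match scores 1.0. Normalizing by log K is an
    assumption; the paper only states that the maximum is scaled to 1."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    ua, ub = np.unique(a), np.unique(b)
    mi = 0.0
    for x in ua:
        for y in ub:
            pxy = np.mean((a == x) & (b == y))   # joint probability
            if pxy > 0:
                px, py = np.mean(a == x), np.mean(b == y)
                mi += pxy * np.log(pxy / (px * py))
    return mi / np.log(max(len(ua), len(ub)))
```

Note that MI is invariant to relabeling: a clustering that matches the classes up to a permutation of the cluster indices still scores 1.0.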
For clarity, the MI numbers reported are normalized such that the theoretical maximum is 1.0. We also show comparisons against no feature selection (NF) and LSI. For LSI, the principal 165 eigenvectors were retained and k-means clustering was performed in the reduced-dimensional space. When determining the number of clusters, for computational reasons we limited our evaluation to the feature subset that provided the highest MI, i.e., FS41-3.

Table 1: Comparison of results

  Feature Set   Useful Features   SC (x10^7)   vCV (x10^7)   MCCV (x10^7)   MI
  FS41-3        6,157             2.66         0.61          1.32           0.61
  FS52          386               2.8          0.3           0.69           0.51
  NF            32,450            2.96         1.25          2.8            0.58
  LSI           32450/165         NA           NA            NA             0.57

[Figures 1 and 2 omitted: plots of SC against MI, and of the MCCV average test log-likelihood against MI.]

5.3 Discussion

The consistency between the MI and SC (Figure 1) is striking. The monotonic trend is more apparent at higher SC, indicating that bad clusterings are more easily detected by SC, while as the solution improves the differences become more subtle. Note that the best values of SC and MI coincide. Given the assumptions made in deriving equation (5), this consistency is encouraging. The interested reader is referred to [18] for more details. Figures 2 and 3 indicate that there is certainly a reasonable consistency between the cross-validated likelihood and the MI, although not as striking as for SC. Note that the MI for the feature sets picked by MCCV and vCV is significantly lower than that of the best feature set. Figures 4, 5 and 6 show the plots of SC, MCCV and vCV as the number of clusters is increased. Using SC we see that FS41-3 reveals an optimal structure around 40 clusters. As with feature selection, both MCCV and vCV obtain models of lower complexity than SC. Both show an optimum of about 30 clusters.
More experiments are required before we draw final conclusions; however, the full Bayesian approach seems a practical and useful approach for model selection in document clustering. Our choice of likelihood function and priors provides a closed-form solution that is computationally tractable and produces meaningful results.

6.0 Conclusions

In this paper we tackled the problem of model structure determination in clustering. The main contribution of the paper is a Bayesian objective function that treats optimal model selection as choosing both the number of clusters and the feature subset. An important aspect of our work is a formal notion that forms a basis for doing feature selection in unsupervised learning. We then evaluated two approaches for model selection: one using this objective function and the other based on cross-validation. Both approaches performed reasonably well, with the Bayesian scheme outperforming the cross-validation approaches in feature selection. More experiments using different parameter settings for the cross-validation schemes and different priors for the Bayesian scheme should result in better understanding and therefore more powerful applications of these approaches.

[Figures 3-6 omitted: cross-validated likelihood against MI, and SC, MCCV and vCV as functions of the number of clusters.]

References

[1] Baker, D., et al., Distributional Clustering of Words for Text Classification, SIGIR, 1998.
[2] Bernardo, J. M. and Smith, A. F. M., Bayesian Theory, Wiley, 1994.
[3] Church, K. W., et al., Poisson Mixtures, Natural Language Engineering, 1(2), 1995.
[4] Cover, T. M. and Thomas, J. A., Elements of Information Theory, Wiley-Interscience, 1991.
[5] Deerwester, S., et al., Indexing by Latent Semantic Analysis, JASIS, 1990.
[6] Dempster, A., et al., Maximum Likelihood from Incomplete Data via the EM Algorithm, JRSS, 39, 1977.
[7] Hanson, R., et al., Bayesian Classification with Correlation and Inheritance, IJCAI, 1991.
[8] Iyengar, G., Clustering images using relative entropy for efficient retrieval, VLBV, 1998.
[9] Katz, S. M., Distribution of content words and phrases in text and language modeling, NLE, 2, 1996.
[10] Kontkanen, P. T., et al., Comparing Bayesian Model Class Selection Criteria by Discrete Finite Mixtures, ISIS'96 Conference, 1996.
[11] Meila, M. and Heckerman, D., An Experimental Comparison of Several Clustering and Initialization Methods, MSR-TR-98-06.
[12] Nigam, K., et al., Learning to Classify Text from Labeled and Unlabeled Documents, AAAI, 1998.
[13] Pereira, F. C. N., et al., Distributional clustering of English words, ACL, 1993.
[14] Rissanen, J., Stochastic Complexity in Statistical Inquiry, World Scientific, 1989.
[15] Rissanen, J. and Ristad, E., Unsupervised classification with stochastic complexity, The US/Japan Conference on the Frontiers of Statistical Modeling, 1992.
[16] Singhal, A., et al., Pivoted Document Length Normalization, SIGIR, 1996.
[17] Smyth, P., Clustering using Monte Carlo cross-validation, KDD, 1996.
[18] Vaithyanathan, S. and Dom, B., Model Selection in Unsupervised Learning with Applications to Document Clustering, IBM Research Report RJ-10137 (95012), Dec. 14, 1998.
|
1999
|
82
|
1,734
|
An Improved Decomposition Algorithm for Regression Support Vector Machines Pavel Laskov Department of Computer and Information Sciences University of Delaware Newark, DE 19718 laskov@asel.udel.edu Abstract A new decomposition algorithm for training regression Support Vector Machines (SVM) is presented. The algorithm builds on the basic principles of decomposition proposed by Osuna et al., and addresses the issue of optimal working set selection. New criteria for testing optimality of a working set are derived. Based on these criteria, the principle of "maximal inconsistency" is proposed to form (approximately) optimal working sets. Experimental results show superior performance of the new algorithm in comparison with traditional training of regression SVM without decomposition. Similar results have been previously reported on decomposition algorithms for pattern recognition SVM. The new algorithm is also applicable to advanced SVM formulations based on regression, such as density estimation and integral equation SVM. 1 Introduction The increasing interest in applications of Support Vector Machines (SVM) to large-scale problems ushers in new requirements for computational complexity of their training algorithms. Requests have been made recently for algorithms capable of handling problems containing 10^5 to 10^6 examples [1]. Training an SVM constitutes a quadratic programming problem, and a typical SVM package uses off-the-shelf optimization software to obtain a solution to it. The number of variables in the optimization problem is equal to the number of training data points (for the pattern recognition SVM) or twice that number (for the regression SVM). The speed of general-purpose optimization methods is insufficient for problems containing more than a few thousand examples. This has motivated a quest for special-purpose training algorithms to take advantage of the particular structure of SVM training problems.
The main avenue of research in SVM training algorithms is decomposition. The key idea of decomposition, due to Osuna et al. [2], is to freeze all but a small number of optimization variables and to solve a sequence of small fixed-size problems. The set of variables whose values are optimized at a current iteration is called the working set. The complexity of re-optimizing the working set is assumed to be constant-time. In order for a decomposition algorithm to be successful, the working set must be selected in a smart way. The fastest known decomposition algorithm is due to Joachims [3]. It is based on Zoutendijk's method of feasible directions, proposed in the optimization community in the early 1960's. However, Joachims' algorithm is limited to pattern recognition SVM because it makes use of labels being ±1. The current article presents a similar algorithm for the regression SVM. The new algorithm utilizes a slightly different background from optimization theory. The Karush-Kuhn-Tucker Theorem is used to derive conditions for determining whether or not a given working set is optimal. These conditions become the algorithm's termination criteria, as an alternative to Osuna's criteria (also used by Joachims without modification), which used conditions for individual points. The advantage of the new conditions is that knowledge of the hyperplane's constant factor b, which in some cases is difficult to compute, is not required. Further investigation of the new termination conditions allows us to form a strategy for selecting an optimal working set. The new algorithm is applicable to the pattern recognition SVM, and is provably equivalent to Joachims' algorithm. One can also interpret the new algorithm in the sense of the method of feasible directions.
Experimental results presented in the last section demonstrate superior performance of the new method in comparison with traditional training of regression SVM.

2 General Principles of Regression SVM Decomposition

The original decomposition algorithm proposed for the pattern recognition SVM in [2] has been extended to the regression SVM in [4]. For the sake of completeness I will repeat the main steps of this extension, with the aim of providing terse and streamlined notation to lay the ground for working set selection. Given training data of size l, training the regression SVM amounts to solving the following quadratic programming problem in the 2l variables α = (α_1, ..., α_l, α*_1, ..., α*_l):

Maximize  W(α) = c^T α − (1/2) α^T D α
subject to:  e^T α = 0,  α − C·1 ≤ 0,  α ≥ 0   (1)

where

c^T = (y^T − ε·1^T, −y^T − ε·1^T),   e^T = (1^T, −1^T),   D = (  K  −K )
                                                              ( −K   K )

The basic idea of decomposition is to split the variable vector α into the working set α_B of fixed size q and the non-working set α_N containing the rest of the variables. The corresponding parts of the vectors e and c will also bear subscripts N and B. The matrix D is partitioned into D_BB, D_BN = D_NB^T and D_NN. A further requirement is that, for the i-th element of the training data, both α_i and α*_i are either included in or omitted from the working set; this rule facilitates formulation of the sub-problems to be solved at each iteration. The values of the variables in the non-working set are frozen for the iteration, and optimization is only performed with respect to the variables in the working set. Optimization of the working set is also a quadratic program. This can be seen by re-arranging the terms of the objective function and the equality constraint in (1) and dropping the terms independent of α_B from the objective.
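As a concrete sketch, problem (1) can be assembled from the kernel matrix K, the targets y and the tube width ε; the function names are ours, and the block structure of D and c follows the standard regression-SVM dual:

```python
import numpy as np

def regression_svm_dual(K, y, eps):
    """Build the 2l-variable dual of problem (1): maximize
    W(a) = c^T a - 0.5 a^T D a subject to e^T a = 0, 0 <= a <= C,
    with a = (alpha, alpha*). Returns the matrices D, c, e."""
    l = len(y)
    D = np.block([[K, -K], [-K, K]])                 # block kernel matrix
    c = np.concatenate([y - eps, -y - eps])          # linear coefficients
    e = np.concatenate([np.ones(l), -np.ones(l)])    # equality-constraint vector
    return D, c, e

def W(a, D, c):
    """Dual objective value at a."""
    return float(c @ a - 0.5 * a @ D @ a)
```

Note that D is symmetric positive semidefinite whenever K is, so (1) is a concave maximization problem.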
The resulting quadratic program (sub-problem) is formulated as follows:

Maximize  W(α_B) = (c_B^T − α_N^T D_NB) α_B − (1/2) α_B^T D_BB α_B
subject to:  e_B^T α_B + e_N^T α_N = 0,  α_B − C·1 ≤ 0,  α_B ≥ 0   (2)

The basic decomposition algorithm chooses the first working set at random, and proceeds iteratively by selecting sub-optimal working sets and re-optimizing them, by solving quadratic program (2), until all subsets of size q are optimal. The precise formulation of the termination conditions will be developed in the following section.

3 Optimality of a Working Set

In order to maintain strict improvement of the objective function, the working set must be sub-optimal before re-optimization. The classical Karush-Kuhn-Tucker (KKT) conditions are necessary and sufficient for optimality of a quadratic program. I will use these conditions applied to the standard form of a quadratic program, as described in [5], p. 36. The standard form of a quadratic program requires that all constraints are of equality type except for non-negativity constraints. To cast the regression SVM quadratic program (1) into the standard form, the slack variables s = (s_1, ..., s_{2l}) corresponding to the box constraints, and the following matrices, are introduced:

E = ( e  I )      z = ( α )      f = ( 0 )   (3)
    ( 0  I )          ( s )          ( C )

where 1 is a vector of length l and C is a vector of length 2l. The zero element in the vector f reflects the fact that a slack variable for the equality constraint must be zero. In this matrix notation all constraints of problem (1) can be compactly expressed as:

E^T z = f,   z ≥ 0   (4)

In this notation the Karush-Kuhn-Tucker Theorem can be stated as follows:

Theorem 1 (Karush-Kuhn-Tucker Theorem) The primal vector z solves the quadratic problem (1) if and only if it satisfies (4) and there exists a dual vector u^T = (π^T, w^T) = (π^T, (μ, ν^T)) such that:

π = Dα + Ew − c ≥ 0   (5)
ν ≥ 0   (6)
u^T z = 0   (7)

It follows from the Karush-Kuhn-Tucker Theorem that if, for all u satisfying conditions (6)-(7), the system of inequalities (5) is inconsistent, then the solution of problem (1) is not optimal. Since the objective function of sub-problem (2) was obtained by merely re-arranging terms in the objective function of the initial problem (1), the same conditions guarantee that sub-problem (2) is not optimal. Thus, the main strategy for identifying sub-optimal working sets will be to enforce inconsistency of the system (5) while satisfying conditions (6)-(7).

Let us further analyze the inequalities in (5). Each inequality has one of the following forms:

π_i = −φ_i + ε + ν_i + μ ≥ 0   (8)
π*_i = φ_i + ε + ν*_i − μ ≥ 0   (9)

where

φ_i = y_i − Σ_{j=1}^{l} (α_j − α*_j) K_ij

Consider the values α_i can possibly take:

1. α_i = 0. In this case s_i = C and, by complementarity condition (7), ν_i = 0. Then inequality (8) becomes: π_i = −φ_i + ε + μ ≥ 0, hence μ ≥ φ_i − ε.
2. α_i = C. By complementarity condition (7), π_i = 0. Then inequality (8) becomes: −φ_i + ε + μ + ν_i = 0, hence μ ≤ φ_i − ε.
3. 0 < α_i < C. By complementarity condition (7), ν_i = 0 and π_i = 0. Then inequality (8) becomes: −φ_i + ε + μ = 0, hence μ = φ_i − ε.

Similar reasoning for α*_i and inequality (9) yields the following results:

1. α*_i = 0. Then μ ≤ φ_i + ε.
2. α*_i = C. Then μ ≥ φ_i + ε.
3. 0 < α*_i < C. Then μ = φ_i + ε.

As one can see, the only free variable in system (5) is μ. Each inequality restricts μ to a certain interval on the real line. Such intervals will be denoted as μ-sets in the rest of the exposition. Any subset of inequalities in (5) is inconsistent if the intersection of the corresponding μ-sets is empty. This provides a lucid rule for determining optimality of any working set: it is sub-optimal if the intersection of the μ-sets of all its points is empty. A sub-optimal working set will also be denoted as "inconsistent".
The following summarizes the rules for the calculation of μ-sets, taking into account that for the regression SVM α_i α*_i = 0:

M_i = [φ_i − ε, φ_i + ε]   if α_i = 0,       α*_i = 0
      [φ_i − ε, φ_i − ε]   if 0 < α_i < C,   α*_i = 0
      (−∞, φ_i − ε]        if α_i = C,       α*_i = 0
      [φ_i + ε, φ_i + ε]   if α_i = 0,       0 < α*_i < C
      [φ_i + ε, +∞)        if α_i = 0,       α*_i = C   (10)

4 Maximal Inconsistency Algorithm

While inconsistency of the working set at each iteration guarantees convergence of decomposition, the rate of convergence is quite slow if arbitrary inconsistent working sets are chosen. A natural heuristic is to select "maximally inconsistent" working sets, in the hope that such a choice provides the greatest improvement of the objective function. The notion of "maximal inconsistency" is easy to define: let it be the gap between the smallest right boundary and the largest left boundary of the μ-sets of the elements in the training set:

G = L − R,   L = max_{1 ≤ i ≤ l} μ_i^l,   R = min_{1 ≤ i ≤ l} μ_i^r

where μ_i^l and μ_i^r are the left and right boundaries respectively (possibly minus or plus infinity) of the μ-set M_i. It is convenient to require that the largest possible inconsistency gap be maintained between all pairs of points comprising the working set. The obvious implementation of this strategy is to select the q/2 elements with the largest values of μ^l and the q/2 elements with the smallest values of μ^r. The maximal inconsistency strategy is summarized in Algorithm 1.

Algorithm 1 Maximal inconsistency SVM decomposition algorithm.
Let S be the list of all samples.
while (L > R)
• compute M_i according to the rules (10) for all elements in S
• select q/2 elements with the largest values of μ^l ("left pass")
• select q/2 elements with the smallest values of μ^r ("right pass")
• re-optimize the working set

Although the motivation provided for the maximal inconsistency algorithm is purely heuristic, the algorithm can be rigorously derived, in a similar fashion to Joachims' algorithm, from Zoutendijk's feasible direction problem.
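The μ-set rules (10) and the selection step of Algorithm 1 can be sketched in a few lines; intervals are represented as (left, right) pairs with infinities for the one-sided sets, and this is an illustration rather than the paper's implementation:

```python
import math

def mu_set(alpha, alpha_star, phi, eps, C):
    """Interval of admissible mu for one training point, per rules (10).
    Assumes alpha * alpha_star == 0, as holds for the regression SVM."""
    if alpha_star == 0:
        if alpha == 0:
            return (phi - eps, phi + eps)
        if alpha < C:
            return (phi - eps, phi - eps)     # point interval
        return (-math.inf, phi - eps)         # alpha == C
    if alpha_star < C:                        # alpha == 0, 0 < alpha* < C
        return (phi + eps, phi + eps)
    return (phi + eps, math.inf)              # alpha* == C

def select_working_set(mu_sets, q):
    """Selection step of Algorithm 1: gap G = L - R with L the largest
    left boundary and R the smallest right boundary; take q/2 points from
    the "left pass" and q/2 from the "right pass". G <= 0 means all
    mu-sets intersect, i.e. the current solution is optimal."""
    lefts = sorted(range(len(mu_sets)), key=lambda i: mu_sets[i][0], reverse=True)
    rights = sorted(range(len(mu_sets)), key=lambda i: mu_sets[i][1])
    gap = mu_sets[lefts[0]][0] - mu_sets[rights[0]][1]
    chosen = []
    for i in lefts[:q // 2] + rights[:q // 2]:
        if i not in chosen:
            chosen.append(i)
    return chosen, gap
```

The gap also serves as a natural termination measure: iterating until it drops below a small tolerance corresponds to the `while (L > R)` loop of Algorithm 1.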
Details of this derivation cannot be presented here due to space constraints. Because of this relationship I will further refer to both algorithms as "feasible direction" algorithms.

5 Experimental Results

Experimental evaluation of the new algorithm was performed on the modified KDD Cup 1998 data set. The original data set is available at http://www.ics.uci.edu/~kdd/databases/kddcup98/kddcup98.html. The following modifications were made to obtain a pure regression problem:
• All 75 character fields were eliminated.
• The numeric fields CONTROLN, ODATEDW, TCODE and DOB were eliminated.
The remaining 400 features and the labels were scaled between 0 and 1. Initial subsets of the training database of different sizes were selected for evaluation of the scaling properties of the new algorithm. The training times of the algorithms, with and without decomposition, the numbers of support vectors, including bounded support vectors, and the experimental scaling factors are displayed in Table 1.

Table 1: Training time (sec) and number of SVs for the KDD Cup problem

  Examples             no dcmp   dcmp   total SV   BSV
  500                  39        10     274        0
  1000                 226       41     518        3
  2000                 1490      158    970        5
  3000                 5744      397    1429       7
  5000                 27052     1252   2349       15
  scaling factor:      2.84      2.08
  SV-scaling factor:   3.06      2.24

Table 2: Training time (sec) and number of SVs for the KDD Cup problem, reduced feature space

  Examples             no dcmp   dcmp   total SV   BSV
  500                  56        18     170        30
  1000                 346       44     374        62
  2000                 1768      198    510        144
  3000                 4789      366    729        222
  5000                 22115     863    1139       354
  scaling factor:      2.55      1.72
  SV-scaling factor:   3.55      2.35

The experimental scaling factors are obtained by fitting lines to log-log plots of the running times against sample sizes, measured in the number of examples and in the number of unbounded support vectors respectively. Experiments were run on an SGI Octane with a 195 MHz clock and 256 MB RAM.
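The experimental scaling factors of Tables 1 and 2 are slopes of straight-line fits on log-log axes, which can be computed as:

```python
import numpy as np

def scaling_factor(sizes, times):
    """Empirical exponent c in time ~ size**c: the slope of a line fit
    to log(time) versus log(size), as used for the scaling factors
    reported in Tables 1 and 2."""
    slope, _intercept = np.polyfit(np.log(sizes), np.log(times), 1)
    return float(slope)
```

For example, applying this to the "no dcmp" column of Table 1 (sizes 500-5000, times 39-27052) reproduces an exponent close to the reported 2.84.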
An RBF kernel with γ = 10, C = 1, a termination accuracy of 0.001, a working set size of 20, and a cache size of 5000 samples were used. A similar experiment was performed on a reduced feature set consisting of the first 50 features selected from the full-size data set. This experiment illustrates the behavior of the algorithms when a large number of support vectors are bounded. The results are presented in Table 2.

6 Discussion

It comes as no surprise that the decomposition algorithm outperforms the conventional training algorithm by an order of magnitude. Similar results have been well established for pattern recognition SVM. Remarkable is the coincidence of the scaling factors of the maximal inconsistency algorithm and Joachims' algorithm: his scaling factors range from 1.7 to 2.1 [3]. I believe, however, that a more important performance measure is the SV-scaling factor, and the results above suggest that this factor is consistent even for problems with significantly different compositions of support vectors. Further experiments should investigate the properties of this measure. Finally, I would like to mention other methods proposed in order to speed up the training of SVM, although no experimental results have been reported for these methods with regard to training of the regression SVM. Chunking [6], p. 366, iterates through the training data, accumulating support vectors and adding a "chunk" of new data until no more changes to the solution occur. The main problem with this method is that when the percentage of support vectors is high it essentially solves a problem of almost the same size more than once. Sequential Minimal Optimization (SMO), proposed by Platt [7] and easily extendable to the regression SVM [1], employs an idea similar to decomposition but always uses a working set of size 2. For such a working set, a solution can be calculated "by hand" without numerical optimization. A number of heuristics are applied in order to choose a good working set.
It is difficult to draw a comparison between the working set selection mechanisms of SMO and the feasible direction algorithms, but the experimental results of Joachims [3] suggest that SMO is slower. Another advantage of feasible direction algorithms is that the size of the working set is not limited to 2, as in SMO. Practical experience shows that the optimal size of the working set is between 10 and 100. Lastly, traditional optimization methods, such as Newton's or conjugate gradient methods, can be modified to yield a complexity of O(s^3), where s is the number of detected support vectors [8]. This can be a considerable improvement over methods that have a complexity of O(l^3), where l is the total number of training samples. The real challenge lies in attaining sub-O(s^3) complexity. While the experimental results suggest that feasible direction algorithms might attain such complexity, their complexity is not fully understood from the theoretical point of view. More specifically, the convergence rate, and its dependence on the number of support vectors, needs to be analyzed. This will be the main direction of future research in feasible direction SVM training algorithms.

References

[1] Smola, A., Schölkopf, B. (1998) A Tutorial on Support Vector Regression. NeuroCOLT2 Technical Report NC2-TR-1998-030.
[2] Osuna, E., Freund, R., Girosi, F. (1997) An Improved Training Algorithm for Support Vector Machines. Proceedings of IEEE NNSP'97, Amelia Island, FL.
[3] Joachims, T. (1998) Making Large-Scale SVM Learning Practical. In: Advances in Kernel Methods - Support Vector Learning, B. Schölkopf, C. Burges, A. Smola (eds.), MIT Press.
[4] Osuna, E. (1998) Support Vector Machines: Training and Applications. Ph.D. Dissertation, Operations Research Center, MIT.
[5] Boot, J. (1964) Quadratic Programming. Algorithms - Anomalies - Applications. North Holland Publishing Company, Amsterdam.
[6] Vapnik, V. (1982) Estimation of Dependences Based on Empirical Data. Springer-Verlag.
[7] Platt, J. (1998) Fast Training of Support Vector Machines Using Sequential Minimal Optimization. In: Advances in Kernel Methods - Support Vector Learning, B. Schölkopf, C. Burges, A. Smola (eds.), MIT Press.
[8] Kaufman, L. (1998) Solving the Quadratic Programming Problem Arising in Support Vector Classification. In: Advances in Kernel Methods - Support Vector Learning, B. Schölkopf, C. Burges, A. Smola (eds.), MIT Press.
|
1999
|
83
|
1,735
|
An Analog VLSI Model of Periodicity Extraction Andre van Schaik Computer Engineering Laboratory J03, University of Sydney, NSW 2006 Sydney, Australia andre@ee.usyd.edu.au Abstract This paper presents an electronic system that extracts the periodicity of a sound. It uses three analogue VLSI building blocks: a silicon cochlea, two inner-hair-cell circuits and two spiking neuron chips. The silicon cochlea consists of a cascade of filters. Because of the delay between two outputs from the silicon cochlea, spike trains created at these outputs are synchronous only for a narrow range of periodicities. In contrast to traditional bandpass filters, where an increase in selectivity has to be traded off against a decrease in response time, the proposed system responds quickly, independent of selectivity. 1 Introduction The human ear transduces airborne sounds into a neural signal using three stages in the inner ear's cochlea: (i) the mechanical filtering of the Basilar Membrane (BM), (ii) the transduction of membrane vibration into neurotransmitter release by the Inner Hair Cells (IHCs), and (iii) spike generation by the Spiral Ganglion Cells (SGCs), whose axons form the auditory nerve. The properties of the BM are such that close to the entrance of the cochlea (the base) the BM is most sensitive to high frequencies, while at the apex the BM responds best to low frequencies. Along the BM the best frequency decreases in an exponential manner with distance along the membrane. For frequencies below a given point's best frequency the response drops off gradually, but for frequencies above the best frequency the response drops off rapidly (see Fig. 1b for examples of such frequency-gain functions). An Inner Hair Cell senses the local vibration of a section of the Basilar Membrane. The intracellular voltage of an IHC resembles a half-wave-rectified version of the local BM vibration, low-pass filtered at 1 kHz.
The IHC voltage has therefore lost its AC component almost completely for frequencies above about 4 kHz. Well below this frequency, however, the IHC voltage has a clear temporal structure, which will be reflected in the spike trains on the auditory nerve. These spike trains are generated by the spiral ganglion cells, which spike with a probability roughly proportional to the instantaneous inner hair cell voltage. Therefore, for the lower sound frequencies, the spectrum of the input waveform is encoded not only in the form of an average spiking rate of different fibers along the cochlea (place coding), but also in the periodicity of spiking of the individual auditory nerve fibers. It has been shown that this periodicity information is a much more robust cue than the spatial distribution of average firing rates [1]. Some periodicity information can already be detected at intensities 20 dB below the intensity needed to obtain a change in average rate. Periodicity information is retained at intensities in the range of 60-90 dB SPL, for which the average rate of the majority of the auditory nerve fibers is saturated. Moreover, the positions of the fibers responding best to a given frequency move with changing sound intensity, whereas the periodicity information remains constant. Furthermore, the frequency selectivity of a given fiber's spiking rate is drastically reduced at medium and high sound intensities. The robustness of periodicity information makes it likely that the brain actually uses this information. 2 Modelling periodicity extraction Several models have been proposed that extract periodicity information using the phase encoding of fibers connected to the same inner hair cell, or that use the synchronicity of firing of auditory nerve fibers connected to different inner hair cells (see [2] for four examples of these models).
The simplest of the phase encoding schemes correlate the output of the cochlea at a given position with a delayed version of itself. It is easy to see that for pure tones, the comparison sin(2πft) = sin(2πf(t − Δ)) is only true for frequencies that are a multiple of 1/Δ, i.e., for these frequencies the signals are in perfect synchrony and thus perfectly correlated. We can adapt the delay Δ to each cochlear output, so that 1/Δ equals the best frequency of that cochlear output. In this case higher multiples of 1/Δ will be suppressed due to the very steep cut-off of the cochlear filters for frequencies above the best frequency. Each synchronicity detector will then only be sensitive to the best frequency of the filter to which it is connected. If we code the direct signal and the delayed signal with two spike trains, with one spike per period at a fixed phase each, it becomes a very simple operation to detect the synchronicity. A simple digital AND operator will be enough to detect overlap between two spikes. These spikes will overlap perfectly when f = 1/Δ, but some overlap will still be present for frequencies close to 1/Δ, since the spikes have a finite width. The bandwidth of the AND output can thus be controlled by the spike width. It is possible to create a silicon implementation of this scheme using an artificial cochlea, an IHC circuit, and a spiking neuron circuit together with additional circuits to create the delays. A chip along these lines has been developed by John Lazzaro [3] and functioned correctly. A disadvantage of this scheme, however, is the fact that the delay associated with a cochlear output has to be matched to the inverse of the best frequency of that cochlear output. For a cochlea whose best frequency changes exponentially with filter number in the cascade from 4 kHz (the upper range of phase locking on the auditory nerve) to 100 Hz, we will have to create delays that range from 0.25 ms to 10 ms.
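The AND-based coincidence detection just described is easy to simulate. The sketch below uses illustrative parameters and an idealized rectangular spike per period; it shows that the coincidence rate is high when f = 1/Δ, drops to zero once the spikes no longer overlap, and that the spike width sets the bandwidth.

```python
import numpy as np

def and_coincidence_rate(f_signal, delay, spike_width, dur=1.0, fs=200_000):
    """Coincidence (AND) rate between a spike train locked to a tone and a
    delayed copy of it.  One rectangular spike of width `spike_width` (s)
    is emitted per period; the AND output is high where spikes overlap."""
    t = np.arange(0.0, dur, 1.0 / fs)
    phase_a = (f_signal * t) % 1.0
    phase_b = (f_signal * (t - delay)) % 1.0
    w = spike_width * f_signal          # spike width as a fraction of a period
    a = phase_a < w
    b = phase_b < w
    both = a & b
    # count rising edges of the AND output => coincidences per second
    edges = np.count_nonzero(both[1:] & ~both[:-1])
    return edges / dur

# Delay tuned to 1 kHz: a 1 kHz tone coincides on every period,
# while a 1.5 kHz tone produces no overlap at all.
rate_at_best = and_coincidence_rate(1000.0, delay=0.001, spike_width=0.0002)
rate_far_off = and_coincidence_rate(1500.0, delay=0.001, spike_width=0.0002)
```

Widening `spike_width` widens the range of frequencies around 1/Δ that still produce coincidences, exactly the bandwidth control mentioned above.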
In the brain, such a large variation in delays is unlikely to be provided by an axonal delay circuit because it would require an excessively large variation in axon length. A possible solution comes from the observation that the phase of a pure tone of a given frequency on the basilar membrane increases from base to apex, and the phase changes rapidly around the best frequency. The silicon cochlea, which is implemented with a cascade of second-order low-pass filters (Fig. 1a), also functions as a delay line, and each filter adds a delay which corresponds to π/2 at the cut-off frequency of that filter. If we assume that filter i and filter i−4 have the same cut-off frequency (which is not the case), the delay between the output of both filters will correspond to a full period (2π) at the cut-off frequency.

Figure 1: a) Part of a silicon cochlea. Each section contains a second-order low-pass filter and a derivator; b) accumulated gain at output i and i−4; c) phase curves of the individual stages between output i and output i−4; d) proposed implementation of the periodicity extraction model.

In reality, the filters along the cochlea will have different cut-off frequencies, as shown in Fig. 1. Here we show the accumulated gain at the outputs i and i−4 (Fig. 1b), and the delay added by each individual filter between these two outputs (Fig. 1c) as a function of frequency (normalized to the cut-off frequency of filter i). The solid vertical line represents this cut-off frequency, and we can see that only filter i adds a delay of π/2, while the other filters add less.
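The claim that each second-order section contributes exactly π/2 of phase lag at its own cut-off, and that the four sections between outputs i−4 and i therefore accumulate 2π at a frequency slightly above the cut-off of filter i, can be checked from the transfer functions. The sketch below assumes a quality factor of 0.8 per section and the one-octave-per-twenty-stages scaling described for the implementation; both numbers are illustrative, not taken from the chip.

```python
import numpy as np

def stage_phase_lag(f, fc, q=0.8):
    """Phase lag (radians) of one second-order low-pass section; exactly
    pi/2 at its own cut-off frequency, for any quality factor q."""
    x = f / fc
    return np.arctan2(x / q, 1.0 - x * x)

def cascade_phase(f, fc_last, n_stages=4, octave_per=20, q=0.8):
    """Total lag through the n_stages sections between outputs i-4 and i.
    Cut-offs decrease by one octave every `octave_per` stages."""
    fcs = fc_last * 2.0 ** (np.arange(n_stages)[::-1] / octave_per)
    return sum(stage_phase_lag(f, fc, q) for fc in fcs)

def correlator_frequency(fc_last, q=0.8):
    """Frequency at which the accumulated delay reaches a full period
    (2*pi), found by bisection; it lies above the cut-off of filter i."""
    lo, hi = fc_last, 2.0 * fc_last
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if cascade_phase(mid, fc_last, q=q) < 2.0 * np.pi:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

f_star = correlator_frequency(710.0)   # a few percent above the 710 Hz cut-off
```

At the cut-off of filter i the accumulated lag is just short of 2π (the earlier stages have higher cut-offs and contribute less than π/2 each), so the 2π point lands slightly above it, as the text argues.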
However, if we move the vertical line to the right (indicated by the dotted vertical line), the delay added by each filter will increase relatively quickly, and at some frequency slightly higher than the cut-off frequency of filter i, the sum of the delays will become 2π (dashed line). At this frequency neither filter i nor filter i−4 has maximum gain, but if the cut-off frequencies of the two filters are not too different, the gain of both filters at the correlator frequency will still be high enough to yield output signals with reasonable amplitudes. The improved model can be implemented using building blocks as shown in Fig. 1d. Each of these building blocks has previously been presented (refer to [4] for additional details). The silicon cochlea is used to filter and delay the signal, and has been adjusted so that the cut-off frequency decreases by one octave every twenty stages, so that the cut-off frequencies of neighboring filters are almost equal. The IHC circuit half-wave rectifies the signal in the implementation of Figure 1d. The low-pass filtering of the biological Inner Hair Cell can be ignored for frequencies below the approximately 1 kHz cut-off frequency of the cell. Since we limited our measurements to this range, the low-pass filtering has not been modeled by the circuit. Two chips containing electronic leaky integrate-and-fire neurons have been used to create the two spike trains. In the first series of measurements, each chip generates exactly one spike per period of the input signal. A final test will set the 32 neurons on each chip to behave more like biological spiral ganglion cells, and the effect on periodicity extraction will be shown. A digital AND gate is used to compare the output spikes of the two chips, and the spike rate at the output of the AND gate is the measure of activity used.
3 Test results
The first experiment measures the number of spikes per second at the output of the AND gate as a function of input frequency, using different cochlear filter combinations. Twelve filter pairs have been measured, each combining a filter output with the output of a filter four sections earlier in the cascade. The best frequency of the lower-frequency filter of each pair ranged from 200 Hz to 880 Hz. The results are shown in Fig. 2a.

Figure 2: a) measured output rate at different cochlear positions, and b) spike rate normalized to best input frequency, plotted on a log frequency scale.

The maximum spike rate increases approximately linearly with frequency; this is to be expected, since we have approximately one spike per signal period. Furthermore, the best response frequencies of the filters sensitive to higher frequencies are further apart, due to the exponential scaling of the frequencies along the cochlea. Finally, a given time delay corresponds to a larger phase delay for the higher frequencies, so that the absolute bandwidth of the coincidence detectors, i.e., the range of input frequencies to which they respond, is larger. When we normalize the spike rate and plot the curves on a logarithmic frequency scale, as in Fig. 2b, we see that the best frequencies of the correlators follow the exponential scaling of the best frequencies of the cochlear filters, and that the relative bandwidth is fairly constant.
Figure 3: Frequency selectivity for different input intensities. a) pure tones, b) AM signals.

Using the same settings as in the previous experiment, the output spike rate of the system for different input amplitudes has been measured, using the cochlear filter pair with best frequencies of 710 Hz and 810 Hz. In principle, the amplitude of the input signal should have no effect on the output of the system, since the system only uses phase information. However, this is only true if the spikes are always created at the same phase of the output signal of the cochlear filters, for instance at the peak, or the zero crossing. Fig. 3 shows, however, that the resulting filter selectivity shifts to lower frequencies for higher-intensity input signals. This is a result of the way the spikes are created on the neuron chip. The neurons have been adjusted to spike once per period, but the phase at which they spike with respect to the half-wave-rectified waveform depends on the integration time of the neuron, which is the time needed with a given input current to reach the spike threshold voltage from the zero resting voltage. This time depends on the amplitude of the input current, which in turn is proportional to the amplitude of the input signal. Since the amplitude gain of the two cochlear filters used is not the same, the amplitude of the current input to the two neuron chips is different. Therefore, they do not spike at the same phase with respect to their respective input waveforms. This causes the frequency selectivity of the system to shift to lower frequencies with increasing intensity. However, this is an artifact of the spike generation used to simplify the system. On the auditory nerve, spikes arrive with a probability roughly proportional to the half-wave rectified waveform.
The most probable phase for a spike is therefore always at the maximum of the waveform, independent of intensity. In such a system, the frequency selectivity will therefore be independent of amplitude. A second advantage of coding (at least half of) the waveform in spike probability is that it does not assume that the input waveform is sinusoidal. Coding a waveform with just one spike per period can only code the frequency and phase of the waveform, but not its shape. A square wave and a sine wave would both yield the same spike train. We will discuss the "auditory-nerve-like" coding at the end of this section. To test the model with a more complex waveform, a 930 Hz sine wave 100% amplitude-modulated at 200 Hz, generated on a computer, has been used. The carrier frequency was varied by playing the whole waveform a certain percentage slower or faster. Therefore the actual modulation frequency changes by the same factor as the carrier frequency. The results of this test are shown in Fig. 3b for three different input amplitudes. Compared to the measurements in Fig. 3a, we see that the filter is less selective and centered at a higher input frequency. The shift towards a higher frequency can be explained by the fact that the average amplitude of a half-wave rectified amplitude-modulated signal is lower than for a half-wave rectified pure tone with the same maximum amplitude. Furthermore, the amplitude of the positive half-cycle of the output of the IHC circuit changes from cycle to cycle because of the amplitude modulation. We have seen that the amplitude of the input signal changes the frequency for which the two spike trains are synchronous, which means that the frequency which yields the best response changes from cycle to cycle with a periodicity equal to the modulation frequency. This introduces a sort of "roaming" of the frequencies in the input signal, effectively reducing the selectivity of the filters.
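The intensity-dependent phase shift attributed to the single-spike-per-period scheme can be reproduced with a toy integrate-and-fire neuron. The sketch below ignores the leak of the actual neuron chip and uses an illustrative threshold; it only demonstrates that a fixed threshold crossed by integrating a half-wave-rectified input is reached at an earlier phase for a stronger input.

```python
import numpy as np

def first_spike_phase(amplitude, f=930.0, threshold=2e-4, fs=1_000_000):
    """Phase (fraction of one period) of the first spike of a non-leaky
    integrate-and-fire neuron driven by a half-wave-rectified sine of the
    given amplitude.  Threshold and sampling rate are illustrative."""
    t = np.arange(0.0, 1.0 / f, 1.0 / fs)            # one period suffices
    current = amplitude * np.maximum(np.sin(2 * np.pi * f * t), 0.0)
    v = np.cumsum(current) / fs                      # integrate the current
    if v[-1] < threshold:
        return None                                  # too weak to spike
    return float(f * t[np.argmax(v >= threshold)])

phase_soft = first_spike_phase(1.0)   # lower intensity: later spike phase
phase_loud = first_spike_phase(2.0)   # higher intensity: earlier spike phase
```

Doubling the input amplitude halves the integration needed per unit time, so the threshold is crossed earlier in the half-cycle, shifting the spike phase exactly as described for the measured selectivity curves.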
Finally, because of the 100% depth of the amplitude modulation, the amplitude of the input will be too low during some cycles to create a spike, which therefore reduces the total number of spikes which can coincide. Fig. 3b shows that this model detects periodicity and not spectral content. The spectrum of a 930 Hz pure tone 100% amplitude-modulated at 200 Hz contains, apart from a 930 Hz carrier component, components both at 730 Hz and 1130 Hz, with half the amplitude of the carrier component. When the speed of the waveform playback is varied so that the carrier frequency is either 765 Hz or 1185 Hz, one of these spectral side bands will be at 930 Hz, but the system does not respond at these carrier frequencies. This is explained by the fact that the periodicity of the zero crossings, and thus of the positive half-cycles of the IHC output, is always equal to the carrier frequency. Traditional band-pass filters with a very high quality factor (Q) can also yield a narrow pass-band, but their step response takes about 1.5Q cycles at the center frequency to reach steady state. The periodicity selectivity of the synchronicity detector shown in Fig. 3a corresponds to a quality factor of 14; a traditional band-pass filter would take about 21 cycles of the 930 Hz input signal to reach 95% of its final output value. Fig. 4 shows the temporal aspect of the synchronicity detection in our system. The top trace in this figure shows the output of the cochlear filter with the highest best frequency (index i−4 in Fig. 1) and the spikes generated based on this output. The second trace shows the same for the output of the cochlear filter with the lower best frequency (index i in Fig. 1). The third trace shows the output of the AND gate with the above inputs, which are slightly above its best periodicity. Coincidences are detected at the onset of the tone, even when it is not of the correct periodicity, but only for the first one or two cycles.
The bottom trace shows the output of the AND gate for an input at best frequency. The system thus responds to the presence of a pure tone of the correct periodicity after only a few cycles, independent of the filter's selectivity.

Figure 4: Oscilloscope traces of the temporal aspect of synchronicity detection. The vertical scale is 20 mV per square for the cochlear outputs; the spikes are 5 V in amplitude.

To show this more dramatically, we have reduced the spike width to 10 μs, to obtain a high periodicity selectivity as shown in Fig. 5a. The bandwidth of this filter is only 20 Hz at 930 Hz, equivalent to a quality factor of 46.5. A traditional filter with such a quality factor would only settle 70 cycles after the onset of the signal, whereas the periodicity detector still settles after the first few cycles, as shown in Fig. 5b. We can compare this result with the response of a classic RLC band-pass filter with a 930 Hz center frequency and a quality factor of 46.5, as shown in Fig. 6. After 18 cycles of the input signal, the output of the band-pass filter has only reached 65% of its final value. Thresholding the RLC output could signal the presence of a periodicity faster, but it would then still respond very slowly to the offset of the tone, as the RLC filter will continue ringing after the offset.

Figure 5: a) Frequency selectivity with a 10 μs spike width. b) Cochlear output (top, 40 mV scale) and coincidences (bottom) for a signal at best frequency.

Figure 6: Simulated response of the RLC band-pass filter. a) frequency selectivity, b) transient response (scale units are 40 mV).
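The selectivity/response-time trade-off of the conventional filter can be quantified with the standard exponential-envelope approximation for a resonator: driven at its centre frequency from rest, a band-pass filter of quality factor Q reaches a fraction 1 − exp(−πn/Q) of its final amplitude after n cycles. This is a textbook approximation, not the RLC circuit simulation of Fig. 6 (which reports 65% after 18 cycles and roughly 1.5Q cycles to settle); it gives numbers in the same range and shows the same linear growth of settling time with Q.

```python
import numpy as np

def bandpass_envelope_after(n_cycles, q):
    """Fraction of steady-state amplitude after n_cycles of a tone at the
    centre frequency (exponential-envelope approximation)."""
    return 1.0 - np.exp(-np.pi * n_cycles / q)

def cycles_to_settle(q, level=0.95):
    """Cycles needed to reach `level` of the final amplitude; grows
    linearly with Q, unlike the few-cycle coincidence detector."""
    return q * np.log(1.0 / (1.0 - level)) / np.pi

frac_18 = bandpass_envelope_after(18, 46.5)   # roughly 0.70 for Q = 46.5
```

Doubling Q doubles the settling time of the conventional filter, while the coincidence detector's response time stays at a few cycles regardless of its selectivity.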
In the previous experiments we simplified the model to use one spike per period in order to understand the principle behind the periodicity detection. However, we have seen that this implementation leads to a shift in best periodicity with changing amplitude, because the phase at which the 'single neuron' spikes changes with intensity. Now, we will change the settings to be more realistic, so that each of the 32 neurons cannot spike at every period, and we will reduce the output gain of the IHC circuit so that the neurons receive less signal current, and thus have a lower input SNR. The resulting spike distribution is a better simulation of the spike distribution on the auditory nerve. This is shown in Fig. 7 for a group of 32 neurons stimulated by an IHC circuit connected to a single cochlear output. The bottom trace shows the sum of spikes over the 32 neurons on an arbitrary scale. When we use this spike distribution and repeat the pure-tone detection experiment of Fig. 3a at different input intensities, we obtain the curves of Fig. 7b. Indeed, in this case, the best periodicity does not change; the curves are remarkably independent of input intensity. However, the selectivity curve is about twice as wide at the base as the ones in Fig. 3a, but the slopes of the selectivity curve rise and fall much more gradually. This means that we can easily increase the selectivity of these curves by setting a higher threshold, e.g., discarding spike rates below 70 spikes per second. Because of the steep slopes in Fig. 3a, such an operation would hardly increase the selectivity for that case.

Figure 7: a) Cochlear output (top) and population average of the auditory nerve spikes (bottom); b) periodicity selectivity with an auditory-nerve-like spike distribution.
4 Conclusions
In this paper we have presented a neural system for periodicity detection implemented with three analogue VLSI building blocks. The system uses the delay between the outputs at two points along the cochlea, and the synchronicity of the spike trains created from these cochlear outputs, to detect the periodicity of the input signal. An especially useful property of the cochlea is that the delay between two points a fixed distance apart corresponds to a full period at a frequency that scales in the same way as the best frequency along the cochlea, i.e., decreases exponentially. If we always create spikes at the same phase of the output signal at each filter, or simply have the highest spiking probability at the maximum instantaneous amplitude of the output signal, then both outputs will only have synchronous spikes for a certain periodicity, and we can easily detect this synchronicity with coincidence detectors. This system offers a way to obtain very selective filters using spikes. Even though they react to a very narrow range of periodicities, these filters are able to react after only a few periods. Furthermore, the range of periodicities they respond to can be made independent of input intensity, which is not the case with the cochlear output itself. This clearly demonstrates the advantages of using spikes in the detection of periodicity.

Acknowledgements
The author thanks Eric Fragniere, Eric Vittoz and the Swiss NSF for their support.

References
[1] Evans, "Functional anatomy of the auditory system," in Barlow and Mollon (editors), The Senses, Cambridge University Press, pp. 251-306, 1982.
[2] Seneff, Shamma, Deng, and Ghitza, Journal of Phonetics, Vol. 16, pp. 55-123, 1988.
[3] Lazzaro, "A silicon model of an auditory neural representation of spectral shape," IEEE Journal of Solid-State Circuits, Vol. 26, No. 5, pp. 772-777, 1991.
[4] van Schaik, "An Analogue VLSI Model of Periodicity Extraction in the Human Auditory System," to appear in Analog Integrated Circuits and Signal Processing, Kluwer, 2000.
From Coexpression to Coregulation: An Approach to Inferring Transcriptional Regulation among Gene Classes from Large-Scale Expression Data Eric Mjolsness Jet Propulsion Laboratory California Institute of Technology Pasadena CA 91109-8099 mjolsness@jpl.nasa.gov Rebecca Castaño Jet Propulsion Laboratory California Institute of Technology Pasadena CA 91109-8099 becky@aigjpl.nasa.gov Tobias Mann Jet Propulsion Laboratory California Institute of Technology Pasadena CA 91109-8099 mann@aigjpl.nasa.gov Barbara Wold Division of Biology California Institute of Technology Pasadena CA 91125 woldb@its.caltech.edu

Abstract
We provide preliminary evidence that existing algorithms for inferring small-scale gene regulation networks from gene expression data can be adapted to large-scale gene expression data coming from hybridization microarrays. The essential steps are (1) clustering many genes by their expression time-course data into a minimal set of clusters of co-expressed genes, (2) theoretically modeling the various conditions under which the time-courses are measured using a continuous-time analog recurrent neural network for the cluster mean time-courses, (3) fitting such a regulatory model to the cluster mean time courses by simulated annealing with weight decay, and (4) analysing several such fits for commonalities in the circuit parameter sets, including the connection matrices. This procedure can be used to assess the adequacy of existing and future gene expression time-course data sets for determining transcriptional regulatory relationships such as coregulation.

1 Introduction
In a cell, genes can be turned "on" or "off" to varying degrees by the protein products of other genes. When a gene is "on" it is transcribed to produce messenger RNA (mRNA), which can subsequently be translated into protein molecules. Some of these proteins are transcription factors which bind to DNA at specific sites and thereby affect which genes are transcribed and how often.
This transcriptional regulation feedback circuitry provides a fundamental mechanism for information processing in the cell. It governs differentiation into diverse cell types and many other basic biological processes. Recently, several new technologies have been developed for measuring the "expression" of genes as mRNA or protein product. Improvements in conventional fluorescently labeled antibodies against proteins have been coupled with confocal microscopy and image processing to partially automate the simultaneous measurement of small numbers of proteins in large numbers of individual nuclei in the fruit fly Drosophila melanogaster [1]. In a complementary way, the mRNA levels of thousands of genes, each averaged over many cells, have been measured by hybridization arrays for various species including the budding yeast Saccharomyces cerevisiae [2]. The high-spatial-resolution protein antibody data has been quantitatively modeled by "gene regulation network" circuit models [3], which use continuous-time, analog, recurrent neural networks (Hopfield networks without an objective function) to model transcriptional regulation [4][5]. This approach requires some machine learning technique to infer the circuit parameters from the data, and a particular variant of simulated annealing has proven effective [6][7]. Methods in current biological use for analysing mRNA hybridization data do not infer regulatory relationships, but rather simply cluster together genes with similar patterns of expression across time and experimental conditions [8][9]. In this paper, we explore the extension of the gene circuit method to mRNA hybridization data, which has much lower spatial resolution but can currently assay a thousand times more genes than immunofluorescent image analysis.
The essential problem with using the gene circuit method, as employed for immunofluorescence data, on hybridization data is that the number of connection strength parameters grows between linearly and quadratically in the number of genes (depending on sparsity assumptions). This requires more data on each gene, and even if that much data is available, simulated annealing for circuit inference does not seem to scale well with the number of unknown parameters. Some form of dimensionality reduction is called for. Fortunately, dimensionality reduction is available in the present practice of clustering the large-scale time course expression data by genes, into gene clusters. In this way one can derive a small number of cluster-mean time courses for "aggregated genes", and then fit a gene regulation circuit to these cluster mean time courses. We will discuss details of how this analysis can be performed and then interpreted. A similar approach using somewhat different algorithms for clustering and circuit inference has been taken by Hertz [10]. In the following, we will first summarize the data models and algorithms used, and then report on preliminary experiments in applying those algorithms to gene expression data for 2467 yeast genes [9][11]. Finally we will discuss prospects for and limitations of the approach.

2 Data Models and Algorithms
The data model is as follows. We imagine that there is a small, hidden regulatory network of "aggregate genes" which regulate one another by the analog neural network dynamics [3]

    τ_i dv_i/dt = g(Σ_j T_ij v_j + h_i) − λ_i v_i

in which v_i is the continuous-valued state variable for gene product i, T_ij is the matrix of positive, zero, or negative connections by which one transcription factor can enhance or repress another, and g() is a nonlinear monotonic sigmoidal activation function.
When a particular matrix entry T_ij is nonzero, there is a regulatory "connection" from gene product j to gene i. The regulation is enhancing if T_ij is positive and repressing if it is negative. If T_ij is zero there is no connection. This network is run forwards from some initial condition and time-sampled to generate a wild-type time course for the aggregate genes. In addition, various other time courses can be generated under alternative experimental conditions by manipulating the parameters. For example, an entire aggregate gene (corresponding to a cluster of real genes) could be removed from the circuit or otherwise modified to represent mutants. External input conditions could be modeled as modifications to h. Thus we get one or several time courses (trajectories) for the aggregate genes. From such aggregate time courses, actual gene data is generated by addition of Gaussian-distributed noise to the logarithms of the concentration variables. Each time point in each cluster has its own scalar standard deviation parameter (and a mean arising from the circuit dynamics). Optionally, each gene's expression data may also be multiplied by a time-independent proportionality constant. Regulatory aggregate genes (large circles) and cluster member genes (small circles). Given this data generation model and suitable gene expression data, the problem is to infer gene cluster memberships and the circuit parameters for the aggregate genes' regulatory relationships. Then, we would like to use the inferred cluster memberships and regulatory circuitry to make testable biological predictions. This data model departs from biological reality in many ways that could prove to be important, both for inference and for prediction. Except for the Gaussian noise model, each gene in a cluster is modeled as fully coregulated with every other one: they are influenced in the same ways by the same regulatory connection strengths.
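The aggregate-gene dynamics above can be sketched with a simple Euler integration. The paper only specifies that g is a monotonic sigmoid, so the logistic function is assumed here; step size and run length are illustrative.

```python
import numpy as np

def simulate_circuit(T, h, tau, lam, v0, dt=0.01, n_steps=2000):
    """Euler integration of tau_i dv_i/dt = g(sum_j T_ij v_j + h_i) - lam_i v_i,
    with g taken as the logistic sigmoid.  Returns the sampled trajectory."""
    g = lambda u: 1.0 / (1.0 + np.exp(-u))
    v = np.asarray(v0, dtype=float).copy()
    traj = [v.copy()]
    for _ in range(n_steps):
        v = v + dt * (g(T @ v + h) - lam * v) / tau
        traj.append(v.copy())
    return np.array(traj)
```

With all connections and inputs zero, every aggregate gene relaxes to g(0)/λ_i, which makes a quick sanity check on the integration.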
Also, the nonlinear circuit model must not only reflect transcriptional regulation, but all other regulatory circuitry affecting measured gene expression, such as kinase-phosphatase networks. Under this data model, one could formulate a joint Bayesian inference problem for the clustering and circuit inference aspects of fitting the data. But given the highly provisional nature of the model, we simply apply in sequence an existing mixture-of-Gaussians clustering algorithm to preprocess the data and reduce its dimensionality, and then an existing gene circuit inference algorithm. Presumably a joint optimization algorithm could be obtained by iterating these steps.

2.1 Clustering
A widely used clustering algorithm for mixture model estimation is Expectation-Maximization (EM) [12]. We use EM with a diagonal covariance in the Gaussian, so that for each feature vector component a (a combination of experimental condition and time point in a time course) and cluster α there is a standard deviation parameter σ_αa. In preprocessing, each concentration data point is divided by its value at time zero and then a logarithm taken. The log ratios are clustered using EM. Optionally, each gene's entire feature vector may be normalized to unit length and the cluster centers likewise normalized during the iterative EM algorithm. In order to choose the number of clusters, k, we use the cross-validation algorithm described by Smyth [13]. This involves computing the likelihood of each optimized fit on a test set and averaging over runs and over divisions of the data into training and test sets. Then, we can examine the likelihood as a function of k in order to choose k. Normally one would pick k so as to maximize cross-validated likelihood. However, in the present application we also want to reward small values of k, which lead to smaller circuits for the circuit inference phase of the algorithm.
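A compact version of the diagonal-covariance EM and the held-out log-likelihood used to cross-validate k can be written directly. This is a sketch: the deterministic farthest-point initialization of the means is an assumption (the paper does not describe its initialization), and the optional unit-length normalization is omitted.

```python
import numpy as np

def em_diag(X, k, n_iter=100):
    """EM for a mixture of k diagonal-covariance Gaussians.
    Deterministic farthest-point seeding of the means (an assumption)."""
    n, d = X.shape
    mu = [X[0]]
    for _ in range(1, k):                     # farthest-point seeding
        d2 = np.min(((X[:, None, :] - np.array(mu)) ** 2).sum(-1), axis=1)
        mu.append(X[np.argmax(d2)])
    mu = np.array(mu, dtype=float)
    var = np.tile(X.var(axis=0), (k, 1)) + 1e-6
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E step: responsibilities, computed in the log domain for stability
        lp = (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                      + np.log(2 * np.pi * var)).sum(-1)) + np.log(w)
        lp -= lp.max(axis=1, keepdims=True)
        r = np.exp(lp)
        r /= r.sum(axis=1, keepdims=True)
        # M step: mixture weights, means and per-dimension variances
        nk = r.sum(axis=0) + 1e-9
        w = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ X ** 2) / nk[:, None] - mu ** 2 + 1e-6
    return w, mu, var

def mean_log_likelihood(X, w, mu, var):
    """Per-point log-likelihood on held-out data: the cross-validation score."""
    lp = (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                  + np.log(2 * np.pi * var)).sum(-1)) + np.log(w)
    m = lp.max(axis=1)
    return float((m + np.log(np.exp(lp - m[:, None]).sum(axis=1))).mean())
```

For well-separated synthetic clusters the held-out score clearly prefers the true k over a single Gaussian, mirroring the cross-validation curves discussed in Section 3.2.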
The choice of k will be discussed in the next section.

2.2 Circuit Inference
We use the Lam-Delosme variant of simulated annealing (SA) to derive connection strengths T, time constants τ, and decay rates λ, as in previous work using this gene circuit method [4][5]. We set h to zero. The score function which SA optimizes is

    S(T, τ, λ) = A Σ_{i,t} (v_i(t; T, τ, λ) − v̂_i(t))² + W Σ_{ij} T_ij² + exp[B(Σ_{ij} T_ij² + Σ_i λ_i² + Σ_i τ_i²)] − 1

The first term represents the fit to the data v̂_i. The second term is a standard weight decay term. The third term forces solutions to stay within a bounded region in weight space. We vary the weight decay coefficient W in order to encourage relatively sparse connection matrix solutions.

3 Results
3.1 Data
We used the Saccharomyces cerevisiae data set of [9]. It includes three longer time courses representing different ways to synchronize the normal cell cycle [11], and five shorter time courses representing altered conditions. We used all eight time courses for clustering, but just 8 time points of one of the longer time courses (alpha-factor synchronized cell cycle) for the circuit inference. It is likely that multiple long time courses under altered conditions will be required before strong biological predictions can be made from inferred regulatory circuit models.

3.2 Clustering
We found that the most likely number of classes as determined by cross-validation was about 27, but that there is a broad plateau of high-likelihood cluster numbers from 15 to 35 (Figure 1). This is similar to our results with another gene expression data set, for the nematode worm Caenorhabditis elegans, supplied by Stuart Kim; these more extensive clustering experiments are summarized in Figure 2. Clustering experiments with synthetic data were used to understand these results. These experiments show that the cross-validated log-likelihood curve can indicate the number of clusters present in the data, justifying the use of the curve for that purpose. In more detail, synthetic data generated from 14 20-dimensional spherical Gaussian clusters were clustered using the EM/CV algorithm. The likelihoods showed a sharp peak at k=14, unlike Figures 1 or 2. In another experiment, 14 20-dimensional spherical Gaussian superclusters were used to generate second-level clusters (3 subclusters per supercluster), which in turn generated synthetic data points. This two-level hierarchical model was then clustered with the EM/CV method. The likelihood curves (not shown) were quite similar to Figures 1 and 2, with a higher-likelihood plateau from roughly 14 to 40.

Figure 1. Cross-validated log-likelihood scores, displayed and averaged over 5 runs, for EM clustering of S. cerevisiae gene expression data [9]. Horizontal axis: k, the "requested" or maximal number of cluster centers in the fit. Some cluster centers go unmatched to data. Vertical axis: log-likelihood score for the fit, scatterplotted and averaged. Likelihoods have not been integrated over any range of parameters for hypothesis testing. k ranges from 2 to 40 in increments of 1. Solid line shows the average likelihood value for each k.

Figure 2. Cross-validated log-likelihood scores, averaged over 13 runs, for EM clustering of C. elegans gene expression data from S. Kim's lab. Horizontal axis: k, the "requested" or maximal number of cluster centers in the fit. Some cluster centers go unmatched to data. Vertical axis: log-likelihood score for the fit, as an average over 13 runs plus or minus one standard deviation. (Left) Fine-scale plot, k=2 to 60 in increments of 2. (Right) Coarse-scale plot, k=2 to 202 in increments of 10. Both plots show an extended plateau of relatively likely fits between roughly k=14 and k=40.
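Before turning to the circuit-inference results, the score function of Section 2.2 can be transcribed directly. This is a sketch: the model trajectory v_model would come from integrating the circuit dynamics, but here it is simply an array argument, and A and B take the values quoted with Table 1.

```python
import numpy as np

def circuit_score(v_model, v_data, T, lam, tau, A=1.0, W=1.0, B=0.01):
    """SA score: data misfit + weight decay + soft barrier on parameters.
    v_model and v_data are (time, cluster) arrays of aggregate expression."""
    fit = A * np.sum((v_model - v_data) ** 2)
    decay = W * np.sum(T ** 2)
    barrier = np.exp(B * (np.sum(T ** 2) + np.sum(lam ** 2)
                          + np.sum(tau ** 2))) - 1.0
    return float(fit + decay + barrier)
```

All three terms are non-negative, so a perfect fit with all parameters at zero scores exactly zero, and any misfit or nonzero parameter can only raise the score.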
From Figures 1 and 2 and the synthetic data experiments mentioned above, we can guess at appropriate values for k which take into account both the measured likelihood of clustering and the requirement for few parameters in circuit-fitting. For example, choosing k=15 clusters would put us at the beginning of the plateau, losing very little cluster likelihood in return for reducing the aggregate gene circuit size from 27 to 15 players. The interpretation would be that there are about 15 superclusters in hierarchically clustered data, to which we will fit a 15-player Inferring Transcriptional Regulation among Gene Classes 933 regulatory circuit. Much more aggressive would be to pick k=7 or 8 clusters, for a relatively significant drop in log-likelihood in return for a further substantial decrease in circuit size. An acceptable range of cluster numbers (and circuit sizes) would seem to be k=8 to 15.

3.3 Gene Circuit Inference

It proved possible to fit the k=15 time course using weight decay W=1 but without using hidden units. W=0 and W=3 gave less satisfactory results. Four of the 15 clusters are shown in Figure 3 for one good run (W=1). Scores for our first few (unselected) runs at the current parameter settings are shown in Table 1. Each run took between 24 and 48 hours on one processor of a Sun UltraSPARC 60 computer. Even with weight decay, it is possible that successful fits are really overfits with this particular data since there are about twice as many parameters as data points.

Weight Decay W | <Score> | <Simulated Annealing Moves>/10^6 | Number of runs
0 | 1.536 +/- 0.134 | 2.803 +/- 0.437 | 3
1 | 0.787 +/- 0.394 | 2.782 +/- 0.200 | 10
3 | 1.438 +/- 0.037 | 2.880 +/- 0.090 | 4

Table 1. Score function parameters were A=1.0, B=0.01. Annealing run statistics are reported when the temperature dropped below 0.0001. All the best scores and visually acceptable fits occurred in W=1 runs.
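The SA score function of Section 2.2 (data fit plus weight decay plus a bounded-region penalty) can be written down directly. The sketch below is illustrative only: the function and argument names are ours, and the default coefficients follow Table 1 (A=1.0, B=0.01).

```python
import math

def circuit_score(pred, data, T, lam, tau, A=1.0, W=1.0, B=0.01):
    """Score S(T, tau, lambda) for a candidate gene-circuit parameter set.

    pred/data: per-gene trajectories v_i(t) (model output vs. observed);
    T: connection matrix; lam/tau: decay rates and time constants.
    """
    # Term 1: squared-error fit of the model trajectories to the data.
    fit = sum((p - d) ** 2
              for pi, di in zip(pred, data) for p, d in zip(pi, di))
    # Term 2: standard weight decay on the connection strengths T_ij.
    decay = sum(t ** 2 for row in T for t in row)
    # Term 3: keeps solutions inside a bounded region of parameter space.
    bound = math.exp(B * (decay + sum(l * l for l in lam)
                          + sum(x * x for x in tau))) - 1.0
    return A * fit + W * decay + bound
```

In the paper this score is minimized over (T, τ, λ) by Lam-Delosme simulated annealing; in this sketch any annealer or other global optimizer could stand in for it.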
The average values of the data fit, weight decay, and penalty terms in the score function for W=1 were {0.378, 0.332, 0.0667} after slightly more annealing. There were a few significant similarities between the connection matrices computed in the two lowest-scoring runs. The most salient feature in the lowest-scoring network was a set of direct feedback loops among its strongest connections: cluster 8 both excited and was inhibited by cluster 10, and cluster 10 excited and was inhibited by cluster 15. This feature was preserved in the second-best run. A systematic search for "consensus circuitry" shows convergence towards a unique connection matrix for the 8-point time series data used here, but more complete 16-time-point data gives multiple "clusters" of connection matrices. From parameter counting one might expect that making robust and unique regulatory predictions will require the use of more trajectory data taken under substantially different conditions. Such data is expected to be forthcoming.

4 Discussion

We have illustrated a procedure for deriving regulatory models from large-scale gene expression data. As the data becomes more comprehensive in the number and nature of conditions under which comparable time courses are measured, this procedure can be used to determine when biological hypotheses about gene regulation can be robustly derived from the data.

Acknowledgments

This work was supported in part by the Whittier Foundation, the Office of Naval Research under contract N00014-97-1-0422, and the NASA Advanced Concepts Program. Stuart Kim (Stanford University) provided the C. elegans gene expression array data. The GRN simulation and inference code is due in part to Charles Garrett and George Marnellos. The EM clustering code is due in part to Roberto Manduchi.
Figure 3. Four clusters (numbers 9-12) of a 15-cluster mixture of Gaussians model of 2467 genes each assayed over an eight-point time course; cluster means (shown as x) are fit to a gene regulation network model (shown as o). Original data marked with an x; GRN fit marked with an o.

[1] D. Kosman, J. Reinitz, and D. H. Sharp, "Automated Assay of Gene Expression at Cellular Resolution," Pacific Symposium on Biocomputing '98, eds. R. Altman, A. K. Dunker, L. Hunter, and T. E. Klein, World Scientific, 1998.
[2] J. L. DeRisi, V. R. Iyer, and P. O. Brown, "Exploring the Metabolic and Genetic Control of Gene Expression on a Genomic Scale," Science 278, 680-686.
[3] E. Mjolsness, D. H. Sharp, and J. Reinitz, "A Connectionist Model of Development," Journal of Theoretical Biology 152:429-453, 1991.
[4] J. Reinitz, E. Mjolsness, and D. H. Sharp, "Model for Cooperative Control of Positional Information in Drosophila by Bicoid and Maternal Hunchback," J. Experimental Zoology 271:47-56, 1995. Los Alamos National Laboratory Technical Report LAUR-92-2942, 1992.
[5] J. Reinitz and D. H. Sharp, "Mechanism of eve Stripe Formation," Mechanisms of Development 49:133-158, 1995.
[6][7] J. Lam and J. M. Delosme, "An Efficient Simulated Annealing Schedule: Derivation" and "... Implementation and Evaluation," Technical Reports 8816 and 8817, Yale University Electrical Engineering Department, New Haven, CT, 1988.
[8] X. Wen, S. Fuhrman, G. S. Michaels, D. B. Carr, S. Smith, J. L. Barker, and R. Somogyi, "Large-Scale Temporal Gene Expression Mapping of Central Nervous System Development," Proc. Natl. Acad. Sci. USA 95:334-339, January 1998.
[9] M. B. Eisen, P. T. Spellman, P. O. Brown, and D. Botstein, "Cluster Analysis and Display of Genome-Wide Expression Patterns," Proc. Natl. Acad. Sci. USA 95:14863-14868, December 1998.
[10] J.
Hertz, lecture at Krogerup, Denmark, computational biology summer school, July 1998.
[11] Spellman, P. T., Sherlock, G., Zhang, M. Q., et al., "Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization," Mol. Biol. Cell 9(12):3273-3297, Dec 1998.
[12] Dempster, A. P., Laird, N. M., and Rubin, D. B., "Maximum likelihood from incomplete data via the EM algorithm," J. Royal Statistical Society, Series B, 39:1-38, 1977.
[13] P. Smyth, "Clustering using Monte Carlo Cross-Validation," Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining, AAAI Press, 1996.
|
1999
|
85
|
1,737
|
Data Visualization and Feature Selection: New Algorithms for Nongaussian Data

Howard Hua Yang and John Moody
Oregon Graduate Institute of Science and Technology
20000 NW Walker Rd., Beaverton, OR 97006, USA
hyang@ece.ogi.edu, moody@cse.ogi.edu, FAX: 503 748 1406

Abstract

Data visualization and feature selection methods are proposed based on the joint mutual information and ICA. The visualization methods can find many good 2-D projections for high dimensional data interpretation, which cannot be easily found by the other existing methods. The new variable selection method is found to be better in eliminating redundancy in the inputs than other methods based on simple mutual information. The efficacy of the methods is illustrated on a radar signal analysis problem to find 2-D viewing coordinates for data visualization and to select inputs for a neural network classifier.

Keywords: feature selection, joint mutual information, ICA, visualization, classification.

1 INTRODUCTION

Visualization of input data and feature selection are intimately related. A good feature selection algorithm can identify meaningful coordinate projections for low dimensional data visualization. Conversely, a good visualization technique can suggest meaningful features to include in a model. Input variable selection is the most important step in the model selection process. Given a target variable, a set of input variables can be selected as explanatory variables by some prior knowledge. However, many irrelevant input variables cannot be ruled out by prior knowledge. Too many input variables irrelevant to the target variable will not only severely complicate the model selection/estimation process but also damage the performance of the final model. Selecting input variables after model specification is a model-dependent approach [6]. However, these methods can be very slow if the model space is large.
To reduce the computational burden in the estimation and selection processes, we need model-independent approaches to select input variables before model specification. One such approach is the δ-Test [7]. Other approaches are based on the mutual information (MI) [2, 3, 4], which is very effective in evaluating the relevance of each input variable, but it fails to eliminate redundant variables.

688 H. H. Yang and J. Moody

In this paper, we focus on the model-independent approach for input variable selection based on joint mutual information (JMI). The increment from MI to JMI is the conditional MI. Although the conditional MI was used in [4] to show the monotonic property of the MI, it was not used for input selection. Data visualization is very important for humans to understand the structural relations among variables in a system. It is also a critical step to eliminate some unrealistic models. We give two methods for data visualization. One is based on the JMI and another is based on Independent Component Analysis (ICA). Both methods perform better than some existing methods such as the methods based on PCA and canonical correlation analysis (CCA) for nongaussian data.

2 Joint mutual information for input/feature selection

Let Y be a target variable and the X_i's be inputs. The relevance of a single input is measured by the MI

I(X_i; Y) = K(p(x_i, y) || p(x_i)p(y))

where K(p||q) is the Kullback-Leibler divergence of two probability functions p and q, defined by K(p(x) || q(x)) = Σ_x p(x) log(p(x)/q(x)). The relevance of a set of inputs is defined by the joint mutual information

I(X_i, ..., X_k; Y) = K(p(x_i, ..., x_k, y) || p(x_i, ..., x_k)p(y)).

Given two selected inputs X_j and X_k, the conditional MI is defined by

I(X_i; Y | X_j, X_k) = Σ_{x_j, x_k} p(x_j, x_k) K(p(x_i, y | x_j, x_k) || p(x_i | x_j, x_k) p(y | x_j, x_k)).

Similarly define I(X_i; Y | X_j, ..., X_k) conditioned on more than two variables. The conditional MI is always non-negative since it is a weighted average of Kullback-Leibler divergences.
It has the following property:

I(X_1, ..., X_{n-1}, X_n; Y) − I(X_1, ..., X_{n-1}; Y) = I(X_n; Y | X_1, ..., X_{n-1}) ≥ 0.

Therefore, I(X_1, ..., X_{n-1}, X_n; Y) ≥ I(X_1, ..., X_{n-1}; Y), i.e., adding the variable X_n will always increase the mutual information. The information gained by adding a variable is measured by the conditional MI. When X_n and Y are conditionally independent given X_1, ..., X_{n-1}, the conditional MI between X_n and Y is

I(X_n; Y | X_1, ..., X_{n-1}) = 0,   (1)

so X_n provides no extra information about Y when X_1, ..., X_{n-1} are known. In particular, when X_n is a function of X_1, ..., X_{n-1}, the equality (1) holds. This is the reason why the joint MI can be used to eliminate redundant inputs. The conditional MI is useful when the input variables cannot be distinguished by the mutual information I(X_i; Y). For example, assume I(X_1; Y) = I(X_2; Y) = I(X_3; Y), and the problem is to select (X_1, X_2), (X_1, X_3) or (X_2, X_3). Since I(X_1, X_2; Y) − I(X_1, X_3; Y) = I(X_2; Y | X_1) − I(X_3; Y | X_1), we should choose (X_1, X_2) rather than (X_1, X_3) if I(X_2; Y | X_1) > I(X_3; Y | X_1). Otherwise, we should choose (X_1, X_3). All possible comparisons are represented by a binary tree in Figure 1. To estimate I(X_1, ..., X_k; Y), we need to estimate the joint probability p(x_1, ..., x_k, y). This suffers from the curse of dimensionality when k is large.

Data Visualization and Feature Selection 689

Sometimes, we may not be able to estimate high-dimensional MI due to sample shortage. Further work is needed to estimate high-dimensional joint MI based on parametric and non-parametric density estimations, when the sample size is not large enough. In some real world problems such as mining large data bases and radar pulse classification, the sample size is large.
Since the parametric densities for the underlying distributions are unknown, it is better to use non-parametric methods such as histograms to estimate the joint probability and the joint MI, to avoid the risk of specifying a wrong or too complicated model for the true density function.

Figure 1: Input selection based on the conditional MI.

In this paper, we use the joint mutual information I(X_i, X_j; Y) instead of the mutual information I(X_i; Y) to select inputs for a neural network classifier. Another application is to select the two inputs most relevant to the target variable for data visualization.

3 Data visualization methods

We present supervised data visualization methods based on the joint MI and discuss unsupervised methods based on ICA. The most natural way to visualize high-dimensional input patterns is to display them using two of the existing coordinates, where each coordinate corresponds to one input variable. Those inputs which are most relevant to the target variable correspond to the best coordinates for data visualization. Let (i*, j*) = arg max_{(i,j)} I(X_i, X_j; Y). Then the coordinate axes (X_{i*}, X_{j*}) should be used for visualizing the input patterns since the corresponding inputs achieve the maximum joint MI. To find the maximum I(X_{i*}, X_{j*}; Y), we need to evaluate every joint MI I(X_i, X_j; Y) for i < j. The number of evaluations is O(n²). Noticing that I(X_i, X_j; Y) = I(X_i; Y) + I(X_j; Y | X_i), we can first maximize the MI I(X_i; Y), then maximize the conditional MI. This algorithm is suboptimal, but only requires n − 1 evaluations of the joint MIs. Sometimes, this is equivalent to exhaustive search. One such example is given in the next section.
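As suggested above, for discrete (or binned) data the MI and the conditional MI can be estimated with simple plug-in histogram estimates. A minimal sketch (function names are ours):

```python
from collections import Counter
from math import log2

def mutual_info(xs, ys):
    """Plug-in histogram estimate of I(X;Y) in bits from paired samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    # I(X;Y) = sum_{x,y} p(x,y) log2 [ p(x,y) / (p(x)p(y)) ]
    return sum((c / n) * log2((c * n) / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def cond_mutual_info(xs, ys, zs):
    """I(X;Y|Z) via the chain rule: I(X,Z;Y) - I(Z;Y)."""
    return mutual_info(list(zip(xs, zs)), ys) - mutual_info(zs, ys)
```

For continuous inputs such as the radar features, each variable would first be quantized into histogram bins; the estimate degrades as more variables are binned jointly, which is exactly the curse of dimensionality noted above.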
Some existing methods to visualize high-dimensional patterns are based on dimensionality reduction methods such as PCA and CCA to find new coordinates to display the data. The new coordinates found by PCA and CCA are orthogonal in Euclidean space and in the space with the Mahalanobis inner product, respectively. However, these two methods are not suitable for visualizing nongaussian data because the projections on the PCA or CCA coordinates are not statistically independent for nongaussian vectors. Since the JMI method is model-independent, it is better for analyzing nongaussian data. Both CCA and maximum joint MI are supervised methods while the PCA method is unsupervised. An alternative to these methods is ICA for visualizing clusters [5]. ICA is a technique to transform a set of variables into a new set of variables, so that statistical dependency among the transformed variables is minimized. The version of ICA that we use here is based on the algorithms in [1, 8]. It discovers a non-orthogonal basis that minimizes mutual information between projections on basis vectors. We shall compare these methods in a real world application.

4 Application to Signal Visualization and Classification

4.1 Joint mutual information and visualization of radar pulse patterns

Our goal is to design a classifier for radar pulse recognition. Each radar pulse pattern is a 15-dimensional vector. We first compute the joint MIs, then use them to select inputs for the visualization and classification of radar pulse patterns. A set of radar pulse patterns is denoted by D = {(z^i, y^i) : i = 1, ..., N}, which consists of patterns in three different classes. Here, each z^i ∈ R^15 and each y^i ∈ {1, 2, 3}.
Figure 2: (a) MI vs conditional MI for the radar pulse data; maximizing the MI and then the conditional MI with O(n) evaluations gives I(X_{i1}, X_{j1}; Y) = 1.201 bits. (b) The joint MI for the radar pulse data; maximizing the joint MI gives I(X_{i*}, X_{j*}; Y) = 1.201 bits with O(n²) evaluations of the joint MI. (i1, j1) = (i*, j*) in this case.

Let i1 = arg max_i I(X_i; Y) and j1 = arg max_{j≠i1} I(X_j; Y | X_{i1}). From Figure 2(a), we obtain (i1, j1) = (2, 9) and I(X_{i1}, X_{j1}; Y) = I(X_{i1}; Y) + I(X_{j1}; Y | X_{i1}) = 1.201 bits. If the number of total inputs is n, then the number of evaluations for computing the mutual information I(X_i; Y) and the conditional mutual information I(X_j; Y | X_{i1}) is O(n). To find the maximum I(X_{i*}, X_{j*}; Y), we evaluate every I(X_i, X_j; Y) for i < j. These MIs are shown by the bars in Figure 2(b), where the i-th bundle displays the MIs I(X_i, X_j; Y) for j = i+1, ..., 15. In order to compute the joint MIs, the MI and the conditional MI are evaluated O(n) and O(n²) times respectively. The maximum joint MI is I(X_{i*}, X_{j*}; Y) = 1.201 bits. Generally, we only know I(X_{i1}, X_{j1}; Y) ≤ I(X_{i*}, X_{j*}; Y). But in this particular
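The O(n) greedy procedure just described, picking i1 by MI and then j1 by conditional MI, can be sketched as follows (self-contained, with a plug-in MI estimate; the names are ours):

```python
from collections import Counter
from math import log2

def mi(a, b):
    """Plug-in estimate of I(A;B) in bits from paired discrete samples."""
    n = len(a)
    pab, pa, pb = Counter(zip(a, b)), Counter(a), Counter(b)
    return sum((c / n) * log2(c * n / (pa[x] * pb[y]))
               for (x, y), c in pab.items())

def select_pair(X, y):
    """Greedy O(n) selection: i1 = argmax_i I(X_i;Y), then
    j1 = argmax_{j != i1} I(X_j;Y|X_i1) via I(X_i1,X_j;Y) - I(X_i1;Y)."""
    cols = list(range(len(X[0])))
    col = lambda i: [row[i] for row in X]
    i1 = max(cols, key=lambda i: mi(col(i), y))
    base = mi(col(i1), y)
    j1 = max((j for j in cols if j != i1),
             key=lambda j: mi(list(zip(col(i1), col(j))), y) - base)
    return i1, j1
```

The paper reports that on the radar data this greedy choice, (i1, j1) = (2, 9), coincides with the optimum of the exhaustive O(n²) search over all pairs.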
application, the equality holds. This suggests that sometimes we can use an efficient algorithm with only linear complexity to find the optimal coordinate axis view (X_{i*}, X_{j*}). The joint MI also gives other good sets of coordinate axis views with high joint MI values.

Figure 3: (a) Data visualization by two principal components; the spatial relation between patterns is not clear. (b) Use the optimal coordinate axis view (X_{i*}, X_{j*}) found via joint MI to project the radar pulse data; the patterns are well spread to give a better view on the spatial relation between patterns and the boundary between classes. (c) The CCA method. (d) The ICA method.

Each bar in Figure 2(b) is associated with a pair of inputs. Those pairs with high joint MI give good coordinate axis views for data visualization. Figure 3 shows that the data visualizations by the maximum JMI and the ICA are better than those by the PCA and the CCA because the data is nongaussian.

4.2 Radar pulse classification

Now we train a two-layer feed-forward network to classify the radar pulse patterns. Figure 3 shows that it is very difficult to separate the patterns by using just two inputs. We shall use all inputs or four selected inputs. The data set D is divided
into a training set D1 and a test set D2 consisting of 20 percent of the patterns in D. The network trained on the data set D1 using all input variables is denoted by Y = f(X_1, ..., X_n; W_1, W_2, θ), where W_1 and W_2 are weight matrices and θ is a vector of thresholds for the hidden layer. From the data set D, we estimate the mutual information I(X_i; Y) and select i1 = arg max_i I(X_i; Y). Given X_{i1}, we estimate the conditional mutual information I(X_j; Y | X_{i1}) for j ≠ i1. Choose the three inputs X_{i2}, X_{i3} and X_{i4} with the largest conditional MI. We found a quartet (i1, i2, i3, i4) = (1, 2, 3, 9). The two-layer feed-forward network trained on D1 with the four selected inputs is denoted by Y = g(X_1, X_2, X_3, X_9; W'_1, W'_2, θ'). There are 1365 choices to select 4 input variables out of 15. To set a reference performance for networks with four inputs for comparison, choose 20 quartets from the set Q = {(j1, j2, j3, j4) : 1 ≤ j1 < j2 < j3 < j4 ≤ 15}. For each quartet (j1, j2, j3, j4), a two-layer feed-forward network is trained using inputs (X_{j1}, X_{j2}, X_{j3}, X_{j4}). These networks are denoted by Y = h_i(X_{j1}, X_{j2}, X_{j3}, X_{j4}; W''_1, W''_2, θ''), i = 1, 2, ..., 20.

Figure 4: (a) The error rates of the network with four inputs (X_1, X_2, X_3, X_9) selected by the joint MI are well below the average error rates (with error bars attached) of the 20 networks with different input quartets randomly selected; this shows that the input quartet (X_1, X_2, X_3, X_9) is rare but informative. (b) The network with the inputs (X_1, X_2, X_3, X_9) converges faster than the network with all inputs. The former uses 65% fewer parameters (weights and thresholds) and 73% fewer inputs than the latter.

The classifier with the four best inputs is less expensive to construct and use, in terms of data acquisition costs, training time, and computing costs for real-time application. The mean and the variance of the error rates of the 20 networks are then computed. All networks have seven hidden units.
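A two-layer feed-forward network of the kind compared here (sigmoidal hidden layer with threshold vector θ, linear outputs, trained by gradient descent on squared error) can be sketched in pure Python. This is a generic illustration, not the authors' exact training setup; their networks had 7 hidden units and were trained on the radar features, while the toy data in the usage test below are ours.

```python
import math, random

def train_mlp(X, Y, hidden=7, epochs=300, lr=0.1, seed=0):
    """Two-layer network y = W2 * sigmoid(W1 * x + theta), squared-error SGD.
    Returns (predict function, per-epoch training losses)."""
    rng = random.Random(seed)
    n_in, n_out = len(X[0]), len(Y[0])
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    theta = [0.0] * hidden
    W2 = [[rng.uniform(-0.5, 0.5) for _ in range(hidden)] for _ in range(n_out)]
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    def forward(x):
        h = [sig(sum(w * xi for w, xi in zip(row, x)) + t)
             for row, t in zip(W1, theta)]
        return h, [sum(w * hi for w, hi in zip(row, h)) for row in W2]
    losses = []
    for _ in range(epochs):
        total = 0.0
        for x, y in zip(X, Y):
            h, o = forward(x)
            err = [oi - yi for oi, yi in zip(o, y)]
            total += sum(e * e for e in err)
            # Hidden deltas are computed from the pre-update output weights.
            dh = [sum(err[k] * W2[k][j] for k in range(n_out))
                  * h[j] * (1 - h[j]) for j in range(hidden)]
            for k in range(n_out):
                for j in range(hidden):
                    W2[k][j] -= lr * err[k] * h[j]
            for j in range(hidden):
                for i in range(n_in):
                    W1[j][i] -= lr * dh[j] * x[i]
                theta[j] -= lr * dh[j]
        losses.append(total)
    return (lambda x: forward(x)[1]), losses
```

Training 20 such networks on randomly chosen input quartets, versus one on the JMI-selected quartet, reproduces the kind of comparison shown in Figure 4.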
The training and testing error rates of the networks at each epoch are shown in Figure 4, where we see that the network with four inputs selected by the joint MI performs better than the networks with randomly selected input quartets and converges faster than the network with all inputs. The network with fewer inputs is not only faster in computing but also less expensive in data collection.

5 CONCLUSIONS

We have proposed data visualization and feature selection methods based on the joint mutual information and ICA. The maximum JMI method can find many good 2-D projections for visualizing high dimensional data which cannot be easily found by the other existing methods. Both the maximum JMI method and the ICA method are very effective for visualizing nongaussian data. The variable selection method based on the JMI is found to be better in eliminating redundancy in the inputs than other methods based on simple mutual information. Input selection methods based on mutual information (MI) have been useful in many applications, but they have two disadvantages. First, they cannot distinguish inputs when all of them have the same MI. Second, they cannot eliminate the redundancy in the inputs when one input is a function of other inputs. In contrast, our new input selection method based on the joint MI offers significant advantages in these two aspects. We have successfully applied these methods to visualize radar patterns and to select inputs for a neural network classifier to recognize radar pulses. We found a smaller yet more robust neural network for radar signal analysis using the JMI.

Acknowledgement: This research was supported by grant ONR N00014-96-10476.

References

[1] S. Amari, A. Cichocki, and H. H. Yang. A new learning algorithm for blind signal separation. In Advances in Neural Information Processing Systems, 8, eds. David S. Touretzky, Michael C. Mozer and Michael E.
Hasselmo, MIT Press: Cambridge, MA, pages 757-763, 1996.
[2] G. Barrows and J. Sciortino. A mutual information measure for feature selection with application to pulse classification. In IEEE Intern. Symposium on Time-Frequency and Time-Scale Analysis, pages 249-253, 1996.
[3] R. Battiti. Using mutual information for selecting features in supervised neural net learning. IEEE Trans. on Neural Networks, 5(4):537-550, July 1994.
[4] B. Bonnlander. Nonparametric selection of input variables for connectionist learning. Technical report, PhD Thesis, University of Colorado, 1996.
[5] C. Jutten and J. Herault. Blind separation of sources, Part I: An adaptive algorithm based on neuromimetic architecture. Signal Processing, 24:1-10, 1991.
[6] J. Moody. Prediction risk and architecture selection for neural networks. In V. Cherkassky, J. H. Friedman, and H. Wechsler, editors, From Statistics to Neural Networks: Theory and Pattern Recognition Applications. NATO ASI Series F, Springer-Verlag, 1994.
[7] H. Pi and C. Peterson. Finding the embedding dimension and variable dependencies in time series. Neural Computation, 6:509-520, 1994.
[8] H. H. Yang and S. Amari. Adaptive on-line learning algorithms for blind separation: Maximum entropy and minimum mutual information. Neural Computation, 9(7):1457-1482, 1997.
|
1999
|
86
|
1,738
|
An Information-Theoretic Framework for Understanding Saccadic Eye Movements

Tai Sing Lee*
Department of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
tai@cs.cmu.edu

Stella X. Yu*
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
stella@cnbc.cmu.edu

Abstract

In this paper, we propose that information maximization can provide a unified framework for understanding saccadic eye movements. In this framework, the mutual information among the cortical representations of the retinal image, the priors constructed from our long term visual experience, and a dynamic short-term internal representation constructed from recent saccades provides a map for guiding eye navigation. By directing the eyes to locations of maximum complexity in neuronal ensemble responses at each step, the automatic saccadic eye movement system greedily collects information about the external world, while modifying the neural representations in the process. This framework attempts to connect several psychological phenomena, such as pop-out and inhibition of return, to long term visual experience and short term working memory. It also provides an interesting perspective on contextual computation and formation of neural representation in the visual system.

1 Introduction

When we look at a painting or a visual scene, our eyes move around rapidly and constantly to look at different parts of the scene. Are there rules and principles that govern where the eyes are going to look next at each moment? In this paper, we sketch a theoretical framework based on information maximization to reason about the organization of saccadic eye movements.

*Both authors are members of the Center for the Neural Basis of Cognition, a joint center between the University of Pittsburgh and Carnegie Mellon University. Address: Rm 115, Mellon Institute, Carnegie Mellon University, Pittsburgh, PA 15213.
Information-Theoretic Framework for Understanding Saccadic Behaviors 835

Vision is fundamentally a Bayesian inference process. Given the measurements by the retinas, the brain's memory of eye positions, and its prior knowledge of the world, our brain has to make an inference about what is where in the visual scene. The retina, unlike a camera, has a peculiar design. It has a small foveal region dedicated to high-resolution analysis and a large low-resolution peripheral region for monitoring the rest of the visual field. At about 2.5° of visual angle away from the center of the fovea, visual acuity is already reduced by a half. When we 'look' (foveate) at a certain location in the visual scene, we direct our high-resolution fovea to analyze information in that location, taking a snapshot of the scene using our retina. Figures 1A-C illustrate what a retina would see at each fixation. It is immediately obvious that our retinal image is severely limited: it is clear only in the fovea and is very blurry in the surround, posing a severe constraint on the information available to our inference system. Yet, in our subjective experience, the world seems to be stable, coherent and complete in front of us. This is a paradox that has engaged philosophical and scientific debates for ages. To overcome the constraint of the retinal image, during perception the brain actively moves the eyes around to (1) gather information to construct a mental image of the world, and (2) make inferences about the world based on this mental image. Understanding the forces that drive saccadic eye movements is important to elucidating the principles of active perception.

Figure 1. A-C: retinal images in three separate fixations. D: a mental mosaic created by integrating the retinal images from these three and three other fixations.

It is intuitive to think that eye movements are used to gather information.
Eye movements have been suggested to provide a means for measuring the allocation of attention or the value of each kind of information in a particular context [16]. The basic assumption of our theory is that we move our eyes around to maximize our information intake from the world, for constructing the mental image and for making inferences about the scene. Therefore, the system should always look for and attentively fixate at a location in the retinal image that is the most unusual or the most unexplained, and hence carries the maximum amount of information.

2 Perceptual Representation

How can the brain decide which part of the retinal image is more unusual? First of all, we know the responses of V1 simple cells, modeled well by the Gabor wavelet pyramid [3,7], can be used to reconstruct the retinal image completely. It is also well established that the receptive fields of these neurons developed in such a way as to provide a compact code for natural images [8,9,13,14]. The idea of a compact code or sparse code, originally proposed by Barlow [2], is that early visual neurons capture the statistical correlations in natural scenes so that only a small number

836 T. S. Lee and S. X. Yu

of cells out of a large set will be activated to represent a particular scene at each moment. Extending this logic, we suggest that the complexity or the entropy of the neuronal ensemble response of a hypercolumn in V1 is therefore closely related to the strangeness of the image features being analyzed by the machinery in that hypercolumn. A frequent event will have a more compact representation in the neuronal ensemble response. Entropy is an information measure that captures the complexity or the variability of signals. The entropy of a neuronal ensemble in a hypercolumn can therefore be used to quantify the strangeness of a particular event. A hypercolumn in the visual cortex contains roughly 200,000 neurons, dedicated to analyzing different aspects of the image in its 'visual window'.
These cells are tuned to different spatial positions, orientations, spatial frequencies, color, disparity and other cues. There might also be a certain degree of redundancy, i.e. a number of neurons are tuned to the same feature. Thus a hypercolumn forms the fundamental computational unit for image analysis within a particular window in visual space. Each hypercolumn contains cells with receptive fields of different sizes, many significantly smaller than the aggregated 'visual window' of the hypercolumn. The entropy of a hypercolumn's ensemble response at a certain time t is the sum of the entropies of all the channels, given by

H(u(R_x, t)) = − Σ_{θ,σ} Σ_v p(u(R_x, v, σ, θ, t)) log2 p(u(R_x, v, σ, θ, t))

where u(R_x, t) denotes the responses of all complex cell channels inside the visual window R_x of a hypercolumn at time t, computed within a 20 msec time window. u(i, σ, θ, t) is the response of a V1 complex cell channel of a particular scale σ and orientation θ at spatial location i at time t. p(u(R_x, v, σ, θ, t)) is the probability of cells in that channel within the visual window R_x of the hypercolumn firing v spikes. v can be computed as the power modulus of the corresponding simple cell channels, modeled by Gabor wavelets [see 7]. Σ_v p(u(R_x, v, σ, θ, t)) = 1. The probability p(u(R_x, v, σ, θ, t)) can be computed at each moment in time because of the variations in spatial position of the receptive fields of similar cells within the hypercolumn (hence the 'same' cells in the hypercolumn are analyzing different image patches), and also because of the redundancy of cells coding similar features. The neurons' responses in a hypercolumn are subject to contextual modulation from other hypercolumns, partly in the form of lateral inhibition from cells with similar tunings.
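The ensemble entropy above is just a sum of per-channel spike-count entropies, each estimated from a histogram of the counts v observed across that channel's cells. A minimal sketch (the names are ours; the paper indexes channels by scale σ and orientation θ):

```python
from collections import Counter
from math import log2

def channel_entropy(spike_counts):
    """Entropy (bits) of one channel's spike-count distribution, estimated
    from the counts v observed across that channel's cells in the window."""
    n = len(spike_counts)
    return -sum((c / n) * log2(c / n)
                for c in Counter(spike_counts).values())

def ensemble_entropy(channels):
    """H(u(R_x, t)): sum of channel entropies over all (scale, orientation)
    channels; `channels` maps a (sigma, theta) pair to the list of spike
    counts v recorded inside the hypercolumn's visual window."""
    return sum(channel_entropy(v) for v in channels.values())
```

A channel whose cells all fire alike contributes zero bits, while a channel with highly variable counts contributes many, matching the text's link between ensemble entropy and the strangeness of the local image feature.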
The net observed effect is that the later part of V1 neurons' response, starting at about 80 msec, exhibits differential suppression depending on the spatial extent and the nature of the surround stimulus. The more similar the surround stimulus is to the center stimulus, and the larger the spatial extent of the 'similar surround', the stronger is the suppressive effect [e.g. 6]. Simoncelli and Schwartz [15] have proposed that the steady state responses of the cells can be modeled by dividing the response of the cell (i.e., modeled by the wavelet coefficient or its power modulus) by a weighted combination of the responses of its spatial neighbors in order to remove the statistical dependencies between the responses of spatial neighbors. These weights are found by minimizing the error in predicting the center signal from the surround signals. In our context, this idea of predictive coding [see also 14] is captured by the concept of mutual information between the ensemble responses of the different hypercolumns, as given below:

$$I(u(R_x,t); u(O_x,t-\delta t_1)) = H(u(R_x,t)) - H(u(R_x,t)\,|\,u(O_x,t-\delta t_1))$$
$$= \sum_{\sigma,\theta}\sum_{v_R,v_O} p(u(R_x,v_R,\sigma,\theta,t), u(O_x,v_O,\sigma,\theta,t)) \log_2 \frac{p(u(R_x,v_R,\sigma,\theta,t),\, u(O_x,v_O,\sigma,\theta,t))}{p(u(R_x,v_R,\sigma,\theta,t))\, p(u(O_x,v_O,\sigma,\theta,t))}$$

where u(R_x, t) is the ensemble response of the hypercolumn in question, and u(O_x, t) is the ensemble response of the surrounding hypercolumns. p(u(R_x, v_R, σ, θ, t)) is the probability that cells of a channel in the center hypercolumn assume the response value v_R, and p(u(O_x, v_O, σ, θ, t)) the probability that cells of a similar channel in the surrounding hypercolumns assume the response value v_O. δt₁ is the delay by which the surround information exerts its effect on the center hypercolumn. The mutual information I can be computed from the joint probability of ensemble responses of the center and the surround.
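The center-surround mutual information above can be estimated from a joint count table over response values. The following Python sketch is our own illustration (the paper gives no code; the function and variable names are assumptions):

```python
import numpy as np

def mutual_information(joint_counts):
    """Mutual information (bits) between center and surround responses,
    computed from a joint count table (rows index v_R, columns index v_O)."""
    p = joint_counts.astype(float) / joint_counts.sum()
    pr = p.sum(axis=1, keepdims=True)   # marginal of the center channel
    po = p.sum(axis=0, keepdims=True)   # marginal of the surround channel
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / (pr @ po)[mask])))

# A surround that perfectly predicts the center gives maximal mutual
# information (and hence minimal conditional entropy, i.e. low 'interest');
# an uninformative surround gives zero mutual information.
predictive = 5 * np.eye(4, dtype=int)
uninformative = np.ones((4, 4), dtype=int)
```

Here `predictive` yields 2 bits and `uninformative` yields 0 bits, so the discounted complexity of a location drops exactly when its surround already explains it.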
The steady state responses of the V1 neurons, as a result of this contextual modulation, are said to be more correlated with perceptual pop-out than the neurons' initial responses [5,6]. The complexity of the steady state response in the early visual cortex is described by the following conditional entropy:

$$H(u(R_x,t)\,|\,u(O_x,t-\delta t_1)) = H(u(R_x,t)) - I(u(R_x,t); u(O_x,t-\delta t_1)).$$

However, the computation in V1 is not limited to the creation of a compact representation through surround inhibition. In fact, we have suggested that V1 plays an active role in scene interpretation, particularly when such inference involves high resolution details [6]. Visual tasks such as the inference of contour and surface likely involve V1 heavily. These computations could further modify the steady state responses of V1, and hence the control of saccadic eye movements.

3 Mental Mosaic Representation

The perceptual representation provides the basic force for the brain to steer the high resolution fovea to locations of maximum uncertainty or maximum signal complexity. Foveation captures the maximum amount of available information in a location. Once a location is examined by the fovea, its information uncertainty is greatly reduced. The eyes should move on and not return to the same spot within a certain period of time. This is called 'inhibition of return'. How can we model this reduction of interest? We propose that the mind creates a mental mosaic of the scene in order to keep track of the information that has been gathered. By mosaic, we mean that the brain can assemble successive retinal images obtained from multiple fixations into a coherent mental picture of the scene. Figure 1D provides an example of a mental mosaic created by combining information from the retinal images of 6 fixations. Whether the brain actually keeps such a mental mosaic of the scene is currently under debate.
McConkie and Rayner [10] had suggested the idea of an integrative visual buffer to integrate information across multiple saccades. However, numerous experiments demonstrated that we actually remember relatively little across saccades [4]. This led to the idea that the brain may not need an explicit internal representation of the world. Since the world is always out there, the brain can access whatever information it needs at the appropriate detail by moving the eyes to the appropriate place at the appropriate time. The subjective feeling of a coherent and complete world in front of us is a mere illusion [e.g. 1]. The mental mosaic represented in Figure 1D might resemble McConkie and Rayner's theory superficially. But the existence of such a detailed high-resolution buffer with a large spatial support in the brain is rather biologically implausible. Rather, we think that the information corresponding to the mental mosaic is stored in an interpreted and semantic form in a mesh of Bayesian belief networks in the brain (e.g. involving PO, IT and area 46). This distributed semantic representation of the mental mosaic, however, is capable of generating detailed (sometimes false) imagery in early visual cortex using the massive recurrent convergent feedback from the higher areas to V1. However, because of the limited support provided by V1 machinery, the instantiation of mental imagery in V1 has to be done sequentially, one 'retinal image' frame at a time, presumably in conjunction with eye movement, even when the eyes are closed. This might explain why vivid visual dreaming is always accompanied by rapid eye movement in REM sleep. The mental mosaic accumulates information from the retinal images up to the last fixation and can provide a prediction of what the retina will see in the current fixation. For each u(i, σ, θ) cell, there is a corresponding effective prediction signal m(i, σ, θ) fed back from the mental mosaic.
This prediction signal can reduce the conditional entropy or complexity of the ensemble response in the perceptual representation by discounting the mutual information between the ensemble response to the retinal image and the mental mosaic prediction, as follows:

$$H(u(R_x,t)\,|\,m(R_x,t-\delta t_2)) = H(u(R_x,t)) - I(u(R_x,t); m(R_x,t-\delta t_2))$$

where δt₂ is the transmission delay from the mental mosaic back to V1. At places the fovea has visited, the mental mosaic representation has high resolution information, and m(i, σ, θ, t − δt₂) can explain u(i, σ, θ, t) fully. Hence the mutual information is high at those hypercolumns and the conditional entropy H(u(R_x,t) | m(R_x,t−δt₂)) is low, with two consequences: (1) the system will not get the eyes stuck at a particular location; once the information at i is updated to the mental mosaic, the system will lose interest and move on; (2) the system will exhibit 'inhibition of return', as the information in the visited locations is fully predicted by the mental mosaic. Also, from this standpoint, the 'habituation dynamics' often observed in visual neurons when the same stimulus is presented multiple times might not be simply due to neuro-chemical fatigue, but might be understood in terms of the mental mosaic being updated and then fed back to explain the perceptual representation in V1. The mental mosaic is in effect our short-term memory of the scene. It has a forgetting dynamics and needs to be periodically updated. Otherwise, it will rapidly fade away.
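The two consequences just listed (losing interest in explained locations, and inhibition of return) can be sketched as a greedy fixation policy over an entropy map. This Python fragment is purely our own illustration of the idea, not the paper's model; the array encoding and function names are assumptions:

```python
import numpy as np

def next_fixation(entropy_map, visited):
    """Fixate the location of maximum residual uncertainty. Locations already
    absorbed into the mental mosaic (`visited`) are fully predicted, so their
    discounted complexity is driven to the floor: inhibition of return."""
    score = np.array(entropy_map, dtype=float)
    for loc in visited:
        score[loc] = -np.inf        # explained away, no residual interest
    return np.unravel_index(np.argmax(score), score.shape)

def scan_path(entropy_map, n_saccades):
    """Greedy scan-path: repeatedly foveate the most uncertain unvisited spot."""
    visited = []
    for _ in range(n_saccades):
        visited.append(next_fixation(entropy_map, visited))
    return visited
```

On a 2x2 map with entropies [[3, 1], [2, 0]], three saccades visit the locations in decreasing order of uncertainty without revisiting any of them.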
4 Overall Reactive Saccadic Behaviors

Now we can combine the influence of the two predictive processes to arrive at a discounted complexity measure of the hypercolumn's ensemble response:

$$H(u(R_x,t)) - I(u(R_x,t); u(O_x,t-\delta t_1)) - I(u(R_x,t); m(R_x,t-\delta t_2)) + I(u(O_x,t-\delta t_1); m(R_x,t-\delta t_2))$$

If we can assume that the long range surround priors and the mental mosaic short term memory are independent processes, we can leave out the last term, I(u(O_x, t−δt₁); m(R_x, t−δt₂)), of the equation. The system, after each saccade, will evaluate the new retinal scene and select the location where the perceptual representation has the maximum conditional entropy. To maximize the information gain, the system must constantly search for and make a saccade to the locations of maximum uncertainty (or complexity) computed from the hypercolumn ensemble responses in V1 at each fixation. Unless the number of saccades is severely limited, this locally greedy algorithm, coupled with the inhibition of return mechanism, will likely steer the system to a relatively optimal global sampling of the world - in the sense that the average information gain per saccade is maximized, and the mental mosaic's dissonance with the world is minimized.

5 Task-Dependent Schema Representation

However, human eye movements are not simply controlled by the generic information in a bottom-up fashion. Yarbus [16] has shown that, when staring at a face, subjects' eyes tend to go back to the same locations (eyes, mouth) over and over again. Further, he showed that when asked different questions, subjects exhibited different kinds of scan-paths when looking at the same picture. Norton and Stark [12] also showed that eye movements are not random, but often exhibit repetitive or even idiosyncratic path patterns.
To capture these ideas, we propose a third representation, called the task schema, to provide the necessary top-down information to bias eye movement control. It specifies the learned or habitual scan-paths for a particular task in a particular context, or assigns weights to different types of information. Given that we are mostly unconscious of the scan-path patterns we are making, these task-sensitive or context-sensitive habitual scan-patterns might be encoded at the level of motor programs, and be downloaded when needed without our conscious control. These motor programs for scan-paths can be trained by reinforcement learning. For example, since the eyes and the mouth convey most of the emotional content of a facial expression, a successful interpretation of another person's emotion could provide the reward signal to reinforce the motor programs just executed, or the fixations to certain facial features. These unconscious scan-path motor programs could provide additional modulation to automatic saccadic eye movement generation.

6 Discussion

In this paper, we propose that information maximization might provide a theoretical framework for understanding automatic saccadic eye movement behaviors in humans. In this proposal, each hypercolumn in V1 is considered a fundamental computational unit. The relative complexity or entropy of the neuronal ensemble response in the V1 hypercolumns, discounted by the predictive effect of the surround, higher order representations and working memory, creates a force field to guide eye navigation. The framework we sketched here bridges natural scene statistics to eye movement control via the more established ideas of sparse coding and predictive coding in neural representation. Information maximization has been suggested as a possible explanation for shaping the receptive fields in the early visual cortex according to the statistics of natural images [8,9,13,14] to create a minimum-entropy code [2,3].
As a result, a frequent event is represented efficiently by the response of a few neurons in a large set, resulting in a lower hypercolumn ensemble entropy, while unusual events provoke ensemble responses of higher complexity. We suggest that higher complexity in ensemble responses will arouse attention and draw scrutiny by the eyes, forcing the neural representation to continue adapting to the statistics of the natural scenes. The formulation here also suggests that information maximization might provide an explanation for the formation of the horizontal predictive network in V1 as well as higher order internal representations, consistent with the ideas of predictive coding [11,14,15]. Our theory hence predicts that the adaptation of the neural representations to the statistics of natural scenes will lead to the adaptation of saccadic eye movement behaviors.

Acknowledgements

The authors have been supported by a grant from the McDonnell Foundation and an NSF grant (LIS 9720350). Yu is also supported in part by a grant to Takeo Kanade.

References

[1] Ballard, D.H., Hayhoe, M.M., Pook, P.K. & Rao, R.P.N. (1997). Deictic codes for the embodiment of cognition. Behavioral and Brain Sciences, 20:4, December, 723-767.
[2] Barlow, H.B. (1989). Unsupervised learning. Neural Computation, 1, 295-311.
[3] Daugman, J.G. (1989). Entropy reduction and decorrelation in visual coding by oriented neural receptive fields. IEEE Transactions on Biomedical Engineering, 36, 107-114.
[4] Irwin, D.E. (1991). Information integration across saccadic eye movements. Cognitive Psychology, 23(3), 420-456.
[5] Knierim, J.J. & Van Essen, D.C. (1992). Neural responses to static texture patterns in area V1 of the macaque monkey. Journal of Neurophysiology, 67, 961-980.
[6] Lee, T.S., Mumford, D., Romero, R. & Lamme, V.A.F. (1998). The role of primary visual cortex in higher level vision. Vision Research, 38, 2429-2454.
[7] Lee, T.S. (1996). Image representation using 2D Gabor wavelets.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 18:10, 959-971.
[8] Lewicki, M. & Olshausen, B. (1998). Inferring sparse, overcomplete image codes using an efficient coding framework. In Advances in Neural Information Processing Systems 10, M. Jordan, M. Kearns and S. Solla (eds). MIT Press.
[9] Linsker, R. (1989). How to generate ordered maps by maximizing the mutual information between input and output signals. Neural Computation, 1, 402-411.
[10] McConkie, G.W. & Rayner, K. (1976). Identifying the span of the effective stimulus in reading: literature review and theories of reading. In H. Singer and R.B. Ruddell (eds), Theoretical Models and Processes of Reading, 137-162. Newark, DE: International Reading Association.
[11] Mumford, D. (1992). On the computational architecture of the neocortex II. Biological Cybernetics, 66, 241-251.
[12] Norton, D. & Stark, L. (1971). Eye movements and visual perception. Scientific American, 224, 34-43.
[13] Olshausen, B.A. & Field, D.J. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381, 607-609.
[14] Rao, R.P.N. & Ballard, D.H. (1999). Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2:1, 79-87.
[15] Simoncelli, E.P. & Schwartz, O. (1999). Modeling surround suppression in V1 neurons with a statistically-derived normalization model. In Advances in Neural Information Processing Systems 11, M.S. Kearns, S.A. Solla and D.A. Cohn (eds). MIT Press.
[16] Yarbus, A.L. (1967). Eye Movements and Vision. Plenum Press.
Noisy Neural Networks and Generalizations

Hava T. Siegelmann, Industrial Eng. and Management, Mathematics, Technion - IIT, Haifa 32000, Israel, iehava@ie.technion.ac.il
Alexander Roitershtein, Mathematics, Technion - IIT, Haifa 32000, Israel, roiterst@math.technion.ac.il
Asa Ben-Hur, Industrial Eng. and Management, Technion - IIT, Haifa 32000, Israel, asa@tx.technion.ac.il

Abstract

In this paper we define a probabilistic computational model which generalizes many noisy neural network models, including the recent work of Maass and Sontag [5]. We identify weak ergodicity as the mechanism responsible for restriction of the computational power of probabilistic models to definite languages, independent of the characteristics of the noise: whether it is discrete or analog, or if it depends on the input or not, and independent of whether the variables are discrete or continuous. We give examples of weakly ergodic models including noisy computational systems with noise depending on the current state and inputs, aggregate models, and computational systems which update in continuous time.

1 Introduction

Noisy neural networks were recently examined, e.g., in [1, 4, 5]. It was shown in [5] that Gaussian-like noise reduces the power of analog recurrent neural networks to the class of definite languages, which are a strict subset of the regular languages. Let E be an arbitrary alphabet. L ⊆ E* is called a definite language if for some integer r any two words coinciding on the last r symbols are either both in L or neither in L. The ability of a computational system to recognize only definite languages can be interpreted as saying that the system forgets all its input signals, except for the most recent ones. This property is reminiscent of human short term memory. "Definite probabilistic computational models" have their roots in Rabin's pioneering work on probabilistic automata [9].
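The definition of a definite language can be made concrete with a short Python sketch (our own illustration, not from the paper): membership is decided entirely by the last r symbols of the word, so everything earlier is forgotten.

```python
def make_definite_recognizer(accept_suffixes, r):
    """Build a recognizer for a definite language: any two words agreeing on
    their last r symbols receive the same answer. Encoding the language by a
    set of accepting suffixes is our own illustrative choice."""
    accept = {tuple(s) for s in accept_suffixes}

    def recognize(word):
        return tuple(word[-r:]) in accept   # only the last r symbols matter

    return recognize

# Example: L = words over {a, b} that end in "ab" (definite with r = 2).
in_L = make_definite_recognizer(["ab"], 2)
```

For instance, `in_L("aaab")` and `in_L("bbab")` must agree, since the two words coincide on their last two symbols.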
He identified a condition on probabilistic automata with a finite state space which restricts them to definite languages. Paz [8] generalized Rabin's condition, applying it to automata with a countable state space, and calling it weak ergodicity [7, 8]. In their ground-breaking paper [5], Maass and Sontag extended the principle leading to definite languages to a finite interconnection of continuous-valued neurons. They proved that in the presence of "analog noise" (e.g., Gaussian), recurrent neural networks are limited in their computational power to definite languages. Under a different noise model, Maass and Orponen [4] and Casey [1] showed that such neural networks are reduced in their power to regular languages. In this paper we generalize the condition of weak ergodicity, making it applicable to numerous probabilistic computational machines. In our general probabilistic model, the state space can be arbitrary: it is not constrained to be a finite or infinite set, to be a discrete or non-discrete subset of some Euclidean space, or even to be a metric or topological space. The input alphabet is arbitrary as well (e.g., bits, rationals, reals, etc.). The stochasticity is not necessarily defined via a transition probability function (TPF) as in all the aforementioned probabilistic and noisy models, but through the more general Markov operators acting on measures. Our Markov Computational Systems (MCS's) include as special cases Rabin's actual probabilistic automata with cut-point [9], the quasi-definite automata of Paz [8], and the noisy analog neural networks of Maass and Sontag [5].
Interestingly, our model also includes: analog dynamical systems and neural models which have no underlying deterministic rule but rather update probabilistically by using finite memory; neural networks with an unbounded number of components; networks of variable dimension (e.g., "recruiting networks"); hybrid systems that combine discrete and continuous variables; stochastic cellular automata; and stochastic coupled map lattices. We prove that all weakly ergodic Markov systems are stable, i.e., are robust with respect to architectural imprecisions and environmental noise. This property is desirable for both biological and artificial neural networks. This robustness was known up to now only for the classical discrete probabilistic automata [8, 9]. To enable practicality and ease in deciding weak ergodicity for given systems, we provide two conditions on the transition probability functions under which the associated computational system becomes weakly ergodic. One condition is based on a version of Doeblin's condition [5], while the second is motivated by the theory of scrambling matrices [7, 8]. In addition we construct various examples of weakly ergodic systems, which include synchronous or asynchronous computational systems, and hybrid continuous- and discrete-time systems.

2 Markov Computational System (MCS)

Instead of describing various types of noisy neural network models or stochastic dynamical systems, we define a general abstract probabilistic model. When dealing with systems containing inherent elements of uncertainty (e.g., noise) we abandon the study of individual trajectories in favor of an examination of the flow of state distributions. The noise models we consider are homogeneous in time, in that they may depend on the input, but do not depend on time. The dynamics we consider is defined by operators acting in the space of measures, called Markov operators [6]. In the following we define the concepts which are required for such an approach.
Let E be an arbitrary alphabet and Ω an abstract state space. We assume that a σ-algebra B (not necessarily the Borel sets) of subsets of Ω is given; thus (Ω, B) is a measurable space. Let us denote by P the set of probability measures on (Ω, B). This set is called a distribution space. Let £ be the space of finite measures on (Ω, B) with the total variation norm defined by

$$\|\mu\|_1 = |\mu|(\Omega) = \sup_{A\in B}\mu(A) - \inf_{A\in B}\mu(A). \qquad (1)$$

Denote by C the set of all bounded linear operators acting from £ to itself. The ‖·‖₁ norm on £ induces a norm $\|P\|_1 = \sup_{\mu\in P}\|P\mu\|_1$ in C. An operator P ∈ C is said to be a Markov operator if for any probability measure μ ∈ P, the image Pμ is again a probability measure. For a Markov operator, ‖P‖₁ = 1.

Definition 2.1 A Markov system is a set of Markov operators T = {P_u : u ∈ E}.

With any Markov system T, one can associate a probabilistic computational system. If the probability distribution on the initial states is given by the probability measure μ₀, then the distribution of states after n computational steps on input w = w₀, w₁, ..., w_n is defined as in [5, 8]:

$$P_w\mu_0 = P_{w_n}\cdots P_{w_1}P_{w_0}\mu_0. \qquad (2)$$

Let A and R be two subsets of P with the property of having a ρ-gap:

$$\mathrm{dist}(A, R) = \inf_{\mu\in A,\,\nu\in R}\|\mu - \nu\|_1 = \rho > 0. \qquad (3)$$

The first set is called the set of accepting distributions and the second the set of rejecting distributions. A language L ⊆ E* is said to be recognized by the Markov computational system M = (£, A, R, E, μ₀, T) if w ∈ L ⇔ P_w μ₀ ∈ A, and w ∉ L ⇔ P_w μ₀ ∈ R. This model of language recognition with a gap between accepting and rejecting spaces agrees with Rabin's model of probabilistic automata with isolated cut-point [9] and the model of analog probabilistic computation [4, 5]. An example of a Markov system is a system of operators defined by a TPF on (Ω, B).
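For a finite state space, each operator P_u becomes a row-stochastic matrix and the distribution flow of Eq. (2) is a chain of matrix products. The following Python fragment is our own finite-state illustration (names and example kernels are assumptions, not taken from the paper):

```python
import numpy as np

def run_word(kernels, word, mu0):
    """Eq. (2): after processing w = w_0 ... w_n the state distribution is
    P_{w_n} ... P_{w_1} P_{w_0} mu_0. Each operator is a finite row-stochastic
    matrix acting on a row distribution vector."""
    mu = np.asarray(mu0, dtype=float)
    for u in word:
        mu = mu @ kernels[u]      # apply the Markov operator for letter u
    return mu

# One kernel per input letter; both rows sum to 1.
kernels = {
    'a': np.array([[0.9, 0.1], [0.2, 0.8]]),
    'b': np.array([[0.5, 0.5], [0.5, 0.5]]),
}
mu = run_word(kernels, "ab", [1.0, 0.0])
```

Note how the letter 'b', whose rows are identical, erases all dependence on the starting state in a single step: exactly the forgetting behavior that weak ergodicity formalizes.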
Let P_u(x, A) be the probability of moving from a state x to the set of states A upon receiving the input signal u ∈ E. The function P_u(x, ·) is a probability measure for all x ∈ Ω, and P_u(·, A) is a measurable function of x for any A ∈ B. In this case, P_u μ(A) is defined by

$$P_u\mu(A) = \int_\Omega P_u(x, A)\,\mu(dx). \qquad (4)$$

3 Weakly Ergodic MCS

Let P ∈ C be a Markov operator. The real number $\delta'(P) = 1 - \frac{1}{2}\sup_{\mu,\nu\in P}\|P\mu - P\nu\|_1$ is called the ergodicity coefficient of the Markov operator. We denote δ(P) = 1 − δ′(P). It can be proven that for any two Markov operators P₁, P₂, δ(P₁P₂) ≤ δ(P₁)δ(P₂). The ergodicity coefficient was introduced by Dobrushin [2] for the particular case of Markov operators induced by a TPF P(x, A). In this special case $\delta'(P) = 1 - \sup_{x,y}\sup_{A}|P(x,A) - P(y,A)|$. Weakly ergodic systems were introduced and studied by Paz in the particular case of a denumerable state space Ω, where Markov operators are represented by infinite-dimensional matrices. The following definition makes no assumption on the associated measurable space.

Definition 3.1 A Markov system {P_u, u ∈ E} is called weakly ergodic if for any α > 0 there is an integer r = r(α) such that for any w ∈ E^{≥r} and any μ, ν ∈ P,

$$\delta(P_w) = \frac{1}{2}\|P_w\mu - P_w\nu\|_1 \le \alpha. \qquad (5)$$

An MCS M is called weakly ergodic if its associated Markov system {P_u, u ∈ E} is weakly ergodic. An MCS M is weakly ergodic if and only if there is an integer r and a real number α < 1 such that ‖P_wμ − P_wν‖₁ ≤ α for any word w of length r. Our most general characterization of weak ergodicity is as follows [11]:

Theorem 1 An abstract MCS M is weakly ergodic if and only if there exists a multiplicative operator norm ‖·‖** on C, equivalent to the norm $\|P\|_B = \sup_{\{\mu:\,\mu(\Omega)=0\}}\frac{\|P\mu\|_1}{\|\mu\|_1}$, such that $\sup_{u\in E}\|P_u\|_{**} \le \epsilon$ for some number ε < 1.

The next theorem connects the computational power of weakly ergodic MCS's with the class of definite languages, generalizing the results of Rabin [9], Paz [8, p.
175], and Maass and Sontag [5].

Theorem 2 Let M be a weakly ergodic MCS. If a language L can be recognized by M, then it is definite.

4 The Stability Theorem of Weakly Ergodic MCS

An important issue for any computational system is whether the machine is robust with respect to small perturbations of the system's parameters or under some external noise. The stability of language recognition by weakly ergodic MCS's under perturbations of their Markov operators was previously considered by Rabin [9] and Paz [7, 8]. We next state a general version of the stability theorem that is applicable to our wide notion of weakly ergodic systems. We first define two MCS's M and M̂ to be similar if they share the same measurable space (Ω, B), alphabet E, and sets A and R, and if they differ only by their associated Markov operators.

Theorem 3 Let M and M̂ be two similar MCS's such that the first is weakly ergodic. Then there is α > 0 such that if ‖P_u − P̂_u‖₁ ≤ α for all u ∈ E, then the second is also weakly ergodic. Moreover, these two MCS's recognize exactly the same class of languages.

Corollary 3.1 Let M and M̂ be two similar MCS's. Suppose that the first is weakly ergodic. Then there exists β > 0 such that if $\sup_{A\in B}|P_u(x,A) - \hat P_u(x,A)| \le \beta$ for all u ∈ E and x ∈ Ω, the second is also weakly ergodic. Moreover, these two MCS's recognize exactly the same class of languages.

A mathematically deeper result which implies Theorem 3 was proven in [11]:

Theorem 4 Let M and M̂ be two similar MCS's, such that the first is weakly ergodic and the second is arbitrary. Then for any α > 0 there exists ε > 0 such that ‖P_u − P̂_u‖₁ ≤ ε for all u ∈ E implies ‖P_w − P̂_w‖₁ ≤ α for all words w ∈ E*.

Theorem 3 follows from Theorem 4. To see this, one can choose any α < ρ in Theorem 4 and observe that ‖P_w − P̂_w‖₁ ≤ α < ρ implies that the word w is accepted or rejected by M̂ in accordance with whether it is accepted or rejected by M.
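For a finite state space, the ergodicity coefficient of Section 3 reduces to a comparison of matrix rows, and its submultiplicativity can be checked directly. This Python sketch is our own illustration (example matrices are arbitrary):

```python
import numpy as np

def delta(P):
    """Dobrushin's quantity delta(P) = sup_{x,y} sup_A |P(x,A) - P(y,A)| for
    a finite row-stochastic matrix, which equals half the largest
    total-variation distance between any two rows."""
    n = P.shape[0]
    return max(0.5 * np.abs(P[x] - P[y]).sum()
               for x in range(n) for y in range(n))

P1 = np.array([[0.9, 0.1], [0.2, 0.8]])
P2 = np.array([[0.6, 0.4], [0.3, 0.7]])
```

Submultiplicativity, delta(P1 P2) <= delta(P1) * delta(P2), is exactly what makes long input words wash out the initial distribution and restricts a weakly ergodic system to definite languages.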
5 Conditions on the Transition Probabilities

This section discusses practical conditions for weakly ergodic MCS's in which the Markov operators P_u are induced by transition probability functions as in (4). Clearly, a simple sufficient condition for an MCS to be weakly ergodic is that $\sup_{u\in E}\delta(P_u) \le 1 - c$ for some c > 0. Maass and Sontag used Doeblin's condition to prove the computational power of noisy neural networks [5]. Although the networks in [5] constitute a very particular case of weakly ergodic MCS's, Doeblin's condition is applicable also to our general model. The following version of Doeblin's condition was given by Doob [3]:

Definition 5.1 [3] Let P(x, A) be a TPF on (Ω, B). We say that it satisfies Doeblin's condition D₀ if there exist a constant c and a probability measure φ on (Ω, B) such that P(x, A) ≥ cφ(A) for any set A ∈ B.

If an MCS M is weakly ergodic, then all its associated TPF P_w(x, A) must satisfy D₀ for some n = n(w). Doob has proved [3, p. 197] that if P(x, A) satisfies Doeblin's condition D₀ with constant c, then for any μ, ν ∈ P, ‖Pμ − Pν‖₁ ≤ (1 − c)‖μ − ν‖₁, i.e., δ(P) ≤ 1 − c. This leads us to the following definition.

Definition 5.2 Let M be an MCS. We say that the space Ω is small with respect to M if there exists an m > 0 such that all associated TPF P_w(x, A), w ∈ E^m, satisfy Doeblin's condition D₀ uniformly with the same constant c, i.e., P_w(x, A) ≥ cφ_w(A), w ∈ E^m.

The following theorem strengthens the result of Maass and Sontag [5].

Theorem 5 Let M be an MCS. If the space Ω is small with respect to M, then M is weakly ergodic, and it can recognize only definite languages.

This theorem provides a convenient method for checking weak ergodicity of a given TPF.
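For a finite TPF, a Doeblin minorization constant can be read off directly, and Doob's bound delta(P) <= 1 - c becomes a one-line check. This Python sketch is our own finite-state illustration (the columnwise-minimum construction of the minorizing measure is a standard choice, not the paper's notation):

```python
import numpy as np

def doeblin_constant(P):
    """Mass of a common minorizing component: taking phi proportional to the
    columnwise minimum of the finite TPF P gives the largest c for that
    choice with P(x, A) >= c * phi(A) for every starting state x."""
    return float(P.min(axis=0).sum())

def delta_bound(P):
    """Doob's bound quoted in the text: delta(P) <= 1 - c."""
    return 1.0 - doeblin_constant(P)

# Every row puts at least 0.2 on state 0 and 0.1 on state 1, so Doeblin's
# condition holds with c = 0.3 and hence delta(P) <= 0.7.
Pd = np.array([[0.9, 0.1],
               [0.2, 0.8]])
```

Uniform positivity of this constant over all words of some fixed length m is precisely the "small space" condition of Definition 5.2 in the finite-state setting.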
The theorem implies that it is sufficient to execute the following simple check: choose any integer n, and then verify that for every state x and all input strings w ∈ Eⁿ, the "absolutely continuous" part of all TPF P_w, w ∈ Eⁿ, is uniformly bounded from below:

$$\psi_w(\{z : p_w(x, z) \ge c_1\}) \ge c_2, \qquad (6)$$

where p_w(x, y) is the density of the absolutely continuous component of P_w(x, ·) with respect to ψ_w, and c₁, c₂ are positive numbers. Most practical systems can be defined by null preserving TPF (including, for example, the systems in [5]). For these systems we provide (Theorem 6) a sufficient and necessary condition in terms of density kernels. A TPF P_u(x, A), u ∈ E, is called null preserving with respect to a probability measure μ ∈ P if it has a density with respect to μ, i.e., $P_u(x, A) = \int_A p_u(x, z)\,\mu(dz)$. It is not hard to see that the property of null preserving per letter u ∈ E implies that all TPF P_w(x, A) of words w ∈ E* are null preserving as well. In this case $\delta(P_u) = 1 - \inf_{x,y}\int_\Omega \min\{p_u(x,z), p_u(y,z)\}\,\mu(dz)$, and we have:

Theorem 6 Let M be an MCS defined by null preserving transition probability functions P_u, u ∈ E. Then M is weakly ergodic if and only if there exists n such that $\inf_{w\in E^n}\inf_{x,y}\int_\Omega \min\{p_w(x,z), p_w(y,z)\}\,\mu(dz) > 0$.

A similar result was previously established by Paz [7, 8] for the case of a denumerable state space Ω. This theorem allows us to treat examples which are not covered by Theorem 5. For example, suppose that the space Ω is not small with respect to an MCS M, but for some n and any w ∈ Eⁿ there exists a measure ψ_w on (Ω, B) with the property that for any couple of states x, y ∈ Ω,

$$\psi_w(\{z : \min\{p_w(x,z), p_w(y,z)\} \ge c_1\}) \ge c_2, \qquad (7)$$

where p_w(x, y) is the density of P_w(x, ·) with respect to ψ_w, and c₁, c₂ are positive numbers. This condition may occur even if there is no z such that p_w(x, z) ≥ c₁ for all x ∈ Ω.

6 Examples of Weakly Ergodic Systems

1. The Synchronous Parallel Model. Let (Ω_i, B_i), i = 1, 2, ...
, N be a collection of measurable spaces. Define $\Omega^i = \prod_{j\ne i}\Omega_j$ and $B^i = \prod_{j\ne i}B_j$; then (Ω^i, B^i) are measurable spaces. Define also $E_i = E \times \Omega^i$, and let $T_i = \{P^i_{x^i,u}(x_i, A_i) : (x^i, u) \in E_i\}$ be given stochastic kernels. Each set T_i defines an MCS M_i. We can define an aggregate MCS by setting $\Omega = \prod_i \Omega_i$, $B = \prod_i B_i$, $S = \prod_i S_i$, $R = \prod_i R_i$, and

$$P_u\Big(x, \prod_i A_i\Big) = \prod_{i=1}^N P^i_{x^i,u}(x_i, A_i). \qquad (8)$$

This describes a model of N noisy computational systems that update in synchronous parallelism. The state of the whole aggregate is a vector of the states of the individual components, and each receives the states of all other components as part of its input.

Theorem 7 [12] Let M be an MCS defined by equation (8). It is weakly ergodic if at least one set of operators T_i is such that $\delta(P^i_{x^i,u}) \le 1 - c$ for any u ∈ E, $x^i \in \Omega^i$ and some positive number c.

2. The Asynchronous Parallel Model. In this model, at every step only one component is activated. Suppose that a collection of N similar MCS's M_i, i = 1, ..., N is given. Consider a probability measure ξ = {ξ₁, ..., ξ_N} on the set K = {1, ..., N}. Assume that in each computational step only one MCS is activated. The current state of the whole aggregate is represented by the state of its active component. Assume also that the probability of a computational system M_i being activated is time-independent and given by Prob(M_i) = ξ_i. The aggregate system is then described by the stochastic kernels

$$P_u(x, A) = \sum_{i=1}^N \xi_i P^i_u(x, A). \qquad (9)$$

Theorem 8 [12] Let M be an MCS defined by formula (9). It is weakly ergodic if at least one of the sets of operators $\{P^1_u\}, \ldots, \{P^N_u\}$ is weakly ergodic.

3. Hybrid Weakly Ergodic Systems. We now present a hybrid weakly ergodic computational system consisting of both continuous and discrete elements. The evolution of the system is governed by a differential equation, while its input arrives at discrete times. Let Ω = ℝⁿ, and consider a collection of differential equations

$$\dot x_u(s) = \psi_u(x_u(s)), \quad u \in E,\ s \in [0, \infty).$$
(10)

Suppose that ψ_u(x) is sufficiently smooth to ensure the existence and uniqueness of solutions of Equation (10) for s ∈ [0, 1] and for any initial condition. Consider a computational system which receives an input u(t) at discrete times t₀, t₁, t₂, .... In the interval t ∈ [t_i, t_{i+1}] the behavior of the system is described by Equation (10), where s = t − t_i. A random initial condition for the time t_n is defined by

$$\Pr\big(x_{u(t_n)}(0) \in A\big) = P_{u(t_n)}\big(x_{u(t_{n-1})}(1), A\big), \qquad (11)$$

where $x_{u(t_{n-1})}(1)$ is the state of the system after the previously completed computations, and P_u(x, A), u ∈ E, is a family of stochastic kernels on Ω × B. This describes a system which receives inputs at discrete instants of time; the input letters u ∈ E cause random perturbations of the state $x_{u(t-1)}(1)$ governed by the transition probability functions $P_{u(t)}(x_{u(t-1)}, A)$. At all other times the system is a noise-free continuous computational system which evolves according to Equation (10). Let Ω = ℝⁿ, let x₀ ∈ Ω be a distinguished initial state, and let S and R be two subsets of Ω with the property of having a ρ-gap: $\mathrm{dist}(S, R) = \inf_{x\in S,\,y\in R}\|x - y\| = \rho > 0$. The first set is called the set of accepting final states and the second the set of rejecting final states. We say that the hybrid computational system M = (Ω, E, x₀, ψ_u, S, R) recognizes L ⊆ E* if for all w = w₀...w_n ∈ E* and the end letter \$ ∉ E the following holds: $w \in L \Leftrightarrow \mathrm{Prob}(x_{w\$}(1) \in S) > \frac12 + \epsilon$, and $w \notin L \Leftrightarrow \mathrm{Prob}(x_{w\$}(1) \in R) > \frac12 + \epsilon$.

Theorem 9 [12] Let M be a hybrid computational system. It is weakly ergodic if its set of evolution operators T = {P_u : u ∈ E} is weakly ergodic.

References

[1] Casey, M., The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction. Neural Computation, 8, 1135-1178, 1996.
[2] Dobrushin, R. L., Central limit theorem for nonstationary Markov chains I, II. Theor. Probability Appl., vol. 1, 1956, pp. 65-80, 298-383.
[3] Doob, J.
L., Stochastic Processes, John Wiley and Sons, Inc., 1953.
[4] Maass, W. and Orponen, P., On the effect of analog noise in discrete time computation, Neural Computation, 10(5), 1998, pp. 1071-1095.
[5] Maass, W. and Sontag, E., Analog neural nets with Gaussian or other common noise distributions cannot recognize arbitrary regular languages, Neural Computation, 11, 1999, pp. 771-782.
[6] Neveu, J., Mathematical Foundations of the Calculus of Probability, Holden-Day, San Francisco, 1964.
[7] Paz, A., Ergodic theorems for infinite probabilistic tables, Ann. Math. Statist., vol. 41, 1970, pp. 539-550.
[8] Paz, A., Introduction to Probabilistic Automata, Academic Press, Inc., London, 1971.
[9] Rabin, M., Probabilistic automata, Information and Control, vol. 6, 1963, pp. 230-245.
[10] Siegelmann, H. T., Neural Networks and Analog Computation: Beyond the Turing Limit, Birkhauser, Boston, 1999.
[11] Siegelmann, H. T. and Roitershtein, A., On weakly ergodic computational systems, 1999, submitted.
[12] Siegelmann, H. T., Roitershtein, A., and Ben-Hur, A., On noisy computational systems, Discrete Applied Mathematics, 1999, accepted.
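The contraction conditions in Theorems 7 and 8 above can be checked numerically on a finite state space. A minimal Python sketch (the two kernels and mixture weights are invented for illustration): it computes the Dobrushin coefficient δ(P) = ½ max_{x,x'} ||P(x,·) − P(x',·)||₁ of a row-stochastic matrix, and verifies the convexity bound δ(Σ_i ξ_i P_i) ≤ Σ_i ξ_i δ(P_i), so a mixture as in Eq. (9) with one strictly contracting component (δ ≤ 1 − c) remains contracting.

```python
# Dobrushin ergodicity coefficient of a row-stochastic matrix P:
# delta(P) = 1/2 * max over row pairs (i, j) of sum_k |P[i][k] - P[j][k]|.
def dobrushin(P):
    n = len(P)
    return max(
        0.5 * sum(abs(P[i][k] - P[j][k]) for k in range(len(P[0])))
        for i in range(n) for j in range(n)
    )

# Two component kernels: P1 is strictly contracting (delta(P1) < 1),
# while P2 is a deterministic swap (delta(P2) = 1, not contracting).
P1 = [[0.5, 0.5], [0.4, 0.6]]
P2 = [[0.0, 1.0], [1.0, 0.0]]
xi = (0.7, 0.3)                       # activation probabilities, as in Eq. (9)

mix = [[xi[0] * P1[i][k] + xi[1] * P2[i][k] for k in range(2)]
       for i in range(2)]

d1, d2, dm = dobrushin(P1), dobrushin(P2), dobrushin(mix)
# delta is convex in the kernel, so the mixture inherits the contraction:
assert dm <= xi[0] * d1 + xi[1] * d2 + 1e-12
assert dm < 1.0
```

The same check extends to any finite alphabet by taking the maximum of δ over the input letters u.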
|
1999
|
88
|
1,740
|
The Nonnegative Boltzmann Machine

Oliver B. Downs, Hopfield Group, Schultz Building, Princeton University, Princeton, NJ 08544, obdowns@princeton.edu
David J. C. MacKay, Cavendish Laboratory, Madingley Road, Cambridge, CB3 0HE, United Kingdom, mackay@mrao.cam.ac.uk
Daniel D. Lee, Bell Laboratories, Lucent Technologies, 700 Mountain Ave., Murray Hill, NJ 07974, ddlee@bell-labs.com

Abstract

The nonnegative Boltzmann machine (NNBM) is a recurrent neural network model that can describe multimodal nonnegative data. Application of maximum likelihood estimation to this model gives a learning rule that is analogous to that of the binary Boltzmann machine. We examine the utility of the mean field approximation for the NNBM, and describe how Monte Carlo sampling techniques can be used to learn its parameters. Reflective slice sampling is particularly well-suited for this distribution, and can efficiently be implemented to sample the distribution. We illustrate learning of the NNBM on a translationally invariant distribution, as well as on a generative model for images of human faces.

Introduction

The multivariate Gaussian is the most elementary distribution used to model generic data. It represents the maximum entropy distribution under the constraint that the mean and covariance matrix of the distribution match those of the data. For the case of binary data, the maximum entropy distribution that matches the first and second order statistics of the data is given by the Boltzmann machine [1]. The probability of a particular state in the Boltzmann machine is given by the exponential form:

P({s_i = ±1}) = (1/Z) exp( −(1/2) Σ_{ij} s_i A_{ij} s_j + Σ_i b_i s_i ).   (1)

Interpreting Eq. 1 as a neural network, the parameters A_{ij} represent symmetric, recurrent weights between the different units in the network, and the b_i represent local biases.
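For a small network, Eq. 1 can be evaluated by brute-force enumeration. A minimal Python sketch (the weights A and biases b below are arbitrary illustrative values, not from the text):

```python
from itertools import product
from math import exp

# Evaluate Eq. (1) for a tiny binary Boltzmann machine by enumerating all
# states (feasible only for small n).  A and b are illustrative examples.
A = [[0.0, 0.5], [0.5, 0.0]]   # symmetric weights, zero self-coupling
b = [0.2, -0.1]                # local biases
n = len(b)

def energy(s):
    # E(s) = 1/2 * sum_ij s_i A_ij s_j - sum_i b_i s_i, so P(s) ~ exp(-E(s))
    return 0.5 * sum(s[i] * A[i][j] * s[j] for i in range(n) for j in range(n)) \
           - sum(b[i] * s[i] for i in range(n))

states = list(product([-1, +1], repeat=n))
Z = sum(exp(-energy(s)) for s in states)          # partition function
P = {s: exp(-energy(s)) / Z for s in states}

assert abs(sum(P.values()) - 1.0) < 1e-12         # properly normalized
```

With the positive coupling chosen here, anti-aligned unit pairs get lower energy and hence higher probability, which the enumeration makes directly visible.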
Unfortunately, these parameters are not simply related to the observed mean and covariance of the data as they are for the normal Gaussian. Instead, they need to be adapted using an iterative learning rule that involves difficult sampling from the binary distribution [2].

Figure 1: a) Probability density and b) shaded contour plot of a two-dimensional competitive NNBM distribution. The energy function E(x) for this distribution contains a saddle point and two local minima, which generates the observed multimodal distribution.

The Boltzmann machine can also be generalized to continuous and nonnegative variables. In this case, the maximum entropy distribution for nonnegative data with known first and second order statistics is described by a distribution previously called the "rectified Gaussian" distribution [3]:

p(x) = (1/Z) exp[−E(x)]  if x_i ≥ 0 ∀i,  and  p(x) = 0  if any x_i < 0,   (2)

where the energy function E(x) and normalization constant Z are:

E(x) = (1/2) x^T A x − b^T x,   (3)
Z = ∫_{x≥0} dx exp[−E(x)].   (4)

The properties of this nonnegative Boltzmann machine (NNBM) distribution differ quite substantially from those of the normal Gaussian. In particular, the presence of the nonnegativity constraints allows the distribution to have multiple modes. For example, Fig. 1 shows a two-dimensional NNBM distribution with two separate maxima located against the rectifying axes. Such a multimodal distribution would be poorly modelled by a single normal Gaussian. In this submission, we discuss how a multimodal NNBM distribution can be learned from nonnegative data. We show the limitations of mean field approximations for this distribution, and illustrate how recent developments in efficient sampling techniques for continuous belief networks can be used to tune the weights of the network [4].
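The multimodality induced by the nonnegativity constraints is easy to verify numerically. A minimal sketch of Eqs. 2-3 in two dimensions (A and b are illustrative values chosen so that A is indefinite; the figure's actual parameters are not given in the text):

```python
from math import exp

# Energy and unnormalized density of a 2-D "competitive" NNBM on x >= 0.
# An indefinite A is what allows two modes pinned against the axes.
A = [[1.0, 1.5], [1.5, 1.0]]
b = [1.0, 1.0]

def E(x):
    quad = sum(x[i] * A[i][j] * x[j] for i in range(2) for j in range(2))
    return 0.5 * quad - sum(b[i] * x[i] for i in range(2))

def p_unnorm(x):                      # Eq. (2), up to the constant 1/Z
    return 0.0 if min(x) < 0 else exp(-E(x))

# Two local minima on the axes and a saddle on the diagonal (A x = b):
saddle = (1.0 / 2.5, 1.0 / 2.5)
assert E((1.0, 0.0)) < E(saddle) and E((0.0, 1.0)) < E(saddle)
assert p_unnorm((-0.1, 0.5)) == 0.0   # density vanishes off the orthant
```

Because the two axis points have strictly lower energy than the interior stationary point, the density has two separate maxima, exactly the situation a single Gaussian cannot represent.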
Specific examples of learning are demonstrated on a translationally invariant distribution, as well as on a generative model for face images.

Maximum Likelihood

The learning rule for the NNBM can be derived by maximizing the log likelihood of the observed data under Eq. 2. Given a set of nonnegative vectors {x^μ}, where μ = 1..M indexes the different examples, the log likelihood is:

L = (1/M) Σ_{μ=1}^{M} log P(x^μ) = −(1/M) Σ_{μ=1}^{M} E(x^μ) − log Z.   (5)

Taking the derivatives of Eq. 5 with respect to the parameters A and b gives:

∂L/∂A_{ij} = −(1/2) ( ⟨x_i x_j⟩_c − ⟨x_i x_j⟩_f ),   (6)
∂L/∂b_i = ⟨x_i⟩_c − ⟨x_i⟩_f,   (7)

where the subscript "c" denotes a "clamped" average over the data, and the subscript "f" denotes a "free" average over the NNBM distribution:

⟨f(x)⟩_c = (1/M) Σ_{μ=1}^{M} f(x^μ),   (8)
⟨f(x)⟩_f = ∫_{x≥0} dx P(x) f(x).   (9)

These derivatives are used to define a gradient ascent learning rule for the NNBM that is similar to that of the binary Boltzmann machine. The contrast between the clamped and free covariance matrices is used to update the interactions A, while the difference between the clamped and free means is used to update the local biases b.

Mean field approximation

The major difficulty with this learning algorithm lies in evaluating the averages ⟨x_i x_j⟩_f and ⟨x_i⟩_f. Because it is analytically intractable to calculate these free averages exactly, approximations are necessary for learning. Mean field approximations have previously been proposed as a deterministic alternative for learning in the binary Boltzmann machine, although there have been contrasting views on their validity [5, 6]. Here, we investigate the utility of mean field theory for approximating the NNBM distribution. The mean field equations are derived by approximating the NNBM distribution in Eq. 2 with the factorized form:

Q(x) = ∏_i Q_{τ_i}(x_i) = ∏_i (1/(τ_i γ!)) (x_i/τ_i)^γ e^{−x_i/τ_i},
(10)

where the different marginal densities Q_{τ_i}(x_i) are characterized by the parameters τ_i with a fixed constant γ. The product of γ-distributions is the natural factorizable distribution for nonnegative random variables. The optimal mean field parameters τ_i are determined by minimizing the Kullback-Leibler divergence between the NNBM distribution and the factorized distribution:

D_KL(Q||P) = ∫ dx Q(x) log[ Q(x) / P(x) ] = ⟨E(x)⟩_{Q(x)} + log Z − H(Q).   (11)

Finding the minimum of Eq. 11 by setting its derivatives with respect to the mean field parameters τ_i to zero gives the simple mean field equations:

A_{ii} τ_i = (1/(γ+2)) [ b_i − (γ+1) Σ_{j≠i} A_{ij} τ_j + 1/((γ+1) τ_i) ].   (12)

Figure 2: a) Slice sampling in one dimension. Given the current sample point x_i, a height y ∈ [0, P(x_i)] is randomly chosen. This defines a slice S = {x : P(x) ≥ y} in which a new x_{i+1} is chosen. b) For a multidimensional slice S, the new point x_{i+1} is chosen using ballistic dynamics with specular reflections off the interior boundaries of the slice.

These equations can then be solved self-consistently for the τ_i. The "free" statistics of the NNBM are then replaced by their statistics under the factorized distribution Q(x):

⟨x_i⟩_f ≈ (γ+1) τ_i,   ⟨x_i x_j⟩_f ≈ [ (γ+1)² + (γ+1) δ_{ij} ] τ_i τ_j.   (13)

The fidelity of this approximation is determined by how well the factorized distribution Q(x) models the NNBM distribution. Unfortunately, for distributions such as the one shown in Fig. 3, the mean field approximation is quite different from the true multimodal NNBM distribution. This suggests that the naive mean field approximation is inadequate for learning in the NNBM, and in fact attempts to use this approximation fail to learn the examples given in the following sections. However, the mean field approximation can still be used to initialize the parameters to reasonable values before using the sampling techniques that are described below.
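The contrastive update implied by Eqs. 6-7 can be sketched directly once clamped and free sample sets are available. The helper below is our own illustration (function name, learning rate, and toy values are invented); it performs one gradient-ascent step on A and b:

```python
# One gradient-ascent step for the NNBM from Eqs. (6)-(7): contrast the
# clamped (data) statistics with the free (model-sample) statistics.
# `clamped` and `free` are lists of nonnegative vectors.
def nnbm_step(A, b, clamped, free, lr=0.1):
    n = len(b)

    def stats(xs):
        m = [sum(x[i] for x in xs) / len(xs) for i in range(n)]
        C = [[sum(x[i] * x[j] for x in xs) / len(xs) for j in range(n)]
             for i in range(n)]
        return m, C

    (mc, Cc), (mf, Cf) = stats(clamped), stats(free)
    # dL/dA_ij = -1/2 (<x_i x_j>_c - <x_i x_j>_f);  dL/db_i = <x_i>_c - <x_i>_f
    A2 = [[A[i][j] - lr * 0.5 * (Cc[i][j] - Cf[i][j]) for j in range(n)]
          for i in range(n)]
    b2 = [b[i] + lr * (mc[i] - mf[i]) for i in range(n)]
    return A2, b2

# If the model samples already match the data, the step is a no-op:
data = [[1.0, 0.0], [0.0, 1.0]]
A2, b2 = nnbm_step([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0], data, data)
assert A2 == [[1.0, 0.0], [0.0, 1.0]] and b2 == [0.0, 0.0]
```

When the free covariance exceeds the clamped covariance, the corresponding A_{ij} grow, raising the energy of the over-represented configurations, which is the expected direction for maximum likelihood.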
Monte Carlo sampling

A more direct approach to calculating the "free" averages in Eqs. 6-7 is to approximate them numerically. This can be accomplished by using Monte Carlo sampling to generate a representative set of points that sufficiently approximate the statistics of the continuous distribution. In particular, Markov chain Monte Carlo methods employ an iterative stochastic dynamics whose equilibrium distribution converges to that of the desired distribution [4]. For the binary Boltzmann machine, such sampling dynamics involves random "spin flips" which change the value of a single binary component. Unfortunately, these single-component dynamics are easily caught in local energy minima, and can converge very slowly for large systems. This makes sampling the binary distribution very difficult, and more specialized computational techniques such as simulated annealing, cluster updates, etc., have been developed to try to circumvent this problem. For the NNBM, the use of continuous variables makes it possible to investigate different stochastic dynamics in order to sample the distribution more efficiently. We first experimented with Gibbs sampling with ordered overrelaxation [7], but found that the required inversion of the error function was too computationally expensive. Instead, the recently developed method of slice sampling [8] seems particularly well-suited for implementation in the NNBM. The basic idea of the slice sampling algorithm is shown in Fig. 2. Given a sample point x_i, a random y ∈ [0, P(x_i)] is first uniformly chosen. Then a slice is defined as the connected set of points S = {x : P(x) ≥ y}.

Figure 3: Contours of the two-dimensional competitive NNBM distribution overlaid by a) γ = 1 mean field approximation and b) 500 reflected slice samples.

The new point x_{i+1} ∈ S is then chosen
The distribution of Xn for large n can be shown to converge to the desired density P(x). Now, for the NNBM, solving the boundary points along a particular direction in a given slice is quite simple, since it only involves solving the roots of a quadratic equation. In order to efficiently choose a new point within a particular slice, reflective "billiard ball" dynamics are used. A random initial velocity is chosen, and the new point is evolved by travelling a certain distance from the current point while specularly reflecting from the boundaries of the slice. Intuitively, the reversibility of these reflections allows the dynamics to satisfy detailed balance. In Fig. 3, the mean field approximation and reflective slice sampling are used to model the two-dimensional competitive NNBM distribution. The poor fit of the mean field approximation is apparent from the unimodality of the factorized density, while the sample points from the reflective slice sampling algorithm are more representative of the underlying NNBM distribution. For higher dimensional data, the mean field approximation becomes progressively worse. It is therefore necessary to implement the numerical slice sampling algorithm in order to accurately approximate the NNBM distribution. Translationally invariant model Ben-Yishai et al. have proposed a model for orientation tuning in primary visual cortex that can be interpreted as a cooperative NNBM distribution [9]. In the absence of visual input, the firing rates of N cortical neurons are described as minimizing the energy function E (x) with parameters: 1 € 27r 8ij + N - N cos( N Ii - jl) (14) 1 This distribution was used to test the NNBM learning algorithm. First, a large set of N = 25 dimensional nonnegative training vectors were generated by sampling the distribution with (3 = 50 and € = 4. 
Using these samples as training data, the A and b parameters were learned from a unimodal initialization by evolving the training vectors using reflective slice sampling, and these evolved vectors were used to calculate the "free" averages in Eqs. 6-7. The A and b estimates were then updated, and this procedure was iterated until the evolved averages matched those of the training data. The learned A and b parameters were then found to almost exactly match the original form in Eq. 14. Some representative samples from the learned NNBM distribution are shown in Fig. 4.

Figure 4: Representative samples taken from a NNBM after training to learn a translationally invariant cooperative distribution with β = 50 and ε = 4.

Figure 5: a) Morphing of a face image by successive sampling from the learned NNBM distribution. b) Samples generated from a normal Gaussian.

Generative model for faces

We have also used the NNBM to learn a generative model for images of human faces. The NNBM is used to model the correlations in the coefficients of the nonnegative matrix factorization (NMF) of the face images [10]. NMF reduces the dimensionality of nonnegative data by decomposing the face images into parts corresponding to eyes, noses, ears, etc. Since the different parts are coactivated in reconstructing a face, the activations of these parts contain significant correlations that need to be captured by a generative model. Here we briefly demonstrate how the NNBM is able to learn these correlations. Sampling from the NNBM stochastically generates coefficients which can be displayed graphically as face images. Fig. 5 shows some representative face images as the reflective slice sampling dynamics evolves the coefficients. Also displayed in the figure are the analogous images generated if a normal Gaussian is used to model the correlations instead.
It is clear that the nonnegativity constraints and multimodal nature of the NNBM result in samples which are cleaner and more distinct as faces.

Discussion

Here we have introduced the NNBM as a recurrent neural network model that is able to describe multimodal nonnegative data. Its application is made practical by the efficiency of the slice sampling Monte Carlo method. The learning algorithm incorporates numerical sampling from the NNBM distribution and is able to learn from observations of nonnegative data. We have demonstrated the application of NNBM learning to a cooperative, translationally invariant distribution, as well as to real data from images of human faces. Extensions to the present work include incorporating hidden units into the recurrent network. The addition of hidden units implies modelling certain higher order statistics in the data, and requires calculating averages over these hidden units. We anticipate the marginal distribution over these units to be most commonly unimodal, and hence mean field theory should be valid for approximating these averages. Another possible extension involves generalizing the NNBM to model continuous data confined within a certain range, i.e., 0 ≤ x_i ≤ 1. In this situation, slice sampling techniques would also be used to efficiently generate representative samples. In any case, we hope that this work stimulates more research into using these types of recurrent neural networks to model complex, multimodal data.

Acknowledgements

The authors acknowledge useful discussions with John Hopfield, Sebastian Seung, Nicholas Socci, and Gayle Wittenberg, and are indebted to Haim Sompolinsky for pointing out the maximum entropy interpretation of the Boltzmann machine. This work was funded by Bell Laboratories, Lucent Technologies. O. B. Downs is grateful for the moral support, and open ears and minds, of Beth Brittle, Gunther Lenz, and Sandra Scheitz.
References

[1] Hinton, GE & Sejnowski, TJ (1983). Optimal perceptual inference. IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, 448-453.
[2] Ackley, DH, Hinton, GE, & Sejnowski, TJ (1985). A learning algorithm for Boltzmann machines. Cognitive Science 9, 147-169.
[3] Socci, ND, Lee, DD, & Seung, HS (1998). The rectified Gaussian distribution. Advances in Neural Information Processing Systems 10, 350-356.
[4] MacKay, DJC (1998). Introduction to Monte Carlo methods. Learning in Graphical Models. Kluwer Academic Press, NATO Science Series, 175-204.
[5] Galland, CC (1993). The limitations of deterministic Boltzmann machine learning. Network 4, 355-380.
[6] Kappen, HJ & Rodriguez, FB (1997). Mean field approach to learning in Boltzmann machines. Pattern Recognition in Practice, Amsterdam.
[7] Neal, RM (1995). Suppressing random walks in Markov chain Monte Carlo using ordered overrelaxation. Technical Report 9508, Dept. of Statistics, University of Toronto.
[8] Neal, RM (1997). Markov chain Monte Carlo methods based on "slicing" the density function. Technical Report 9722, Dept. of Statistics, University of Toronto.
[9] Ben-Yishai, R, Bar-Or, RL, & Sompolinsky, H (1995). Theory of orientation tuning in visual cortex. Proc. Nat. Acad. Sci. USA 92, 3844-3848.
[10] Lee, DD, & Seung, HS (1999). Learning the parts of objects by non-negative matrix factorization. Nature 401, 788-791.
|
1999
|
89
|
1,741
|
Reinforcement Learning Using Approximate Belief States

Andres Rodriguez*, Artificial Intelligence Center, SRI International, 333 Ravenswood Avenue, Menlo Park, CA 94025, rodriguez@ai.sri.com
Ronald Parr, Daphne Koller, Computer Science Department, Stanford University, Stanford, CA 94305, {parr,koller}@cs.stanford.edu

Abstract

The problem of developing good policies for partially observable Markov decision problems (POMDPs) remains one of the most challenging areas of research in stochastic planning. One line of research in this area involves the use of reinforcement learning with belief states, probability distributions over the underlying model states. This is a promising method for small problems, but its application is limited by the intractability of computing or representing a full belief state for large problems. Recent work shows that, in many settings, we can maintain an approximate belief state, which is fairly close to the true belief state. In particular, great success has been shown with approximate belief states that marginalize out correlations between state variables. In this paper, we investigate two methods of full belief state reinforcement learning and one novel method for reinforcement learning using factored approximate belief states. We compare the performance of these algorithms on several well-known problems from the literature. Our results demonstrate the importance of approximate belief state representations for large problems.

1 Introduction

The Markov Decision Process (MDP) framework [2] is a good way of mathematically formalizing a large class of sequential decision problems involving an agent that is interacting with an environment. Generally, an MDP is defined in such a way that the agent has complete knowledge of the underlying state of the environment. While this formulation poses very challenging research problems, it is still a very optimistic modeling assumption that is rarely realized in the real world.
Most of the time, an agent must face uncertainty or incompleteness in the information available to it. An extension of this formalism that generalizes MDPs to deal with this uncertainty is given by partially observable Markov Decision Processes (POMDPs) [1, 11], which are the focus of this paper. Solving a POMDP means finding an optimal behavior policy π* that maps from the agent's available knowledge of the environment, its belief state, to actions. This is usually done through a function, V, that assigns values to belief states. In the fully observable (MDP) case, a value function can be computed efficiently for reasonably sized domains. The situation is somewhat different for POMDPs, where finding the optimal policy is PSPACE-hard in the number of underlying states [6]. To date, the best known exact algorithms to solve POMDPs are taxed by problems with a few dozen states [5]. There are several general approaches to approximating POMDP value functions using reinforcement learning methods, and space does not permit a full review of them. The approach upon which we focus is the use of a belief state as a probability distribution over underlying model states. This is in contrast to methods that manipulate augmented state descriptions with finite memory [9, 12] and methods that work directly with observations [8]. The main advantage of a probability distribution is that it summarizes all of the information necessary to make optimal decisions [1]. The main disadvantages are that a model is required to compute a belief state, and that the task of representing and updating belief states in large problems is itself very difficult. In this paper, we do not address the problem of obtaining a model; our focus is on the most effective way of using a model.

* The work presented in this paper was done while the first author was at Stanford University.
Even with a known model, reinforcement learning techniques can be quite competitive with exact methods for solving POMDPs [10]. Hence, we focus on extending the model-based reinforcement learning approach to larger problems through the use of approximate belief states. There are risks to such an approach: inaccuracies introduced by belief state approximation could give an agent a hopelessly inaccurate perception of its relationship to the environment. Recent work [4], however, presents an approximate tracking approach, and provides theoretical guarantees that the result of this process cannot stray too far from the exact belief state. In this approach, rather than maintaining an exact belief state, which is infeasible in most realistically large problems, we maintain an approximate belief state, usually from some restricted class of distributions. As the approximate belief state is updated (due to actions and observations), it is continuously projected back down into this restricted class. Specifically, we use decomposed belief states, where certain correlations between state variables are ignored. In this paper we present empirical results comparing three approaches to belief state reinforcement learning. The most direct approach is the use of a neural network with one input for each element of the full belief state. The second is the SPOVA method [10], which uses a function approximator designed for POMDPs, and the third is the use of a neural network with an approximate belief state as input. We present results for several well-known problems in the POMDP literature, demonstrating that while belief state approximation is ill-suited for some problems, it is an effective means of attacking large problems.

2 Basic Framework and Algorithms

A POMDP is defined as a tuple < S, A, O, T, R, O > of three sets and three functions. S is a set of states, A is a set of actions, and O is a set of observations.
The transition function T : S × A → Π(S) specifies how the actions affect the state of the world. It can be viewed as T(s_i, a, s_j) = P(s_j | a, s_i), the probability that the agent reaches state s_j if it currently is in state s_i and takes action a. The reward function R : S × A → ℝ determines the immediate reward received by the agent. The observation model O : S × A → Π(O) determines what the agent perceives, depending on the environment state and the action taken. O(s, a, o) = P(o | a, s) is the probability that the agent observes o when it is in state s, having taken the action a.

2.1 POMDP belief states

A belief state, b, is defined as a probability distribution over all states s ∈ S, where b(s) represents the probability that the environment is in state s. After taking action a and observing o, the belief state is updated using Bayes rule:

b'(s') = P(s' | a, o, b) = O(s', a, o) Σ_{s_i∈S} T(s_i, a, s') b(s_i) / ( Σ_{s_j∈S} O(s_j, a, o) Σ_{s_i∈S} T(s_i, a, s_j) b(s_i) ).

The size of an exact belief state is equal to the number of states in the model. For large problems, maintaining and manipulating an exact belief state can be problematic even if the transition model has a compact representation [4]. For example, suppose the state space is described via a set of random variables X = {X_1, ..., X_n}, where each X_i takes on values in some finite domain Val(X_i); a particular state s defines a value x_i ∈ Val(X_i) for each variable X_i. The full belief state representation will be exponential in n. We use the approximation method analyzed by Boyen and Koller [4], where the variables are partitioned into a set of disjoint clusters C_1 ... C_k and belief functions b_1 ... b_k are maintained over the variables in each cluster. At each time step, we compute the exact belief state, then compute the individual belief functions by marginalizing out inter-cluster correlations.
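The Bayes-rule belief update above is straightforward to implement for tabular models. A minimal sketch (the transition and observation tables, and their index layout, are invented toy values):

```python
# Exact POMDP belief update (the Bayes rule above) for tabular T and O.
# T[s][a][s2] = P(s2 | s, a);  O[s2][a][o] = P(o | a, s2).
def belief_update(b, a, o, T, O):
    n = len(b)
    unnorm = [O[s2][a][o] * sum(T[s][a][s2] * b[s] for s in range(n))
              for s2 in range(n)]
    z = sum(unnorm)                    # P(o | a, b); zero means o is impossible
    return [p / z for p in unnorm]

# Tiny 2-state, 1-action, 2-observation example (numbers are illustrative):
T = [[[0.9, 0.1]], [[0.2, 0.8]]]       # sticky transitions
O = [[[0.8, 0.2]], [[0.3, 0.7]]]       # observation 0 favors state 0
b1 = belief_update([0.5, 0.5], 0, 0, T, O)

assert abs(sum(b1) - 1.0) < 1e-12      # posterior is a proper distribution
assert b1[0] > 0.5                     # seeing o=0 shifts belief toward state 0
```

The denominator is exactly P(o | a, b), so the same routine also yields the observation likelihood used by the value-function backup later in the section.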
For some assignment c_i to the variables in C_i, we obtain b_i(c_i) = Σ_y P(c_i, y), where the sum runs over assignments y to the variables outside C_i. An approximation of the original, full belief state is then reconstructed as b(s) = ∏_{i=1}^{k} b_i(c_i). By representing the belief state as a product of marginal probabilities, we are projecting the belief state into a reduced space. While a full belief state representation for n state variables would be exponential in n, the size of the decomposed belief state representation is exponential in the size of the largest cluster and additive in the number of clusters. For processes that mix rapidly enough, the errors introduced by the approximation stay bounded over time [4]. As discussed by Boyen and Koller [4], this type of decomposed belief state is particularly suitable for processes that can themselves be factored and represented as a dynamic Bayesian network [3]. In such cases we can avoid ever representing an exponentially sized belief state. However, the approach is fully general, and can be applied in any setting where the state is defined as an assignment of values to some set of state variables.

2.2 Value functions and policies for POMDPs

If one thinks of a POMDP as an MDP defined over belief states, then the well-known fixed point equations for MDPs still hold. Specifically,

V*(b) = max_a [ Σ_{s∈S} b(s) R(s, a) + γ Σ_{o∈O} P(o | a, b) V*(b') ],

where γ is the discount factor and b' (defined above) is the next belief state. The optimal policy is determined by the maximizing action for each belief state. In principle, we could use Q-learning or value iteration directly to solve POMDPs. The main difficulty lies in the fact that there are uncountably many belief states, making a tabular representation of the value function impossible. Exact methods for POMDPs use the fact that finite horizon value functions are piecewise-linear and convex [11], ensuring a finite representation.
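The cluster marginalization b_i(c_i) = Σ_y P(c_i, y) and the product-form reconstruction described at the start of this section can be sketched as follows (the joint distribution is an invented example over three binary variables):

```python
from itertools import product

# Project a joint belief over binary variables (X1, X2, X3) onto the product
# of cluster marginals {X1} x {X2, X3}, in the style of the Boyen-Koller
# approximation.  The joint below is an arbitrary illustrative distribution.
joint = {s: p for s, p in zip(product([0, 1], repeat=3),
                              [0.20, 0.05, 0.10, 0.15, 0.05, 0.10, 0.05, 0.30])}

def marginal(joint, idxs):
    m = {}
    for s, p in joint.items():
        key = tuple(s[i] for i in idxs)
        m[key] = m.get(key, 0.0) + p
    return m

b1 = marginal(joint, [0])        # cluster C1 = {X1}
b2 = marginal(joint, [1, 2])     # cluster C2 = {X2, X3}

# Reconstructed product-form belief  b~(s) = b1(x1) * b2(x2, x3):
approx = {s: b1[(s[0],)] * b2[(s[1], s[2])] for s in joint}

assert abs(sum(b1.values()) - 1.0) < 1e-12
assert abs(sum(approx.values()) - 1.0) < 1e-12   # product form stays normalized
```

The projection discards only the inter-cluster correlations: the full joint needs 2³ numbers, while the decomposed form needs 2 + 2² per time step.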
While finite, this representation can grow exponentially with the horizon, making exact approaches impractical in most settings. Function approximation is an attractive alternative to exact methods. We implement function approximation using a set of parameterized Q-functions, where Q_a(b, W_a) is the reward-to-go for taking action a in belief state b. A value function is reconstructed from the Q-functions as V(b) = max_a Q_a(b, W_a), and the update rule for W_a when a transition from state b to b' under action a with reward R is:

W_a ← W_a + α [ R + γ V(b') − Q_a(b, W_a) ] ∇_{W_a} Q_a(b, W_a).

2.3 Function approximation architectures

We consider two types of function approximators. The first is a two-layer feedforward neural network with sigmoidal internal units and a linear outermost layer. We used one network for each Q-function. For full belief state reinforcement learning, we used networks with |S| inputs (one for each component of the belief state) and √|S| hidden nodes. For approximate belief state reinforcement learning, we used networks with one input for each assignment to the variables in each cluster. If we had two clusters, for example, each with 3 binary variables, then our Q networks would each have 2³ + 2³ = 16 inputs. We kept the number of hidden nodes for each network at the square root of the number of inputs. Our second function approximator is SPOVA [10], which is a soft max function designed to exploit the piecewise-linear structure of POMDP value functions. A SPOVA Q-function maintains a set of weight vectors w_{a,1}, ..., w_{a,l} and is evaluated as:

Q_a(b, W_a) = ( Σ_{i=1}^{l} (b · w_{a,i})^k )^{1/k}.

In practice, a small value of k (usually 1.2) is adopted at the start of learning, making the function very smooth. This is increased during learning until SPOVA closely approximates a PWLC function of b (usually k = 8). We maintained one SPOVA Q-function for each action and assigned √|S| vectors to each function. This gave O(|A| |S| √|S|) parameters to both SPOVA and the full belief state neural network.
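The SPOVA-style soft max can be sketched in a few lines. This is a hedged illustration of the k-norm form given above (the weight vectors and belief are invented, and the original system's handling of negative dot products may differ):

```python
# SPOVA-style smooth-max evaluation of a Q-function from a set of weight
# vectors: Q(b) = ( sum_i (b . w_i)^k )^(1/k), assuming nonnegative dots.
def spova_q(b, W, k):
    dots = [sum(bi * wi for bi, wi in zip(b, w)) for w in W]
    return sum(d ** k for d in dots) ** (1.0 / k)

W = [[1.0, 0.0], [0.0, 1.0]]          # two linear value vectors
b = [0.7, 0.3]                        # a belief over two states

soft = spova_q(b, W, 1.2)             # smooth early in learning
hard = spova_q(b, W, 8.0)             # near max(0.7, 0.3) late in learning
assert hard < soft                    # larger k tightens toward the hard max
assert abs(spova_q(b, W, 64.0) - max(b)) < 0.01
```

Annealing k is what lets gradient updates move smoothly between the linear pieces while the final function still approximates the piecewise-linear-convex shape of POMDP value functions.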
3 Empirical Results

We present results on several problems from the POMDP literature and present an extension to a known machine repair problem that is designed to highlight the effects of approximate belief states. Our results are presented in the form of performance graphs, where the value of the current policy is obtained by taking a snapshot of the value function and measuring the discounted sum of reward obtained by the resulting policy in simulation. We use "NN" to refer to the neural network reinforcement learner trained with the full belief state and the term "Decomposed NN" to refer to the neural network trained with an approximate belief state which is decomposed as a product of marginals. We used a simple exploration strategy, starting with a 0.1 probability of acting randomly, which decreased linearly to 0.01. Due to space limitations, we are not able to describe each model in detail. However, we used publicly available model description files from [5].¹ The table in Section 3.4 shows the running times of the different methods. These are generally much lower than what would be required to solve these problems using exact methods.

3.1 Grid Worlds

We begin by considering two grid worlds, a 4 × 3 world from [10] and a 60-state world from [7]. The 4 × 3 world contains only 11 states and does not have a natural decomposition into state variables, so we compared SPOVA only with the full belief state neural network.

¹ See http://www.cs.brown.edu/research/ai/pomdp/index.html. Note that this file format specifies a starting distribution for each problem and our results are reported with respect to this starting distribution.
Figure 1: a) 4 × 3 grid world, b) 60-state maze.

The experimental results, which are averaged over 25 training runs and 100 simulations per policy snapshot, are presented in Figure 1a. They show that SPOVA learns faster than the neural network, but that the network does eventually catch up. The 60-state robot navigation problem [7] was amenable to a decomposed belief state approximation since its underlying state space comes from the product of 15 robot positions and 4 robot orientations. We decomposed the belief state into two clusters, one containing a position state variable and the other containing an orientation state variable. Figure 1b shows results in which SPOVA again dominates. The decomposed NN has trouble with this problem because the effects of position and orientation on the value function are not easily decoupled, i.e., the effect of orientation on value is highly state-dependent. This meant that the decomposed NN was forced to learn a much more complicated function of its inputs than the function learned by the network using the full belief state.

3.2 Aircraft Identification

Aircraft identification is another problem studied in Cassandra's thesis. It includes sensing actions for identifying incoming aircraft and actions for attacking threatening aircraft. Attacks against friendly aircraft are penalized, as are failures to intercept hostile aircraft. This is a challenging problem because there is tension in deciding between the various sensors. Better sensors tend to make the base more visible to hostile aircraft, while more stealthy sensors are less accurate. The sensors give information about both the aircraft's type and distance from the base. The state space of this problem is comprised of three main components:
aircraft type: either the aircraft is a friend or it is a foe; distance: how far the aircraft currently is from the base, discretized into an adjustable number, d, of distinct distances; visibility: a measure of how visible the base is to the approaching aircraft, which is discretized into 5 levels. We chose d = 10, giving this problem 100 states. The problem has a natural decomposition into state variables for aircraft type, distance and base visibility. The results for the three algorithms are shown in Figure 2(a). This is the first problem where we start to see an advantage from decomposing the belief state. For the decomposed NN, we used three separate clusters, one for each variable, which meant that the network had only 17 inputs. Not only did the simpler network learn faster, but it learned a better policy overall. We believe that this illustrates an important point: even though SPOVA and the full belief state neural network may be more expressive than the decomposed NN, the decomposed NN is able to search the space of functions it can represent much more efficiently due to the reduced number of parameters. Reinforcement Learning Using Approximate Belief States 1041 [Figure 2 plot residue removed; legends: SPOVA, NN, Decomposed NN; x-axis: Iterations] Figure 2: a) Aircraft Identification, b) Machine Maintenance 3.3 Machine Maintenance Our last problem was the machine maintenance problem from Cassandra's database. The problem assumes that there is a machine with a certain number of components. The quality of the parts produced by the machine is determined by the condition of the components.
Each component can be in one of four conditions: good: the component is in good condition; fair: the component has some amount of wear, and would benefit from some maintenance; bad: the part is very worn and could use repairs; broken: the part is broken and must be replaced. The status of the components is observable only if the machine is completely disassembled. Figure 2(b) shows performance results for the 4-component version of this problem. At 256 states, it was at the maximum size for which a full belief state approach was manageable. However, the belief state for this problem decomposes naturally into clusters describing the status of each component, creating a decomposed belief state with just four components. The graph shows the dominance of this simple decomposition approach. We believe that this problem clearly demonstrates the advantage of belief state decomposition: the decomposed NN learns a function of 16 inputs in a fraction of the time it takes for the full net or SPOVA to learn a lower-quality function of 256 inputs. 3.4 Running Times The table below shows the running times for the different problems presented above. These are generally much less than what would be required to solve these problems exactly. The full NN and SPOVA are roughly comparable, but the decomposed neural network is considerably faster. We did not exploit any problem structure in our approximate belief state computation, so the time spent computing belief states is actually larger for the decomposed NN. The savings comes from the reduction in the number of parameters used, which reduced the number of partial derivatives computed. We expect the savings to be significantly more substantial for processes represented in a factored way [3], as the approximate belief state propagation algorithm can also take advantage of this additional structure.
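The product-of-marginals representation that gives the decomposed NN its advantage can be sketched directly. This is an illustrative sketch in the spirit of the approximation of [4], not the authors' code; the per-component transition matrix and observation likelihoods below are hypothetical stand-ins.

```python
import numpy as np

def decomposed_belief_update(marginals, transitions, obs_likelihoods):
    """Update an approximate belief kept as a product of marginals.

    marginals[i]      : current marginal over component i's states
    transitions[i]    : per-component transition matrix T[s, s']
    obs_likelihoods[i]: P(observation | component i's state), as a vector

    Each cluster is propagated and conditioned independently, then
    renormalized; the product of the marginals approximates the joint.
    """
    new_marginals = []
    for b, T, lik in zip(marginals, transitions, obs_likelihoods):
        b = b @ T          # predict: sum_s b(s) T(s, s')
        b = b * lik        # condition on the observation
        b = b / b.sum()    # renormalize the marginal
        new_marginals.append(b)
    return new_marginals

# Toy 4-component machine: each component has 4 conditions
# (good/fair/bad/broken), so the belief state is four length-4 vectors
# instead of one length-256 vector.  Both models are made up.
T = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.0, 0.8, 0.2, 0.0],
              [0.0, 0.0, 0.7, 0.3],
              [0.0, 0.0, 0.0, 1.0]])
lik = np.array([0.7, 0.2, 0.08, 0.02])
marginals = [np.full(4, 0.25) for _ in range(4)]
marginals = decomposed_belief_update(marginals, [T] * 4, [lik] * 4)
print(marginals[0])
```

Note how the representation cost drops from exponential to linear in the number of components, which is exactly where the reduction in network inputs (16 instead of 256) comes from.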
4 Concluding Remarks We have proposed a new approach to belief state reinforcement learning through the use of approximate belief states. Using well-known examples from the POMDP literature, we have compared approximate belief state reinforcement learning with two other methods that use exact belief states.

Problem         3x4      Hallway    Aircraft ID   Machine M.
SPOVA           19.1 s   32.8 min   38.3 min      2.5 h
NN              13.0 s   47.1 min   49.9 min      2.6 h
Decomposed NN   --       3.2 min    4.4 min       4.7 min

Table 1: Run times (in seconds, minutes or hours) for the different algorithms.

Our results demonstrate that, while approximate belief states may not be ideal for tightly coupled problem features, such as the position and orientation of a robot, they are a natural and effective means of addressing some large problems. Even for the medium-sized problems we showed here, approximate belief state reinforcement learning can outperform full belief state reinforcement learning using fewer trials and much less CPU time. For many problems, exact belief state methods will simply be impractical and approximate belief states will provide a tractable alternative. Acknowledgements This work was supported by the ARO under the MURI program "Integrated Approach to Intelligent Systems," by ONR contract N66001-97-C-8554 under DARPA's HPKB program, and by the generosity of the Powell Foundation and the Sloan Foundation. References [1] K. J. Astrom. Optimal control of Markov decision processes with incomplete state estimation. J. Math. Anal. Applic., 10:174-205, 1965. [2] R. E. Bellman. Dynamic Programming. Princeton University Press, 1957. [3] C. Boutilier, T. Dean, and S. Hanks. Decision theoretic planning: Structural assumptions and computational leverage. Journal of Artificial Intelligence Research, 1999. [4] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Proc. UAI, 1998. [5] A. Cassandra.
Exact and Approximate Algorithms for Partially Observable Markov Decision Problems. PhD thesis, Computer Science Dept., Brown Univ., 1998. [6] M. Littman. Algorithms for Sequential Decision Making. PhD thesis, Computer Science Dept., Brown Univ., 1996. [7] M. Littman, A. Cassandra, and L. P. Kaelbling. Learning policies for partially observable environments: Scaling up. In Proc. ICML, pages 362-370, 1996. [8] J. Loch and S. Singh. Using eligibility traces to find the best memoryless policy in partially observable Markov decision processes. In Proc. ICML. Morgan Kaufmann, 1998. [9] Andrew R. McCallum. Overcoming incomplete perception with utile distinction memory. In Proc. ICML, pages 190-196, 1993. [10] Ronald Parr and Stuart Russell. Approximating optimal policies for partially observable stochastic domains. In Proc. IJCAI, 1995. [11] R. D. Smallwood and E. J. Sondik. The optimal control of partially observable Markov processes over a finite horizon. Operations Research, 21:1071-1088, 1973. [12] M. Wiering and J. Schmidhuber. HQ-learning: Discovering Markovian subgoals for non-Markovian reinforcement learning. Technical report, Istituto Dalle Molle di Studi sull'Intelligenza Artificiale, 1996.
|
1999
|
9
|
1,742
|
Boosting Algorithms as Gradient Descent Llew Mason Research School of Information Sciences and Engineering Australian National University Canberra, ACT, 0200, Australia lmason@syseng.anu.edu.au Peter Bartlett Research School of Information Sciences and Engineering Australian National University Canberra, ACT, 0200, Australia Peter.Bartlett@anu.edu.au Jonathan Baxter Research School of Information Sciences and Engineering Australian National University Canberra, ACT, 0200, Australia Jonathan.Baxter@anu.edu.au Marcus Frean Department of Computer Science and Electrical Engineering The University of Queensland Brisbane, QLD, 4072, Australia marcusf@elec.uq.edu.au Abstract We provide an abstract characterization of boosting algorithms as gradient descent on cost-functionals in an inner-product function space. We prove convergence of these functional-gradient-descent algorithms under quite weak conditions. Following previous theoretical results bounding the generalization performance of convex combinations of classifiers in terms of general cost functions of the margin, we present a new algorithm (DOOM II) for performing a gradient descent optimization of such cost functions. Experiments on several data sets from the UC Irvine repository demonstrate that DOOM II generally outperforms AdaBoost, especially in high noise situations, and that the overfitting behaviour of AdaBoost is predicted by our cost functions. 1 Introduction There has been considerable interest recently in voting methods for pattern classification, which predict the label of a particular example using a weighted vote over a set of base classifiers [10, 2, 6, 9, 16, 5, 3, 19, 12, 17, 7, 11, 8]. Recent theoretical results suggest that the effectiveness of these algorithms is due to their tendency to produce large margin classifiers [1, 18]. Loosely speaking, if a combination of classifiers correctly classifies most of the training data with a large margin, then its error probability is small.
In [14] we gave improved upper bounds on the misclassification probability of a combined classifier in terms of the average over the training data of a certain cost function of the margins. That paper also described DOOM, an algorithm for directly minimizing the margin cost function by adjusting the weights associated with each base classifier (the base classifiers are supplied to DOOM). DOOM exhibits performance improvements over AdaBoost, even when using the same base hypotheses, which provides additional empirical evidence that these margin cost functions are appropriate quantities to optimize. In this paper, we present a general class of algorithms (called AnyBoost) which are gradient descent algorithms for choosing linear combinations of elements of an inner product function space so as to minimize some cost functional. The normal operation of a weak learner is shown to be equivalent to maximizing a certain inner product. We prove convergence of AnyBoost under weak conditions. In Section 3, we show that this general class of algorithms includes as special cases nearly all existing voting methods. In Section 5, we present experimental results for a special case of AnyBoost that minimizes a theoretically-motivated margin cost functional. The experiments show that the new algorithm typically outperforms AdaBoost, and that this is especially true with label noise. In addition, the theoretically-motivated cost functions provide good estimates of the error of AdaBoost, in the sense that they can be used to predict its overfitting behaviour. 2 AnyBoost Let (x, y) denote examples from X x Y, where X is the space of measurements (typically X ⊆ ℝ^N) and Y is the space of labels (Y is usually a discrete set or some subset of ℝ). Let F denote some class of functions (the base hypotheses) mapping X → Y, and lin(F) denote the set of all linear combinations of functions in F.
Let ⟨·,·⟩ be an inner product on lin(F), and C: lin(F) → ℝ a cost functional on lin(F). Our aim is to find a function F ∈ lin(F) minimizing C(F). We will proceed iteratively via a gradient descent procedure. Suppose we have some F ∈ lin(F) and we wish to find a new f ∈ F to add to F so that the cost C(F + εf) decreases, for some small value of ε. Viewed in function space terms, we are asking for the "direction" f such that C(F + εf) most rapidly decreases. The desired direction is simply the negative of the functional derivative of C at F, −∇C(F), where: ∇C(F)(x) := ∂C(F + α1_x)/∂α |_{α=0}, (1) where 1_x is the indicator function of x. Since we are restricted to choosing our new function f from F, in general it will not be possible to choose f = −∇C(F), so instead we search for an f with greatest inner product with −∇C(F). That is, we should choose f to maximize −⟨∇C(F), f⟩. This can be motivated by observing that, to first order in ε, C(F + εf) = C(F) + ε⟨∇C(F), f⟩ and hence the greatest reduction in cost will occur for the f maximizing −⟨∇C(F), f⟩. For reasons that will become obvious later, an algorithm that chooses f attempting to maximize −⟨∇C(F), f⟩ will be described as a weak learner. The preceding discussion motivates Algorithm 1 (AnyBoost), an iterative algorithm for finding linear combinations F of base hypotheses in F that minimize the cost functional C(F). Note that we have allowed the base hypotheses to take values in an arbitrary set Y, we have not restricted the form of the cost or the inner product, and we have not specified what the step-sizes should be. Appropriate choices for these things will be made when we apply the algorithm to more concrete situations. Note also that the algorithm terminates when −⟨∇C(F_t), f_{t+1}⟩ ≤ 0, i.e. when the weak learner L returns a base hypothesis f_{t+1} which no longer points in the downhill direction of the cost function C(F).
Thus, the algorithm terminates when, to first order, a step in function space in the direction of the base hypothesis returned by L would increase the cost.

Algorithm 1: AnyBoost
Require:
- An inner product space (X, ⟨·,·⟩) containing functions mapping from X to some set Y.
- A class of base classifiers F ⊆ X.
- A differentiable cost functional C: lin(F) → ℝ.
- A weak learner L(F) that accepts F ∈ lin(F) and returns f ∈ F with a large value of −⟨∇C(F), f⟩.
Let F_0(x) := 0.
for t := 0 to T do
  Let f_{t+1} := L(F_t).
  if −⟨∇C(F_t), f_{t+1}⟩ ≤ 0 then return F_t. end if
  Choose w_{t+1}.
  Let F_{t+1} := F_t + w_{t+1} f_{t+1}.
end for
return F_{T+1}.

3 A gradient descent view of voting methods We now restrict our attention to base hypotheses f ∈ F mapping to Y = {±1}, and the inner product ⟨F, G⟩ := (1/m) Σ_{i=1}^m F(x_i)G(x_i) (2) for all F, G ∈ lin(F), where S = {(x_1, y_1), ..., (x_m, y_m)} is a set of training examples generated according to some unknown distribution D on X x Y. Our aim now is to find F ∈ lin(F) such that Pr_{(x,y)~D}(sgn(F(x)) ≠ y) is minimal, where sgn(F(x)) = −1 if F(x) < 0 and sgn(F(x)) = 1 otherwise. In other words, sgn F should minimize the misclassification probability. The margin of F: X → ℝ on example (x, y) is defined as yF(x). Consider margin cost-functionals defined by C(F) := (1/m) Σ_{i=1}^m c(y_i F(x_i)), where c: ℝ → ℝ is any differentiable real-valued function of the margin. With these definitions, a quick calculation shows: −⟨∇C(F), f⟩ = −(1/m²) Σ_{i=1}^m y_i f(x_i) c'(y_i F(x_i)). Since positive margins correspond to examples correctly labelled by sgn F and negative margins to incorrectly labelled examples, any sensible cost function of the margin will be monotonically decreasing.

Table 1: Existing voting methods viewed as AnyBoost on margin cost functions.

Algorithm             Cost function         Step size
AdaBoost [9]          e^{-yF(x)}            Line search
ARC-X4 [2]            (1 - yF(x))^5         1/t
ConfidenceBoost [19]  e^{-yF(x)}            Line search
LogitBoost [12]       ln(1 + e^{-yF(x)})    Newton-Raphson
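Algorithm 1 above translates almost line-for-line into code. The sketch below is ours, not the authors': the weak learner is a stand-in that searches a small finite pool of threshold stumps, the step size is fixed rather than chosen by line search, and the cost is the exponential margin cost from Table 1.

```python
import numpy as np

def anyboost(X, y, hypotheses, grad_c, rounds=20, step=0.5):
    """AnyBoost sketch: functional gradient descent on a margin cost.

    hypotheses : finite pool of candidate f: X -> {-1, +1}, a stand-in
                 for the abstract weak learner L of Algorithm 1
    grad_c     : derivative c'(z) of the margin cost c
    Uses the sample inner product <F, G> = (1/m) sum_i F(x_i) G(x_i).
    """
    m = len(y)
    F = np.zeros(m)                      # F_0 := 0 on the sample
    weights, chosen = [], []
    for _ in range(rounds):
        # -grad C(F) on the sample is proportional to -y_i c'(y_i F(x_i))
        neg_grad = -y * grad_c(y * F) / m
        # Weak learner: pick f with the largest value of -<grad C(F), f>
        scores = [np.dot(neg_grad, h(X)) for h in hypotheses]
        best = int(np.argmax(scores))
        if scores[best] <= 0:            # termination test of Algorithm 1
            break
        F = F + step * hypotheses[best](X)   # F_{t+1} := F_t + w f_{t+1}
        weights.append(step)
        chosen.append(best)
    return F, weights, chosen

# Toy 1-D data and threshold stumps (illustrative only)
X = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])
stumps = [lambda X, t=t: np.where(X > t, 1.0, -1.0)
          for t in (-1.5, -0.75, 0.0, 0.75, 1.5)]
# Exponential cost c(z) = e^{-z}, so c'(z) = -e^{-z} (AdaBoost's cost)
F, w, idx = anyboost(X, y, stumps, grad_c=lambda z: -np.exp(-z))
print(np.sign(F))
```

With the exponential cost and a line search for the step the loop specializes to AdaBoost, as Table 1 indicates; swapping in a different c gives the other voting methods in the table.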
Hence −c'(y_i F(x_i)) will always be positive. Dividing through by −Σ_{i=1}^m c'(y_i F(x_i)), we see that finding an f maximizing −⟨∇C(F), f⟩ is equivalent to finding an f minimizing the weighted error Σ_{i: f(x_i) ≠ y_i} D(i), where D(i) := c'(y_i F(x_i)) / Σ_{j=1}^m c'(y_j F(x_j)) for i = 1, ..., m. Many of the most successful voting methods are, for the appropriate choice of margin cost function c and step-size, specific cases of the AnyBoost algorithm (see Table 1). A more detailed analysis can be found in the full version of this paper [15]. 4 Convergence of AnyBoost In this section we provide convergence results for the AnyBoost algorithm, under quite weak conditions on the cost functional C. The prescriptions given for the step-sizes w_t in these results are for convergence guarantees only: in practice they will almost always be smaller than necessary, hence fixed small steps or some form of line search should be used. The following theorem (proof omitted, see [15]) supplies a specific step-size for AnyBoost and characterizes the limiting behaviour with this step-size. Theorem 1. Let C: lin(F) → ℝ be any lower bounded, Lipschitz differentiable cost functional (that is, there exists L > 0 such that ||∇C(F) − ∇C(F')|| ≤ L||F − F'|| for all F, F' ∈ lin(F)). Let F_0, F_1, ... be the sequence of combined hypotheses generated by the AnyBoost algorithm, using step-sizes w_{t+1} := −⟨∇C(F_t), f_{t+1}⟩ / (L ||f_{t+1}||²). (3) Then AnyBoost either halts on round T with −⟨∇C(F_T), f_{T+1}⟩ ≤ 0, or C(F_t) converges to some finite value C*, in which case lim_{t→∞} ⟨∇C(F_t), f_{t+1}⟩ = 0. The next theorem (proof omitted, see [15]) shows that if the weak learner can always find the best weak hypothesis f_t ∈ F on each round of AnyBoost, and if the cost functional C is convex, then any accumulation point F of the sequence (F_t) generated by AnyBoost with the step sizes (3) is a global minimum of the cost.
For ease of exposition, we have assumed that rather than terminating when −⟨∇C(F_T), f_{T+1}⟩ ≤ 0, AnyBoost simply continues to return F_T for all subsequent time steps t. Theorem 2. Let C: lin(F) → ℝ be a convex cost functional with the properties in Theorem 1, and let (F_t) be the sequence of combined hypotheses generated by the AnyBoost algorithm with step sizes given by (3). Assume that the weak hypothesis class F is negation closed (f ∈ F implies −f ∈ F) and that on each round the AnyBoost algorithm finds a function f_{t+1} maximizing −⟨∇C(F_t), f_{t+1}⟩. Then any accumulation point F of the sequence (F_t) satisfies sup_{f∈F} −⟨∇C(F), f⟩ = 0, and C(F) = inf_{G∈lin(F)} C(G). 5 Experiments AdaBoost had been perceived to be resistant to overfitting despite the fact that it can produce combinations involving very large numbers of classifiers. However, recent studies have shown that this is not the case, even for base classifiers as simple as decision stumps [13, 5, 17]. This overfitting can be attributed to the use of exponential margin cost functions (recall Table 1). The results in [14] showed that overfitting may be avoided by using margin cost functionals of a form qualitatively similar to C(F) = (1/m) Σ_{i=1}^m (1 − tanh(λ y_i F(x_i))), (4) where λ is an adjustable parameter controlling the steepness of the margin cost function c(z) = 1 − tanh(λz). For the theoretical analysis of [14] to apply, F must be a convex combination of base hypotheses, rather than a general linear combination. Henceforth (4) will be referred to as the normalized sigmoid cost functional. AnyBoost with (4) as the cost functional and (2) as the inner product will be referred to as DOOM II. In our implementation of DOOM II we use a fixed small step-size ε (for all of the experiments ε = 0.05). For all details of the algorithm the reader is referred to the full version of this paper [15].
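The normalized sigmoid cost (4) and its margin derivative are easy to write down; the λ and toy margins below are illustrative values of ours, not ones from the experiments.

```python
import numpy as np

def normalized_sigmoid_cost(margins, lam=1.0):
    """C(F) = (1/m) sum_i (1 - tanh(lambda * y_i F(x_i))), eq. (4)."""
    return np.mean(1.0 - np.tanh(lam * margins))

def normalized_sigmoid_grad(margins, lam=1.0):
    """c'(z) = -lambda / cosh^2(lambda z): it flattens out for large |z|,
    so examples with large negative margin contribute almost nothing."""
    return -lam / np.cosh(lam * margins) ** 2

m = np.array([-5.0, -0.5, 0.5, 5.0])   # toy margins y_i F(x_i)
print(normalized_sigmoid_grad(m))
```

The contrast with the exponential cost is visible in the derivative: |c'| explodes as the margin goes to minus infinity for e^{-z}, while for 1 - tanh(λz) it vanishes, which is exactly why DOOM II "gives up" on badly misclassified (often noisy) examples instead of chasing them.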
We compared the performance of DOOM II and AdaBoost on a selection of nine data sets taken from the UCI machine learning repository [4] to which various levels of label noise had been applied. To simplify matters, only binary classification problems were considered. For all of the experiments axis orthogonal hyperplanes (also known as decision stumps) were used as the weak learner. Full details of the experimental setup may be found in [15]. A summary of the experimental results is shown in Figure 1. The improvement in test error exhibited by DOOM II over AdaBoost is shown for each data set and noise level. DOOM II generally outperforms AdaBoost and the improvement is more pronounced in the presence of label noise. The effect of using the normalized sigmoid cost function rather than the exponential cost function is best illustrated by comparing the cumulative margin distributions generated by AdaBoost and DOOM II. Figure 2 shows comparisons for two data sets with 0% and 15% label noise applied. For a given margin, the value on the curve corresponds to the proportion of training examples with margin less than or equal to this value. These curves show that in trying to increase the margins of negative examples AdaBoost is willing to sacrifice the margin of positive examples significantly. In contrast, DOOM II 'gives up' on examples with large negative margin in order to reduce the value of the cost function. Given that AdaBoost does suffer from overfitting and is guaranteed to minimize an exponential cost function of the margins, this cost function certainly does not relate to test error. How does the value of our proposed cost function correlate with AdaBoost's test error? Figure 3 shows the variation in the normalized sigmoid cost function, the exponential cost function and the test error for AdaBoost for two UCI data sets over 10000 rounds. There is a strong correlation between the normalized sigmoid cost and AdaBoost's test error.
In both data sets the minimum of AdaBoost's test error and the minimum of the normalized sigmoid cost very nearly coincide, showing that the sigmoid cost function predicts when AdaBoost will start to overfit.

[Figure 1 plot residue removed; x-axis data sets: sonar, cleve, ionosphere, vote1, credit, breast-cancer, pima-indians, hypo1, splice]
Figure 1: Summary of test error advantage (with standard error bars) of DOOM II over AdaBoost with varying levels of noise on nine UCI data sets.

[Figure 2 plot residue removed; panels: breast-cancer-wisconsin and splice; legends: 0% and 15% noise, AdaBoost and DOOM II]
Figure 2: Margin distributions for AdaBoost and DOOM II with 0% and 15% label noise for the breast-cancer and splice data sets.

References [1] P. L. Bartlett. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Transactions on Information Theory, 44(2):525-536, March 1998. [2] L. Breiman. Bagging predictors. Machine Learning, 24(2):123-140, 1996. [3] L. Breiman. Prediction games and arcing algorithms. Technical Report 504, Department of Statistics, University of California, Berkeley, 1998. [4] C. Blake, E. Keogh, and C. J. Merz. UCI repository of machine learning databases, 1998. http://www.ics.uci.edu/~mlearn/MLRepository.html. [5] T. G. Dietterich. An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization.
Technical report, Computer Science Department, Oregon State University, 1998.

[Figure 3 plot residue removed; panels: labor and vote1; legend: AdaBoost test error, Exponential cost, Normalized sigmoid cost; x-axis: Rounds]
Figure 3: AdaBoost test error, exponential cost and normalized sigmoid cost over 10000 rounds of AdaBoost for the labor and vote1 data sets. Both costs have been scaled in each case for easier comparison with test error.

[6] H. Drucker and C. Cortes. Boosting decision trees. In Advances in Neural Information Processing Systems 8, pages 479-485, 1996. [7] N. Duffy and D. Helmbold. A geometric approach to leveraging weak learners. In Computational Learning Theory: 4th European Conference, 1999. (to appear). [8] Y. Freund. An adaptive version of the boost by majority algorithm. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, 1999. (to appear). [9] Y. Freund and R. E. Schapire. Experiments with a new boosting algorithm. In Machine Learning: Proceedings of the Thirteenth International Conference, pages 148-156, 1996. [10] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, August 1997. [11] J. Friedman. Greedy function approximation: A gradient boosting machine. Technical report, Stanford University, 1999. [12] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of boosting. Technical report, Stanford University, 1998. [13] A. Grove and D. Schuurmans. Boosting in the limit: Maximizing the margin of learned ensembles.
In Proceedings of the Fifteenth National Conference on Artificial Intelligence, pages 692-699, 1998. [14] L. Mason, P. L. Bartlett, and J. Baxter. Improved generalization through explicit optimization of margins. Machine Learning, 1999. (to appear). [15] Llew Mason, Jonathan Baxter, Peter Bartlett, and Marcus Frean. Functional gradient techniques for combining hypotheses. In Alex Smola, Peter Bartlett, Bernhard Schölkopf, and Dale Schuurmans, editors, Large Margin Classifiers. MIT Press, 1999. To appear. [16] J. R. Quinlan. Bagging, boosting, and C4.5. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, pages 725-730, 1996. [17] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Technical Report NC-TR-1998-021, Department of Computer Science, Royal Holloway, University of London, Egham, UK, 1998. [18] R. E. Schapire, Y. Freund, P. L. Bartlett, and W. S. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. Annals of Statistics, 26(5):1651-1686, October 1998. [19] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 80-91, 1998.
|
1999
|
90
|
1,743
|
Local probability propagation for factor analysis Brendan J. Frey Computer Science, University of Waterloo, Waterloo, Ontario, Canada Abstract Ever since Pearl's probability propagation algorithm in graphs with cycles was shown to produce excellent results for error-correcting decoding a few years ago, we have been curious about whether local probability propagation could be used successfully for machine learning. One of the simplest adaptive models is the factor analyzer, which is a two-layer network that models bottom layer sensory inputs as a linear combination of top layer factors plus independent Gaussian sensor noise. We show that local probability propagation in the factor analyzer network usually takes just a few iterations to perform accurate inference, even in networks with 320 sensors and 80 factors. We derive an expression for the algorithm's fixed point and show that this fixed point matches the exact solution in a variety of networks, even when the fixed point is unstable. We also show that this method can be used successfully to perform inference for approximate EM and we give results on an online face recognition task. 1 Factor analysis A simple way to encode input patterns is to suppose that each input can be well-approximated by a linear combination of component vectors, where the amplitudes of the vectors are modulated to match the input. For a given training set, the most appropriate set of component vectors will depend on how we expect the modulation levels to behave and how we measure the distance between the input and its approximation. These effects can be captured by a generative probability model that specifies a distribution p(z) over modulation levels z = (z_1, ..., z_K)ᵀ and a distribution p(x|z) over sensors x = (x_1, ..., x_N)ᵀ given the modulation levels.
Principal component analysis, independent component analysis and factor analysis can be viewed as maximum likelihood learning in a model of this type, where we assume that over the training set, the appropriate modulation levels are independent and the overall distortion is given by the sum of the individual sensor distortions. In factor analysis, the modulation levels are called factors and the distributions have the following form: p(z_k) = N(z_k; 0, 1), p(x_n|z) = N(x_n; Σ_{k=1}^K Λ_{nk} z_k, ψ_n), p(z) = Π_{k=1}^K p(z_k) = N(z; 0, I), p(x|z) = Π_{n=1}^N p(x_n|z) = N(x; Λz, Ψ). (1) The parameters of this model are the factor loading matrix Λ, with elements Λ_{nk}, and the diagonal sensor noise covariance matrix Ψ, with diagonal elements ψ_n. A belief network for the factor analyzer is shown in Fig. 1a. The likelihood is p(x) = ∫ N(z; 0, I) N(x; Λz, Ψ) dz = N(x; 0, ΛΛᵀ + Ψ), (2)

[Figure 1b image residue removed]
Figure 1: (a) A belief network for factor analysis. (b) High-dimensional data (N = 560).

and online factor analysis consists of adapting Λ and Ψ to increase the likelihood of the current input, such as a vector of pixels from an image in Fig. 1b. Probabilistic inference (computing or estimating p(z|x)) is needed to do dimensionality reduction and to fill in the unobserved factors for online EM-type learning. In this paper, we focus on methods that infer independent factors. p(z|x) is Gaussian and it turns out that the posterior means and variances of the factors are E[z|x] = (ΛᵀΨ⁻¹Λ + I)⁻¹ΛᵀΨ⁻¹x, diag(Cov(z|x)) = diag((ΛᵀΨ⁻¹Λ + I)⁻¹). (3) Given Λ and Ψ, computing these values exactly takes O(K²N) computations, mainly because of the time needed to compute ΛᵀΨ⁻¹Λ. Since there are only KN connections in the network, exact inference takes at least O(K) bottom-up/top-down iterations.
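The exact posterior statistics (3) can be transcribed directly; the random Λ, Ψ and input below are illustrative draws of ours, following the generative model (1).

```python
import numpy as np

def fa_posterior(x, Lam, psi):
    """Exact E[z|x] and diag Cov(z|x) for factor analysis, eq. (3).

    Lam : N x K factor loading matrix (Lambda)
    psi : length-N sensor noise variances (diagonal of Psi)
    """
    K = Lam.shape[1]
    Lt_Pinv = Lam.T / psi                 # Lambda^T Psi^{-1}
    M = Lt_Pinv @ Lam + np.eye(K)         # Lambda^T Psi^{-1} Lambda + I
    mean = np.linalg.solve(M, Lt_Pinv @ x)
    var = np.diag(np.linalg.inv(M))
    return mean, var

# Simulate one pattern from the generative model (1)
rng = np.random.default_rng(0)
N, K = 10, 3
Lam = rng.standard_normal((N, K))
psi = rng.exponential(2.0, size=N)
z = rng.standard_normal(K)
x = Lam @ z + np.sqrt(psi) * rng.standard_normal(N)
mean, var = fa_posterior(x, Lam, psi)
print(mean, var)
```

Forming ΛᵀΨ⁻¹Λ is the O(K²N) bottleneck mentioned in the text; in a batch setting this matrix can be computed once and reused, which is exactly the reuse argument made in the following paragraph.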
Of course, if the same network is going to be applied more than K times for inference (e.g., for batch EM), then the matrices in (3) can be computed once and reused. However, this is not directly applicable in online learning and in biological models. One way to circumvent computing the matrices is to keep a separate recognition network, which approximates E[z|x] with Rx (Dayan et al., 1995). The optimal recognition network, R = (ΛᵀΨ⁻¹Λ + I)⁻¹ΛᵀΨ⁻¹, can be approximated by jointly estimating the generative network and the recognition network using online wake-sleep learning (Hinton et al., 1995). 2 Probability propagation in the factor analyzer network Recent results on error-correcting coding show that in some cases Pearl's probability propagation algorithm, which does exact probabilistic inference in graphs that are trees, gives excellent performance even if the network contains so many cycles that its minimal cut set is exponential (Frey and MacKay, 1998; Frey, 1998; MacKay, 1999). In fact, the probability propagation algorithm for decoding low-density parity-check codes (MacKay, 1999) and turbocodes (Berrou and Glavieux, 1996) is widely considered to be a major breakthrough in the information theory community. When the network contains cycles, the local computations give rise to an iterative algorithm, which hopefully converges to a good answer. Little is known about the convergence properties of the algorithm. Networks containing a single cycle have been successfully analyzed by Weiss (1999) and Smyth et al. (1997), but results for networks containing many cycles are much less revealing. The probability messages produced by probability propagation in the factor analyzer network of Fig. 1a are Gaussians. Each iteration of propagation consists of passing a mean and a variance along each edge in a bottom-up pass, followed by passing a mean and a variance along each edge in a top-down pass. At any instant, the
bottom-up means and variances can be combined to form estimates of the means and variances of the modulation levels given the input. Initially, the variance and mean sent from the kth top layer unit to the nth sensor are set to v_{kn}^{(0)} = 1 and η_{kn}^{(0)} = 0. The bottom-up pass begins by computing a noise level and an error signal at each sensor using the top-down variances and means from the previous iteration: s_n^{(i)} = ψ_n + Σ_{k=1}^K Λ_{nk}² v_{kn}^{(i-1)}, e_n^{(i)} = x_n − Σ_{k=1}^K Λ_{nk} η_{kn}^{(i-1)}. (4) These are used to compute bottom-up variances and means as follows: φ_{nk}^{(i)} = s_n^{(i)}/Λ_{nk}² − v_{kn}^{(i-1)}, μ_{nk}^{(i)} = e_n^{(i)}/Λ_{nk} + η_{kn}^{(i-1)}. (5) The bottom-up variances and means are then combined to form the current estimates of the modulation variances and means: v_k^{(i)} = 1/(1 + Σ_{n=1}^N 1/φ_{nk}^{(i)}), ẑ_k^{(i)} = v_k^{(i)} Σ_{n=1}^N μ_{nk}^{(i)}/φ_{nk}^{(i)}. (6) The top-down pass proceeds by computing top-down variances and means as follows: v_{kn}^{(i)} = 1/(1/v_k^{(i)} − 1/φ_{nk}^{(i)}), η_{kn}^{(i)} = v_{kn}^{(i)}(ẑ_k^{(i)}/v_k^{(i)} − μ_{nk}^{(i)}/φ_{nk}^{(i)}). (7) Notice that the variance updates are independent of the mean updates, whereas the mean updates depend on the variance updates. 2.1 Performance of local probability propagation. We created a total of 200,000 factor analysis networks with 20 different sizes ranging from K = 5, N = 10 to K = 80, N = 320, and for each size of network we measured the inference error as a function of the number of iterations of propagation. Each of the 10,000 networks of a given size was produced by drawing the Λ_{nk}s from standard normal distributions and then drawing each sensor variance ψ_n from an exponential distribution with mean Σ_{k=1}^K Λ_{nk}². (A similar procedure was used by Neal and Dayan (1997).) For each random network, a pattern was simulated from the network and probability propagation was applied using the simulated pattern as input.
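Equations (4)-(7) translate into a short message-passing loop. This is our sketch, not the author's code; the network size and random draws below are illustrative and follow the simulation procedure of Section 2.1.

```python
import numpy as np

def fa_propagation(x, Lam, psi, iters=20):
    """Local probability propagation for a factor analyzer, eqs. (4)-(7).

    v, eta : top-down variances/means, arrays of shape (K, N), entry [k, n]
    Returns the estimates z_hat, v_hat of eq. (6) after `iters` sweeps.
    """
    N, K = Lam.shape
    v = np.ones((K, N))        # v_{kn}^{(0)} = 1
    eta = np.zeros((K, N))     # eta_{kn}^{(0)} = 0
    for _ in range(iters):
        # Bottom-up: sensor noise levels and error signals, eq. (4)
        s = psi + np.einsum('nk,kn->n', Lam**2, v)
        e = x - np.einsum('nk,kn->n', Lam, eta)
        # Bottom-up variances and means, eq. (5)
        phi = s[None, :] / (Lam.T**2) - v
        mu = e[None, :] / Lam.T + eta
        # Current factor estimates, eq. (6)
        v_hat = 1.0 / (1.0 + (1.0 / phi).sum(axis=1))
        z_hat = v_hat * (mu / phi).sum(axis=1)
        # Top-down pass, eq. (7)
        v = 1.0 / (1.0 / v_hat[:, None] - 1.0 / phi)
        eta = v * (z_hat[:, None] / v_hat[:, None] - mu / phi)
    return z_hat, v_hat

# Random network drawn as in Section 2.1 (sizes are illustrative)
rng = np.random.default_rng(1)
N, K = 16, 4
Lam = rng.standard_normal((N, K))
psi = rng.exponential((Lam**2).sum(axis=1))
x = Lam @ rng.standard_normal(K) + np.sqrt(psi) * rng.standard_normal(N)
z_hat, v_hat = fa_propagation(x, Lam, psi)
print(z_hat, v_hat)
```

Each sweep costs only O(KN), i.e. one pass over the network's edges, compared with the O(K²N) cost of the exact computation (3); in the paper's experiments a handful of sweeps typically sufficed for accurate inference.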
We measured the error between the estimate $\hat z^{(i)}$ and the correct value E[z|x] by computing the difference between their coding costs under the exact posterior distribution and then normalizing by K to get an average number of nats per top layer unit. Fig. 2a shows the inference error on a logarithmic scale versus the number of iterations (maximum of 20) in the 20 different network sizes. In all cases, the median error is reduced below .01 nats within 6 iterations. The rate of convergence of the error improves for larger N, as indicated by a general trend for the error curves to drop when N is increased. In contrast, the rate of convergence of the error appears to worsen for larger K, as shown by a general slight trend for the error curves to rise when K is increased. For K ~ N/8, 0.1% of the networks actually diverge. To better understand the divergent cases, we studied the means and variances for all of the divergent networks. In all cases, the variances converge within a few iterations whereas the means oscillate and diverge. For K = 5, N = 10, 54 of the 10,000 networks diverged and 5 of these are shown in Fig. 2b. This observation suggests that in general the dynamics are determined by the dynamics of the mean updates.

2.2 Fixed points and a condition for global convergence

When the variance updates converge, the dynamics of probability propagation in factor analysis networks become linear. This allows us to derive the fixed point of propagation in closed form and write an eigenvalue condition for global convergence.

Figure 2: (a) Performance of probability propagation (panels for K = 5, 10, 20, 40, 80). Median inference error (bold curve) on a logarithmic scale as a function of the number of iterations for different sizes of network parameterized by K and N.
The two curves adjacent to the bold curve show the range within which 98% of the errors lie. 99.9% of the errors were below the fourth, topmost curve. (b) The error, bottom-up variances and top-down means as a function of the number of iterations (maximum of 20) for 5 divergent networks of size K = 5, N = 10.

To analyze the system of mean updates, we define the following length-KN vectors of means and of the input:

$$\bar\eta^{(i)} = (\eta_{11}^{(i)}, \eta_{21}^{(i)}, \ldots, \eta_{K1}^{(i)}, \eta_{12}^{(i)}, \ldots, \eta_{KN}^{(i)})^T, \qquad \bar\mu^{(i)} = (\mu_{11}^{(i)}, \mu_{12}^{(i)}, \ldots, \mu_{1K}^{(i)}, \mu_{21}^{(i)}, \ldots, \mu_{NK}^{(i)})^T,$$
$$\bar x = (x_1, x_1, \ldots, x_1, x_2, \ldots, x_2, \ldots, x_N, \ldots, x_N)^T,$$

where each $x_n$ is repeated K times in the last vector. The network parameters are represented using $KN \times KN$ diagonal matrices, $\tilde\Lambda$ and $\tilde\Psi$. The diagonal of $\tilde\Lambda$ is $\lambda_{11}, \ldots, \lambda_{1K}, \lambda_{21}, \ldots, \lambda_{NK}$, and the diagonal of $\tilde\Psi$ is $\psi_1 I, \psi_2 I, \ldots, \psi_N I$, where I is the $K \times K$ identity matrix. The converged bottom-up variances are represented using a diagonal matrix $\tilde\Phi$ with diagonal $\phi_{11}, \ldots, \phi_{1K}, \phi_{21}, \ldots, \phi_{NK}$. The summation operations in the propagation formulas are represented by a $KN \times KN$ matrix $\Sigma_z$ that sums over means sent down from the top layer and a $KN \times KN$ matrix $\Sigma_x$ that sums over means sent up from the sensory input:

$$\Sigma_z = \begin{pmatrix} \mathbf{1} & & \\ & \ddots & \\ & & \mathbf{1} \end{pmatrix}, \qquad \Sigma_x = \begin{pmatrix} I & \cdots & I \\ \vdots & \ddots & \vdots \\ I & \cdots & I \end{pmatrix}. \quad (8)$$

These are $N \times N$ matrices of $K \times K$ blocks, where $\mathbf{1}$ is the $K \times K$ block of ones and I is the $K \times K$ identity matrix. Using the above representations, the bottom-up pass is given by

$$\bar\mu^{(i)} = \tilde\Lambda^{-1}\bar x - \tilde\Lambda^{-1}(\Sigma_z - I)\tilde\Lambda\,\bar\eta^{(i-1)}, \quad (9)$$

and the top-down pass is given by

$$\bar\eta^{(i)} = \big(I + \mathrm{diag}(\Sigma_x\tilde\Phi^{-1}\Sigma_x) - \tilde\Phi^{-1}\big)^{-1}(\Sigma_x - I)\tilde\Phi^{-1}\bar\mu^{(i)}. \quad (10)$$

Substituting (10) into (9), we get the linear update for $\bar\mu$:

$$\bar\mu^{(i)} = \tilde\Lambda^{-1}\bar x - \tilde\Lambda^{-1}(\Sigma_z - I)\tilde\Lambda\big(I + \mathrm{diag}(\Sigma_x\tilde\Phi^{-1}\Sigma_x) - \tilde\Phi^{-1}\big)^{-1}(\Sigma_x - I)\tilde\Phi^{-1}\bar\mu^{(i-1)}. \quad (11)$$

[Figure 3 panels appear here; the annotation above each panel is 1.24, 1.07, 1.49, 1.13, 1.03, 1.02, 1.09, 1.01, 1.11, 1.06 respectively.]

Figure 3: The error (log scale) versus number of iterations (log scale, max. of 1000) in 10 of the divergent networks with K = 5,
N = 10. The means were initialized to the fixed point solutions, and machine round-off errors cause divergence from the fixed points, whose errors are shown by horizontal lines.

The fixed point of this dynamic system, when it exists, is

$$\bar\mu^{*} = \tilde\Phi\Big(\tilde\Lambda\tilde\Phi + (\Sigma_z - I)\tilde\Lambda\big(I + \mathrm{diag}(\Sigma_x\tilde\Phi^{-1}\Sigma_x) - \tilde\Phi^{-1}\big)^{-1}(\Sigma_x - I)\Big)^{-1}\bar x. \quad (12)$$

A fixed point exists if the determinant of the expression in large braces in (12) is nonzero. We have found a simplified expression for this determinant in terms of the determinants of smaller, $K \times K$ matrices. Reinterpreting the dynamics in (11) as dynamics for $\tilde\Lambda\bar\mu^{(i)}$, the stability of a fixed point is determined by the largest eigenvalue of the update matrix, $(\Sigma_z - I)\tilde\Lambda\big(I + \mathrm{diag}(\Sigma_x\tilde\Phi^{-1}\Sigma_x) - \tilde\Phi^{-1}\big)^{-1}(\Sigma_x - I)\tilde\Phi^{-1}\tilde\Lambda^{-1}$. If the modulus of the largest eigenvalue is less than 1, the fixed point is stable. Since the system is linear, if a stable fixed point exists, the system will be globally convergent to this point. Of the 200,000 networks we explored, about 99.9% of the networks converged. For 10 of the divergent networks with K = 5, N = 10, we used 1000 iterations of probability propagation to compute the steady state variances. Then, we computed the modulus of the largest eigenvalue of the system and we computed the fixed point. After initializing the bottom-up means to the fixed point values, we performed 1000 iterations to see if numerical errors due to machine precision would cause divergence from the fixed point. Fig. 3 shows the error versus number of iterations (on logarithmic scales) for each network, the error of the fixed point, and the modulus of the largest eigenvalue. In some cases, the network diverges from the fixed point and reaches a dynamic equilibrium that has a lower average error than the fixed point.

3 Online factor analysis

To perform maximum likelihood factor analysis in an online fashion, each parameter should be modified to slightly increase the log-probability of the current sensory input, $\log p(x)$.
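Before turning to learning, the fixed-point analysis of Sec. 2.2 can be checked numerically. The sketch below (NumPy; the sizes and seed are arbitrary choices) first converges the mean-independent variance updates, assembles the block matrices $\tilde\Lambda$, $\tilde\Phi$, $\Sigma_z$, $\Sigma_x$, verifies that $\bar\mu^*$ from (12) satisfies the linear update (11), and reports the modulus of the largest eigenvalue of the update matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 2, 8                                  # arbitrary small example sizes
Lam = rng.standard_normal((N, K))
psi = rng.exponential((Lam**2).sum(1))
x = Lam @ rng.standard_normal(K) + np.sqrt(psi) * rng.standard_normal(N)

# converge the variance updates of Eqs. (4)-(7); they do not involve the means
v_kn = np.ones((N, K))
for _ in range(200):
    s = psi + (Lam**2 * v_kn).sum(1)
    phi = s[:, None] / Lam**2 - v_kn
    v_k = 1.0 / (1.0 + (1.0 / phi).sum(0))
    v_kn = 1.0 / (1.0 / v_k - 1.0 / phi)

# KN x KN block matrices, n-major / k-minor ordering as in the text
I = np.eye(N * K)
LamT = np.diag(Lam.ravel())                  # \tilde{Lambda}
PhiT = np.diag(phi.ravel())                  # \tilde{Phi} (converged)
PhiT_inv = np.diag(1.0 / phi.ravel())
Sz = np.kron(np.eye(N), np.ones((K, K)))     # sums within each sensor block
Sx = np.kron(np.ones((N, N)), np.eye(K))     # sums across sensors per factor
xbar = np.repeat(x, K)                       # each x_n repeated K times

B = I + np.diag(np.diag(Sx @ PhiT_inv @ Sx)) - PhiT_inv
M = np.linalg.inv(LamT) @ (Sz - I) @ LamT @ np.linalg.inv(B) @ (Sx - I) @ PhiT_inv

# Eq. (12): the fixed point of the linear update (11)
mu_star = PhiT @ np.linalg.solve(
    LamT @ PhiT + (Sz - I) @ LamT @ np.linalg.inv(B) @ (Sx - I), xbar)

# residual of mu* in the update mu <- LamT^{-1} xbar - M mu
resid = mu_star - (np.linalg.solve(LamT, xbar) - M @ mu_star)
eigmax = np.abs(np.linalg.eigvals(
    (Sz - I) @ LamT @ np.linalg.inv(B) @ (Sx - I) @ PhiT_inv @ np.linalg.inv(LamT))).max()
print(np.abs(resid).max(), eigmax)
```

The matrix in the eigenvalue computation is similar (via conjugation by $\tilde\Lambda$) to M, so either can be used to judge stability.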
However, since the factors are hidden, they must be probabilistically "filled in" using inference before an incremental learning step is performed. If the estimated mean and variance of the kth factor are $\hat z_k$ and $v_k$, then it turns out (e.g., Neal and Dayan, 1997) that the parameters can be updated as follows:

$$\lambda_{nk} \leftarrow \lambda_{nk} + \eta\Big[\hat z_k\Big(x_n - \textstyle\sum_{j=1}^{K}\lambda_{nj}\hat z_j\Big) - v_k\lambda_{nk}\Big]\Big/\psi_n, \qquad \psi_n \leftarrow (1-\eta)\psi_n + \eta\Big[\Big(x_n - \textstyle\sum_{j=1}^{K}\lambda_{nj}\hat z_j\Big)^2 + \textstyle\sum_{j=1}^{K} v_j\lambda_{nj}^2\Big], \quad (13)$$

where $\eta$ is a learning rate. Online learning consists of performing some number of iterations of probability propagation for the current input (e.g., 4 iterations) and then modifying the parameters before processing the next input.

3.1 Results on simulated data

We produced 95 training sets of 200 cases each, with input sizes ranging from 20 sensors to 320 sensors. For each of 19 sizes of factor analyzer, we randomly selected 5 sets of parameters as described above and generated a training set. The factor analyzer sizes were $K \in \{5, 10, 20, 40, 80\}$, $N \in \{20, 40, 80, 160, 320\}$, with N > K. For each factor analyzer and simulated data set, we estimated the optimal log-probability of the data using 100 iterations of EM. For learning, the size of the model to be trained was set equal to the size of the model that was used to generate the data. To avoid the issue of how to schedule learning rates, we searched for achievable learning curves, regardless of whether or not a simple schedule for the learning rate exists.

Figure 4: (a) Achievable errors after the same number of epochs of learning using 4 iterations versus 1 iteration. The horizontal axis gives the log-probability error (log scale) for learning with 1 iteration and the vertical axis gives the error after the same number of epochs for learning with 4 iterations. (b) The achievable errors for learning using 4 iterations of propagation versus wake-sleep learning using 4 iterations.
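The online procedure of Eq. (13) can be sketched as follows (NumPy; the sizes, learning rate, and initialisation are arbitrary illustrative choices, and the inner inference loop is the propagation of Eqs. (4)–(7) run for 4 iterations as in the text).

```python
import numpy as np

rng = np.random.default_rng(2)
K, N, T, lr = 3, 15, 200, 0.01             # example sizes, steps, learning rate
true_Lam = rng.standard_normal((N, K))     # "world" model generating the data
true_psi = rng.exponential((true_Lam**2).sum(1))

Lam = 0.1 * rng.standard_normal((N, K))    # randomly initialised model
psi = np.ones(N)

def infer(x, Lam, psi, iters=4):
    """A few iterations of the propagation updates (4)-(7);
    returns the estimated factor means z_hat and variances v_k."""
    v_kn = np.ones((N, K)); eta = np.zeros((N, K))
    for _ in range(iters):
        s = psi + (Lam**2 * v_kn).sum(1)
        e = x - (Lam * eta).sum(1)
        phi = s[:, None] / Lam**2 - v_kn
        mu = e[:, None] / Lam + eta
        v_k = 1.0 / (1.0 + (1.0 / phi).sum(0))
        z_hat = v_k * (mu / phi).sum(0)
        v_kn = 1.0 / (1.0 / v_k - 1.0 / phi)
        eta = v_kn * (z_hat / v_k - mu / phi)
    return z_hat, v_k

for _ in range(T):
    x = true_Lam @ rng.standard_normal(K) + np.sqrt(true_psi) * rng.standard_normal(N)
    z, v = infer(x, Lam, psi)
    r = x - Lam @ z                        # residual using filled-in factors
    decay = (v * Lam**2).sum(1)            # sum_j v_j lambda_nj^2, old loadings
    Lam += lr * (np.outer(r, z) - v * Lam) / psi[:, None]      # Eq. (13), lambda
    psi = (1 - lr) * psi + lr * (r**2 + decay)                 # Eq. (13), psi

print(psi.min())
```

Because the $\psi_n$ update is a convex combination of the old value and a non-negative quantity, the sensor variances stay positive throughout learning.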
So, for a given method and randomly initialized parameters, we performed one separate epoch of learning using each of the learning rates 1, 0.5, ..., 0.5^20 and picked the learning rate that most improved the log-probability. Each successive learning rate was determined by comparing the performance using the old learning rate and one 0.75 times smaller. We are mainly interested in comparing the achievable curves for different methods and how the differences scale with K and N. For two methods with the same K and N trained on the same data, we plot the log-probability error (optimal log-probability minus log-probability under the learned model) of one method against the log-probability error of the other method. Fig. 4a shows the achievable errors using 4 iterations versus using 1 iteration. Usually, using 4 iterations produces networks with lower errors than those learned using 1 iteration. The difference is most significant for networks with large K, where in Sec. 2.1 we found that the convergence of the inference error was slower. Fig. 4b shows the achievable errors for learning using 4 iterations of probability propagation versus wake-sleep learning using 4 iterations. Generally, probability propagation achieves much smaller errors than wake-sleep learning, although for small K wake-sleep performs better very close to the optimum log-probability. The most significant difference between the methods occurs for large K, where aside from local optima probability propagation achieves nearly optimal log-probabilities while the log-probabilities for wake-sleep learning are still close to their values at the start of learning.

4 Online face recognition

Fig. 1b shows examples from a set of 30,000 20 x 28 greyscale face images of 18 different people. In contrast to other data sets used to test face recognition methods, these faces include wide variation in expression and pose.
To make classification more difficult, we normalized the images for each person so that each pixel has the same mean and variance. We used probability propagation and a recognition network in a factor analyzer to reduce the dimensionality of the data online from 560 dimensions to 40 dimensions. For probability propagation, we rather arbitrarily chose a learning rate of 0.0001, but for wake-sleep learning we tried learning rates ranging from 0.1 down to 0.0001. A multilayer perceptron with one hidden layer of 160 tanh units and one output layer of 18 softmax units was simultaneously being trained using gradient descent to predict face identity from the mean factors. The learning rate for the multilayer perceptron was set to 0.05 and this value was used for both methods. For each image, a prediction was made before the parameters were modified. Fig. 5 shows online error curves obtained by filtering the losses. The curve for probability propagation is generally below the curves for wake-sleep learning. The figure also shows the error curves for two forms of online nearest neighbors, where only the most recent W cases are used to make a prediction. The form of nearest neighbors that performs the worst has W set so that the storage requirements are the same as for the factor analysis / multilayer perceptron method. The better form of nearest neighbors has W set so that the number of computations is the same as for the factor analysis / multilayer perceptron method.

Figure 5: Online error curves for probability propagation (solid), wake-sleep learning (dashed), nearest neighbors (dot-dashed) and guessing (dotted). The horizontal axis is the number of pattern presentations.

5 Summary

It turns out that iterative probability propagation can be fruitful when used for learning in a graphical model with cycles, even when the model is densely connected.
Although we are more interested in extending this work to more complex models where exact inference takes exponential time, studying iterative probability propagation in the factor analyzer allowed us to compare our results with exact inference and allowed us to derive the fixed point of the algorithm. We are currently applying iterative propagation in multiple cause networks for vision problems.

References

C. Berrou and A. Glavieux 1996. Near optimum error correcting coding and decoding: Turbo-codes. IEEE Trans. on Communications, 44, 1261-1271.
P. Dayan, G. E. Hinton, R. M. Neal and R. S. Zemel 1995. The Helmholtz machine. Neural Computation 7, 889-904.
B. J. Frey and D. J. C. MacKay 1998. A revolution: Belief propagation in graphs with cycles. In M. Jordan, M. Kearns and S. Solla (eds), Advances in Neural Information Processing Systems 10, Denver, 1997.
B. J. Frey 1998. Graphical Models for Machine Learning and Digital Communication. MIT Press, Cambridge, MA. See http://www.cs.utoronto.ca/~frey.
G. E. Hinton, P. Dayan, B. J. Frey and R. M. Neal 1995. The wake-sleep algorithm for unsupervised neural networks. Science 268, 1158-1161.
D. J. C. MacKay 1999. Information Theory, Inference and Learning Algorithms. Book in preparation, currently available at http://wol.ra.phy.cam.ac.uk/mackay.
R. M. Neal and P. Dayan 1997. Factor analysis using delta-rule wake-sleep learning. Neural Computation 9, 1781-1804.
P. Smyth, R. J. McEliece, M. Xu, S. Aji and G. Horn 1997. Probability propagation in graphs with cycles. Presented at the workshop on Inference and Learning in Graphical Models, Vail, Colorado.
Y. Weiss 1998. Correctness of local probability propagation in graphical models. To appear in Neural Computation.
A MCMC approach to Hierarchical Mixture Modelling

Christopher K. I. Williams
Institute for Adaptive and Neural Computation
Division of Informatics, University of Edinburgh
5 Forrest Hill, Edinburgh EH1 2QL, Scotland, UK
ckiw@dai.ed.ac.uk http://anc.ed.ac.uk

Abstract

There are many hierarchical clustering algorithms available, but these lack a firm statistical basis. Here we set up a hierarchical probabilistic mixture model, where data is generated in a hierarchical tree-structured manner. Markov chain Monte Carlo (MCMC) methods are demonstrated which can be used to sample from the posterior distribution over trees containing variable numbers of hidden units.

1 Introduction

Over the past decade or two mixture models have become a popular approach to clustering or competitive learning problems. They have the advantage of having a well-defined objective function and fit in with the general trend of viewing neural network problems in a statistical framework. However, one disadvantage is that they produce a "flat" cluster structure rather than the hierarchical tree structure that is returned by some clustering algorithms such as the agglomerative single-link method (see e.g. [12]). In this paper I formulate a hierarchical mixture model, which retains the advantages of the statistical framework, but also features a tree-structured hierarchy. The basic idea is illustrated in Figure 1(a). At the root of the tree (level 1) we have a single centre (marked with a x). This is the mean of a Gaussian with large variance (represented by the large circle). A random number of centres (in this case 3) are sampled from the level 1 Gaussian, to produce 3 new centres (marked with o's). The variance associated with the level 2 Gaussians is smaller. A number of level 3 units are produced and associated with the level 2 Gaussians. The centre of each level 3 unit (marked with a +) is sampled from its parent Gaussian.
This hierarchical process could be continued indefinitely, but in this example we generate data from the level 3 Gaussians, as shown by the dots in Figure 1(a). A three-level version of this model would be a standard mixture model with a Gaussian prior on where the centres are located. In the four-level model the third level centres are clumped together around the second level means, and it is this that distinguishes the model from a flat mixture model. Another view of the generative process is given in Figure 1(b), where the tree structure denotes which nodes are children of particular parents. Note also that this is a directed acyclic graph, with the arrows denoting dependence of the position of the child on that of the parent. In section 2 we describe the theory of probabilistic hierarchical clustering and give a discussion of related work. Experimental results are described in section 3.

Figure 1: The basic idea of the hierarchical mixture model. (a) x denotes the root of the tree, the second level centres are denoted by o's and the third level centres by +'s. Data is generated from the third level centres by sampling random points from Gaussians whose means are the third level centres. (b) The corresponding tree structure.

2 Theory

We describe in turn (i) the prior over trees, (ii) the calculation of the likelihood given a data vector, (iii) Markov chain Monte Carlo (MCMC) methods for the inference of the tree structure given data and (iv) related work.

2.1 Prior over trees

We describe first the prior over the number of units in each layer, and then the prior on connections between layers. Consider an L-layer hierarchical model. The root node is in level 1, there are $n_2$ nodes in level 2, and so on down to $n_L$ nodes on level L. These n's are collected together in the vector n. We use a Markovian model for P(n), so that $P(\mathbf{n}) = P(n_1)P(n_2|n_1)\cdots P(n_L|n_{L-1})$ with $P(n_1) = \delta(n_1, 1)$.
Currently these are taken to be Poisson distributions offset by 1, so that $P(n_{i+1}|n_i) \sim \mathrm{Po}(\lambda_i n_i) + 1$, where $\lambda_i$ is a parameter associated with level i. The offset is used so that there must always be at least one unit in any layer. Given n, we next consider how the tree is formed. The tree structure describes which node in the ith layer is the parent of each node in the (i+1)th layer, for i = 1, ..., L-1. Each unit has an indicator vector which stores the index of the parent to which it is attached. We collect all these indicator vectors together into a matrix, denoted Z(n). The probability of a node in layer (i+1) connecting to any node in layer i is taken to be $1/n_i$. Thus

$$P(\mathbf{n}, Z(\mathbf{n})) = P(\mathbf{n})P(Z(\mathbf{n})|\mathbf{n}) = P(\mathbf{n}) \prod_{i=1}^{L-1} (1/n_i)^{n_{i+1}}.$$

We now describe the generation of a random tree given n and Z(n). For simplicity we describe the generation of points in 1-d below, although everything can be extended to arbitrary dimension very easily. The mean $\mu_1$ of the level 1 Gaussian is at the origin.¹ The level 2 means $\mu_j$, $j = 1, \ldots, n_2$, are generated from $N(\mu_1, \sigma_1^2)$, where $\sigma_1^2$ is the variance associated with the level 1 node. Similarly, the position of each level 3 node is generated from its level 2 parent as a displacement from the position of the level 2 parent. This displacement is a Gaussian RV with zero mean and variance $\sigma_2^2$. This process continues on down to the visible variables. In order for this model to be useful, we require that $\sigma_1^2 > \sigma_2^2 > \cdots > \sigma_{L-1}^2$, i.e. that the variability introduced at successive levels declines monotonically (cf. scaling of wavelet coefficients).

¹ It is easy to relax this assumption so that $\mu_1$ has a prior Gaussian distribution, or is located at some point other than the origin.

2.2 Calculation of the likelihood

The data that we observe are the positions of the points in the final layer; this is denoted x.
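The generative process of Sec. 2.1 can be sketched as follows (NumPy; the function name and return format are mine, and the default parameter values are those used in the experiments of Sec. 3).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tree(lams=(1.5, 2.0, 3.0), sigmas=(10.0, 1.0, 0.01), dim=2):
    """Draw one tree from the prior: Poisson-offset layer sizes, uniform
    parent assignment, and Gaussian displacements whose variances
    (sigmas) decrease with depth."""
    n = [1]                                          # n_1 = 1 (the root)
    for lam in lams:                                 # P(n_{i+1}|n_i) ~ Po(lam_i n_i) + 1
        n.append(rng.poisson(lam * n[-1]) + 1)
    parents, centres = [None], [np.zeros((1, dim))]  # root centre at the origin
    for i in range(1, len(n)):
        z = rng.integers(n[i - 1], size=n[i])        # each parent chosen uniformly
        disp = np.sqrt(sigmas[i - 1]) * rng.standard_normal((n[i], dim))
        parents.append(z)
        centres.append(centres[i - 1][z] + disp)     # displace from the parent
    return n, parents, centres

n, parents, centres = sample_tree()
print(n)   # layer sizes n_1, ..., n_L
```

The final layer of `centres` plays the role of the observed data x; the intermediate layers are the hidden means that the likelihood calculation of Sec. 2.2 integrates out.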
To calculate the likelihood of x under this model, we need to integrate out the locations of the means of the hidden variables in levels 2 through to L-1. This can be done explicitly; however, we can shorten this calculation by realizing that given Z(n), the generative distribution for the observables x is Gaussian N(0, C). The covariance matrix C can be calculated as follows. Consider two leaf nodes indexed by k and l. The Gaussian RVs that generated the positions of these two leaves can be denoted

$$x_k = w_k^{(1)} + w_k^{(2)} + \cdots + w_k^{(L-1)}, \qquad x_l = w_l^{(1)} + w_l^{(2)} + \cdots + w_l^{(L-1)}.$$

To calculate the covariance between $x_k$ and $x_l$, we simply calculate $\langle x_k x_l \rangle$. This depends crucially on how many of the w's are shared between nodes k and l (cf. path analysis). For example, if $w_k^{(1)} \neq w_l^{(1)}$, i.e. the nodes lie in different branches of the tree at level 1, their covariance is zero. If k = l, the variance is just the sum of the variances of each RV in the tree. In between, the covariance of $x_k$ and $x_l$ can be determined by finding at what level in the tree their common parent occurs. Under these assumptions, the log likelihood $\mathcal{L}$ of x given Z(n) is

$$\mathcal{L} = -\tfrac{1}{2}\mathbf{x}^T C^{-1}\mathbf{x} - \tfrac{1}{2}\log|C| - \tfrac{n_L}{2}\log 2\pi. \quad (1)$$

In fact this calculation can be speeded up by taking account of the tree structure (see e.g. [8]). Note also that the posterior means (and variances) of the hidden variables can be calculated based on the covariances between the hidden and visible nodes. Again, this calculation can be carried out more efficiently; see Pearl [11] (section 7.2) for details.

2.3 Inference for n and Z(n)

Given n we have the problem of trying to infer the connectivity structure Z given the observations x. Of course what we are interested in is the posterior distribution over Z, i.e. $P(Z|\mathbf{x}, \mathbf{n})$. One approach is to use a Markov chain Monte Carlo (MCMC) method to sample from this posterior distribution.
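A small 1-d sketch of this construction (the helper names are mine): `parents[i]` maps each node in level i+1 to its parent in level i, with `parents[0] = None` for the root, and `sigmas[i-1]` is $\sigma_i^2$. Two leaves accumulate $\sigma_i^2$ in their covariance whenever their level-(i+1) ancestors coincide.

```python
import numpy as np

def leaf_covariance(parents, sigmas):
    """Build C by the shared-ancestor rule of Sec. 2.2."""
    L = len(parents)                       # number of levels
    nL = len(parents[-1])                  # number of leaves
    chains = []                            # ancestor of each leaf at levels 2..L
    for leaf in range(nL):
        chain, node = [leaf], leaf
        for lvl in range(L - 1, 1, -1):
            node = parents[lvl][node]
            chain.append(node)
        chains.append(chain[::-1])
    C = np.zeros((nL, nL))
    for k in range(nL):
        for l in range(nL):
            C[k, l] = sum(s for s, a, b in zip(sigmas, chains[k], chains[l]) if a == b)
    return C

def log_likelihood(parents, sigmas, x):
    """Eq. (1): log N(x; 0, C) for the observed leaf positions x."""
    C = leaf_covariance(parents, sigmas)
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * x @ np.linalg.solve(C, x) - 0.5 * logdet - 0.5 * len(x) * np.log(2 * np.pi)

# tiny hand-made 4-level tree: 1 root, 2 level-2 nodes, 3 level-3 nodes, 4 leaves
parents = [None, np.array([0, 0]), np.array([0, 1, 1]), np.array([0, 1, 2, 2])]
sigmas = (4.0, 1.0, 0.25)                  # sigma_1^2 > sigma_2^2 > sigma_3^2
C = leaf_covariance(parents, sigmas)
print(C)
```

For this tree, leaves 2 and 3 share both a level-2 and a level-3 ancestor, so their covariance is 4.0 + 1.0 = 5.0, while each leaf's own variance is the full sum 5.25, matching the "sum of the variances of each RV in the tree" remark above.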
A straightforward way to do this is to use the Metropolis algorithm, where we propose changes in the structure by changing the parent of a single node at a time. Note the similarities of this algorithm to the work of Williams and Adams [14] on Dynamic Trees (DTs); the main differences are (i) that disconnections are not allowed, i.e. we maintain a single tree (rather than a forest), and (ii) that the variables in the DT image models are discrete rather than Gaussian. We also need to consider moves that change n. This can be effected with a split/merge move. In the split direction, consider a node with a parent and several children. Split this node and randomly assign the children to the two split nodes. Each of the split nodes keeps the same parent. The probability of accepting this move under the Metropolis-Hastings scheme is

$$\alpha = \min\left(1, \frac{P(\mathbf{n}', Z(\mathbf{n}')|\mathbf{x})\, Q(\mathbf{n}, Z(\mathbf{n}); \mathbf{n}', Z(\mathbf{n}'))}{P(\mathbf{n}, Z(\mathbf{n})|\mathbf{x})\, Q(\mathbf{n}', Z(\mathbf{n}'); \mathbf{n}, Z(\mathbf{n}))}\right),$$

where $Q(\mathbf{n}', Z(\mathbf{n}'); \mathbf{n}, Z(\mathbf{n}))$ is the proposal probability of configuration $(\mathbf{n}', Z(\mathbf{n}'))$ given configuration $(\mathbf{n}, Z(\mathbf{n}))$. This scheme is based on the work on MCMC model composition (MC³) by Madigan and York [9], and on Green's work on reversible jump MCMC [5]. Another move that changes n is to remove "dangling" nodes, i.e. nodes which have no children. This occurs when all the nodes in a given layer "decide" not to use one or more nodes in the layer above. An alternative to sampling from the posterior is to use approximate inference, such as mean-field methods. These are currently being investigated for DT models [1].

2.4 Related work

There are a very large number of papers on hierarchical clustering; in this work we have focussed on expressing hierarchical clustering in terms of probabilistic models. For example Ambros-Ingerson et al [2] and Mozer [10] developed models where the idea is to cluster data at a coarse level, subtract out the mean and cluster the residuals (recursively).
This paper can be seen as a probabilistic interpretation of this idea. The reconstruction of phylogenetic trees from biological sequence (DNA or protein) information gives rise to the problem of inferring a binary tree from the data. Durbin et al [3] (chapter 8) show how a probabilistic formulation of the problem can be developed, and the link to agglomerative hierarchical clustering algorithms as approximations to the full probabilistic method (see §8.6 in [3]). Much of the biological sequence work uses discrete variables, which diverges somewhat from the focus of the current work. However, work by Edwards (1970) [4] concerns a branching Brownian-motion process, which has some similarities to the model described above. Important differences are that Edwards' model is in continuous time, and that the variances of the particles are derived from a Wiener process (and so have variance proportional to the lifetime of the particle). This is in contrast to the decreasing sequence of variances at a given number of levels assumed in the above model. One important difference between the model discussed in this paper and the phylogenetic tree model is that points in higher levels of the phylogenetic tree are taken to be individuals at an earlier time in evolutionary history, which is not the interpretation we require here. A very different notion of hierarchy in mixture models can be found in the work on the AutoClass system [6]. They describe a model involving class hierarchy and inheritance, but their trees specify over which dimensions sharing of parameters occurs (e.g. means and covariance matrices for Gaussians). In contrast, the model in this paper creates a hierarchy over examples labelled 1, ..., n rather than dimensions. Xu and Pearl [15] discuss the inference of a tree-structured belief network based on knowledge of the covariances of the leaf nodes.
This algorithm cannot be applied directly in our case as the covariances are not known, although we note that if multiple runs from a given tree structure were available the covariances might be approximated using sample estimates. Other ideas concerning hierarchical clustering are discussed in [13] and [7].

3 Experiments

We describe two sets of experiments to explore these ideas.

3.1 Searching over Z with n fixed

100 4-level random trees were generated from the prior, using values of $\lambda_1 = 1.5$, $\lambda_2 = 2$, $\lambda_3 = 3$, and $\sigma_1^2 = 10$, $\sigma_2^2 = 1$, $\sigma_3^2 = 0.01$. These trees had between 4 and 79 leaf nodes, with an average of 30. For each tree n was kept the same as in the generative tree, and sampling was carried out over Z starting from a random initial configuration. A given node proposes changing its parent, and this proposal is accepted or rejected with the usual Metropolis probability. In one sweep, each node in levels 3 and 4 makes such a move. (Level 2 nodes only have one parent so there is no point in such a move there.) To obtain a representative sample of $P(Z(\mathbf{n})|\mathbf{n}, \mathbf{x})$, we should run the chain for as long as possible. However, we can also use the chain to find configurations with high posterior probability, and in this case running for longer only increases the chances of finding a better configuration. In our experiments the sampler was run for 100 sweeps. As $P(Z(\mathbf{n})|\mathbf{n})$ is uniform for fixed n, the posterior is simply proportional to the likelihood term. It would also be possible to run simulated annealing with the same move set to search explicitly for the maximum a posteriori (MAP) configuration. The results are that for 76 of the 100 cases the tree with the highest posterior probability (HPP) configuration had higher posterior probability than the generative tree, for 20 cases the same tree was found and in 4 cases the HPP solution was inferior to the generative tree.
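A minimal 1-d sketch of such a sweep (my own toy setup): the likelihood is evaluated via the shared-ancestor covariance of Sec. 2.2, and since $P(Z(\mathbf{n})|\mathbf{n})$ is uniform and the single-parent proposal is symmetric, the Metropolis acceptance ratio reduces to a likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(3)
sig = np.array([10.0, 1.0, 0.01])        # sigma_1^2, sigma_2^2, sigma_3^2 (Sec. 3.1 values)

def loglik(Z3, Z4, x):
    """log P(x | Z), Eq. (1), for a 4-level tree in 1-d: Z3 maps level-3 nodes
    to level-2 parents, Z4 maps leaves to level-3 parents; two leaves share
    the variance of every level at which they have a common ancestor."""
    a2 = Z3[Z4]                          # level-2 ancestor of each leaf
    C = (sig[0] * (a2[:, None] == a2[None, :])
         + sig[1] * (Z4[:, None] == Z4[None, :])
         + sig[2] * np.eye(len(x)))
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * x @ np.linalg.solve(C, x) - 0.5 * logdet - 0.5 * len(x) * np.log(2 * np.pi)

# toy data: two well-separated groups of leaves
x = np.concatenate([rng.normal(-4, 0.1, 4), rng.normal(4, 0.1, 4)])
n2, n3 = 2, 4
Z3 = rng.integers(n2, size=n3)           # random initial configuration
Z4 = rng.integers(n3, size=len(x))

cur = loglik(Z3, Z4, x)
for sweep in range(100):
    for layer, n_par in ((Z3, n2), (Z4, n3)):
        for j in range(len(layer)):      # each node proposes a new parent
            old = layer[j]
            layer[j] = rng.integers(n_par)
            new = loglik(Z3, Z4, x)
            if np.log(rng.random()) < new - cur:
                cur = new                # accept the move
            else:
                layer[j] = old           # reject and restore the old parent
print(cur)
```

Simulated annealing, as mentioned above, would use the same move set but scale the log-likelihood difference by an inverse temperature.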
The fact that in almost all cases the sampler found a configuration as good or better than the generative one in a relatively small number of sweeps is very encouraging. In Figure 2 the generative (left column) and HPP trees for fixed n (middle column) are plotted for two examples. In panel (b) note the "dangling" node in level 2, which means that the level 3 nodes to the left end up in an inferior configuration to (a). By contrast, in panel (e) the sampler has found a better (less tangled) configuration than the generative model (d).

Figure 2: (a) and (d) show the generative trees for two examples. The corresponding HPP trees for fixed n are plotted in (b) and (e) and those for variable n in (c) and (f). The number in each panel is the log posterior probability of the configuration. The nodes in levels 2 and 3 are shown located at their posterior means. Apparent non-tree structures are caused by two nodes being plotted almost on top of each other.

3.2 Searching over both n and Z

Given some data x we will not usually know appropriate numbers of hidden units. This motivates searching over both Z and n, which can be achieved using the split/merge moves discussed in section 2.3. In the experiments below the initial numbers of units in levels 2 and 3 (denoted $n_2$ and
Experiments were conducted on the same trees used in section 3.1. In this case the results were that for 50 out of the 100 cases, the HPP configuration had higher posterior probability than the generative tree, for 11 cases the same tree was found and in 39 cases the HPP solution was inferior to the generative tree. Overall these results are less good than the ones in section 3.1, but it should be remembered that the search space is now much larger, and so it would be expected that one would need to search longer. Comparing the results from fixed n against those with variable n shows that in 42 out of 100 cases the variable n method gave a higher posterior probability. in 45 cases it was lower and in 13 cases the same trees were found. The rightmost column of Figure 2 shows the HPP configurations when sampling with variable n on the two examples discussed above. In panel (c) the solution found is not very dissimilar to that in panel (b), although the overall probability is lower. In Cf), the solution found uses just one level 2 centre rather than two, and obtains a higher posterior probability than the configurations in (e) and Cd). 4 Discussion The results above indicate that the proposed model behaves sensibly, and that reasonable solutions can be found with relatively short amounts of search. The method has been demonstrated on univariate data, but extending it to multivariate Gaussian data for which each dimension is independent given the tree structure is very easy as the likelihood calculation is independent on each dimension. There are many other directions is which the model can be developed. Firstly, the model as presented has uniform mixing proportions, so that children are equally likely to connect to each potential parent. This can be generalized so that there is a non-uniform vector of connection probabilities in each layer. 
Also, given a tree structure and independent Dirichlet priors over these probability vectors, these parameters can be integrated out analytically. Secondly, the model can be made to generate iid data by regarding the penultimate layer as mixture centres; in this case the term $P(n_L|n_{L-1})$ would be ignored when computing the probability of the tree. Thirdly, it would be possible to add the variance variables to the MCMC scheme, e.g. using the Metropolis algorithm, after defining a suitable prior on the sequence of variances $\sigma_1^2, \ldots, \sigma_{L-1}^2$. The constraint that all variances in the same level are equal could also be relaxed by allowing them to depend on hyperparameters set at every level. Fourthly, there may be improved MCMC schemes that can be devised. For example, in the current implementation the posterior means of the candidate units are not taken into account when proposing merge moves (cf. [5]). Fifthly, for the multivariate Gaussian version we can consider a tree-structured factor analysis model, so that higher levels in the tree need not have the same dimensionality as the data vectors. One can also consider a version where each dimension is a multinomial rather than a continuous variable. In this case one might consider a model where a multinomial parameter vector $\theta^l$ in the tree is generated from its parent by $\theta^l = \gamma\theta^{l-1} + (1-\gamma)\mathbf{r}$, where $\gamma \in [0,1]$ and $\mathbf{r}$ is a random vector of probabilities. An alternative model could be to build a tree-structured prior on the $\alpha$ parameters of the Dirichlet prior for the multinomial distribution.

Acknowledgments

This work is partially supported through EPSRC grant GR/L 78161 Probabilistic Models for Sequences. I thank the Gatsby Computational Neuroscience Unit (UCL) for organizing the "Mixtures Day" in March 1999 and supporting my attendance, and Peter Green, Phil Dawid and Peter Dayan for helpful discussions at the meeting.
I also thank Amos Storkey for helpful discussions and Magnus Rattray for (accidentally!) pointing me towards the chapters on phylogenetic trees in [3]. References [1] N. J. Adams, A. Storkey, Z. Ghahramani, and C. K. I. Williams. MFDTs: Mean Field Dynamic Trees. Submitted to ICPR 2000, 1999. [2] J. Ambros-Ingerson, R. Granger, and G. Lynch. Simulation of Paleocortex Performs Hierarchical Clustering. Science, 247:1344-1348, 1990. [3] R. Durbin, S. Eddy, A. Krogh, and G. Mitchison. Biological Sequence Analysis. Cambridge University Press, Cambridge, UK, 1998. [4] A. W. F. Edwards. Estimation of the Branch Points of a Branching Diffusion Process. Journal of the Royal Statistical Society B, 32(2):155-174, 1970. [5] P. J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82(4):711-732, 1995. [6] R. Hanson, J. Stutz, and P. Cheeseman. Bayesian Classification with Correlation and Inheritance. In IJCAI-91: Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, Sydney, Australia, 1991. [7] T. Hofmann and J. M. Buhmann. Hierarchical Pairwise Data Clustering by Mean-Field Annealing. In F. Fogelman-Soulie and P. Gallinari, editors, Proc. ICANN 95. EC2 et Cie, 1995. [8] M. R. Luettgen and A. S. Willsky. Likelihood Calculation for a Class of Multiscale Stochastic Models, with Application to Texture Discrimination. IEEE Trans. Image Processing, 4(2):194-207, 1995. [9] D. Madigan and J. York. Bayesian Graphical Models for Discrete Data. International Statistical Review, 63:215-232, 1995. [10] M. C. Mozer. Discovering Discrete Distributed Representations with Iterated Competitive Learning. In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3. Morgan Kaufmann, 1991. [11] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, CA, 1988. [12] B. Ripley. Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge, UK, 1996. [13] N. Vasconcelos and A. Lippmann. Learning Mixture Hierarchies. In M. J. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11, pages 606-612. MIT Press, 1999. [14] C. K. I. Williams and N. J. Adams. DTs: Dynamic Trees. In M. J. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11. MIT Press, 1999. [15] L. Xu and J. Pearl. Structuring Causal Tree Models with Continuous Variables. In L. N. Kanal, T. S. Levitt, and J. F. Lemmer, editors, Uncertainty in Artificial Intelligence 3. Elsevier, 1989.
1999
The Infinite Gaussian Mixture Model Carl Edward Rasmussen Department of Mathematical Modelling Technical University of Denmark Building 321, DK-2800 Kongens Lyngby, Denmark carl@imm.dtu.dk http://bayes.imm.dtu.dk Abstract In a Bayesian mixture model it is not necessary a priori to limit the number of components to be finite. In this paper an infinite Gaussian mixture model is presented which neatly sidesteps the difficult problem of finding the "right" number of mixture components. Inference in the model is done using an efficient parameter-free Markov chain that relies entirely on Gibbs sampling. 1 Introduction One of the major advantages in the Bayesian methodology is that "overfitting" is avoided; thus the difficult task of adjusting model complexity vanishes. For neural networks, this was demonstrated by Neal [1996] whose work on infinite networks led to the reinvention and popularisation of Gaussian Process models [Williams & Rasmussen, 1996]. In this paper a Markov Chain Monte Carlo (MCMC) implementation of a hierarchical infinite Gaussian mixture model is presented. Perhaps surprisingly, inference in such models is possible using finite amounts of computation. Similar models are known in statistics as Dirichlet Process mixture models and go back to Ferguson [1973] and Antoniak [1974]. Usually, expositions start from the Dirichlet process itself [West et al., 1994]; here we derive the model as the limiting case of the well-known finite mixtures. Bayesian methods for mixtures with an unknown (finite) number of components have been explored by Richardson & Green [1997], whose methods are not easily extended to multivariate observations. 2 Finite hierarchical mixture The finite Gaussian mixture model with k components may be written as: p(y | μ_1, …, μ_k, s_1, …, s_k, π_1, …, π_k) = Σ_{j=1}^k π_j N(μ_j, s_j^{-1}), (1) where μ_j are the means, s_j the precisions (inverse variances), π_j the mixing proportions (which must be positive and sum to one) and N is a (normalised) Gaussian with specified mean and variance. For simplicity, the exposition will initially assume scalar observations, n of which comprise the training data y = {y_1, …, y_n}. First we will consider these models for a fixed value of k, and later explore the properties in the limit where k → ∞. Gibbs sampling is a well known technique for generating samples from complicated multivariate distributions that is often used in Monte Carlo procedures. In its simplest form, Gibbs sampling is used to update each variable in turn from its conditional distribution given all other variables in the system. It can be shown that Gibbs sampling generates samples from the joint distribution, and that the entire distribution is explored as the number of Gibbs sweeps grows large. We introduce stochastic indicator variables, c_i, one for each observation, whose role is to encode which class has generated the observation; the indicators take on values 1 … k. Indicators are often referred to as "missing data" in a mixture model context. In the following sections the priors on component parameters and hyperparameters will be specified, and the conditional distributions for these, which will be needed for Gibbs sampling, will be derived. In general the form of the priors are chosen to have (hopefully) reasonable modelling properties, with an eye to mathematical convenience (through the use of conjugate priors). 2.1 Component parameters The component means, μ_j, are given Gaussian priors: p(μ_j | λ, r) ~ N(λ, r^{-1}), (2) whose mean, λ, and precision, r, are hyperparameters common to all components.
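The finite mixture likelihood of eq. (1) is straightforward to evaluate directly. As a concrete illustration, here is a minimal NumPy sketch using the paper's precision parameterisation (s_j = 1/σ_j²); the function name and interface are illustrative, not from the paper:

```python
import numpy as np

def mixture_density(y, mu, s, pi):
    """Evaluate the finite Gaussian mixture of eq. (1) at a scalar y:
    p(y) = sum_j pi_j * N(y; mu_j, s_j^{-1}), where s_j is a precision."""
    mu, s, pi = map(np.asarray, (mu, s, pi))
    comps = np.sqrt(s / (2 * np.pi)) * np.exp(-0.5 * s * (y - mu) ** 2)
    return float(np.dot(pi, comps))
```

Because the π_j sum to one and each component is normalised, the result integrates to one over y.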
The hyperparameters themselves are given vague Normal and Gamma priors: p(λ) ~ N(μ_y, σ_y²), p(r) ~ G(1, σ_y^{-2}) ∝ r^{-1/2} exp(-rσ_y²/2), (3) where μ_y and σ_y² are the mean and variance of the observations¹. The shape parameter of the Gamma prior is set to unity, corresponding to a very broad (vague) distribution. The conditional posterior distributions for the means are obtained by multiplying the likelihood from eq. (1) conditioned on the indicators, by the prior, eq. (2): p(μ_j | c, y, s_j, λ, r) ~ N( (ȳ_j n_j s_j + λr)/(n_j s_j + r), 1/(n_j s_j + r) ), ȳ_j = (1/n_j) Σ_{i: c_i = j} y_i, (4) where the occupation number, n_j, is the number of observations belonging to class j, and ȳ_j is the mean of these observations. For the hyperparameters, eq. (2) plays the role of the likelihood which together with the priors from eq. (3) give conditional posteriors of standard form: p(λ | μ_1, …, μ_k, r) ~ N( (μ_y σ_y^{-2} + r Σ_{j=1}^k μ_j)/(σ_y^{-2} + kr), 1/(σ_y^{-2} + kr) ), p(r | μ_1, …, μ_k, λ) ~ G( k + 1, [ (σ_y² + Σ_{j=1}^k (μ_j - λ)²)/(k + 1) ]^{-1} ). (5) The component precisions, s_j, are given Gamma priors: p(s_j | β, w) ~ G(β, w^{-1}), (6) whose shape, β, and mean, w^{-1}, are hyperparameters common to all components, with priors of inverse Gamma and Gamma form: p(β^{-1}) ~ G(1, 1), p(w) ~ G(1, σ_y²). (7) ¹Strictly speaking, the priors ought not to depend on the observations. The current procedure is equivalent to normalising the observations and using unit priors. A wide variety of reasonable priors will lead to similar results. The conditional posterior precisions are obtained by multiplying the likelihood from eq. (1) conditioned on the indicators, by the prior, eq. (6): p(s_j | c, y, μ_j, β, w) ~ G( β + n_j, [ (wβ + Σ_{i: c_i = j} (y_i - μ_j)²)/(β + n_j) ]^{-1} ). (8) For the hyperparameters, eq. (6) plays the role of likelihood which together with the priors from eq. (7) give: p(w | s_1, …, s_k, β) ~ G( kβ + 1, [ (σ_y^{-2} + β Σ_{j=1}^k s_j)/(kβ + 1) ]^{-1} ), p(β | s_1, …, s_k, w) ∝ Γ(β/2)^{-k} exp(-1/(2β)) (β/2)^{(kβ-3)/2} Π_{j=1}^k (s_j w)^{β/2} exp(-βs_j w/2). (9) The latter density is not of standard form, but it can be shown that p(log β | s_1, …, s_k, w) is log-concave, so we may generate independent samples from the distribution for log β using the Adaptive Rejection Sampling (ARS) technique [Gilks & Wild, 1992], and transform these to get values for β. The mixing proportions, π_j, are given a symmetric Dirichlet (also known as multivariate beta) prior with concentration parameter α/k: p(π_1, …, π_k | α) ~ Dirichlet(α/k, …, α/k) = (Γ(α)/Γ(α/k)^k) Π_{j=1}^k π_j^{α/k - 1}, (10) where the mixing proportions must be positive and sum to one. Given the mixing proportions, the prior for the occupation numbers, n_j, is multinomial and the joint distribution of the indicators becomes: p(c_1, …, c_n | π_1, …, π_k) = Π_{j=1}^k π_j^{n_j}, n_j = Σ_{i=1}^n δ_Kronecker(c_i, j). (11) Using the standard Dirichlet integral, we may integrate out the mixing proportions and write the prior directly in terms of the indicators: p(c_1, …, c_n | α) = ∫ p(c_1, …, c_n | π_1, …, π_k) p(π_1, …, π_k) dπ_1 … dπ_k = (Γ(α)/Γ(α/k)^k) ∫ Π_{j=1}^k π_j^{n_j + α/k - 1} dπ_j = (Γ(α)/Γ(n + α)) Π_{j=1}^k Γ(n_j + α/k)/Γ(α/k). (12) In order to be able to use Gibbs sampling for the (discrete) indicators, c_i, we need the conditional prior for a single indicator given all the others; this is easily obtained from eq. (12) by keeping all but a single indicator fixed: p(c_i = j | c_{-i}, α) = (n_{-i,j} + α/k)/(n - 1 + α), (13) where the subscript -i indicates all indexes except i and n_{-i,j} is the number of observations, excluding y_i, that are associated with component j. The posteriors for the indicators are derived in the next section. Lastly, a vague prior of inverse Gamma shape is put on the concentration parameter α: p(α^{-1}) ~ G(1, 1) ⟹ p(α) ∝ α^{-3/2} exp(-1/(2α)). (14) The likelihood for α may be derived from eq. (12), which together with the prior from eq. (14) gives: p(n_1, …, n_k | α) = α^k Γ(α)/Γ(n + α), p(α | k, n) ∝ α^{k - 3/2} exp(-1/(2α)) Γ(α)/Γ(n + α).
(15) Notice that the conditional posterior for α depends only on the number of observations, n, and the number of components, k, and not on how the observations are distributed among the components. The distribution p(log α | k, n) is log-concave, so we may efficiently generate independent samples from this distribution using ARS. 3 The infinite limit So far, we have considered k to be a fixed finite quantity. In this section we will explore the limit k → ∞ and make the final derivations regarding the conditional posteriors for the indicators. For all the model variables except the indicators, the conditional posteriors for the infinite limit are obtained by substituting for k the number of classes that have data associated with them, k_rep, in the equations previously derived for the finite model. For the indicators, letting k → ∞ in eq. (13), the conditional prior reaches the following limits: components where n_{-i,j} > 0: p(c_i = j | c_{-i}, α) = n_{-i,j}/(n - 1 + α); all other components combined: α/(n - 1 + α). (16) This shows that the conditional class prior for components that are associated with other observations is proportional to the number of such observations; the combined prior for all other classes depends only on α and n. Notice how the analytical tractability of the integral in eq. (12) is essential, since it allows us to work directly with the (finite number of) indicator variables, rather than the (infinite number of) mixing proportions. We may now combine the likelihood from eq. (1) conditioned on the indicators with the prior from eq. (16) to obtain the conditional posteriors for the indicators: components for which n_{-i,j} > 0: p(c_i = j | c_{-i}, μ_j, s_j, α) ∝ p(c_i = j | c_{-i}, α) p(y_i | μ_j, s_j, c_{-i}) ∝ (n_{-i,j}/(n - 1 + α)) s_j^{1/2} exp(-s_j (y_i - μ_j)²/2); all other components combined: p(c_i ≠ c_{i'} for all i ≠ i' | c_{-i}, λ, r, β, w, α) ∝ p(c_i ≠ c_{i'} for all i ≠ i' | c_{-i}, α) ∫ p(y_i | μ_j, s_j) p(μ_j, s_j | λ, r, β, w) dμ_j ds_j. (17)
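The limiting conditional prior of eq. (16) is easy to compute. The following sketch (the function name and interface are mine, not the paper's) returns the prior over the represented classes plus one combined entry for all unrepresented classes:

```python
import numpy as np

def crp_indicator_prior(counts_minus_i, alpha, n):
    """Limiting (k -> infinity) conditional indicator prior of eq. (16):
    each occupied component j gets weight n_{-i,j}/(n-1+alpha); the final
    entry is the combined weight alpha/(n-1+alpha) of all unrepresented
    classes."""
    counts = np.asarray(counts_minus_i, dtype=float)
    denom = n - 1 + alpha
    return np.append(counts / denom, alpha / denom)
```

Since the occupation counts excluding observation i sum to n - 1, the returned vector always sums to one.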
The likelihood for components with observations other than y_i currently associated with them is Gaussian with component parameters μ_j and s_j. The likelihood pertaining to the currently unrepresented classes (which have no parameters associated with them) is obtained through integration over the prior distribution for these. Note that we need not differentiate between the infinitely many unrepresented classes, since their parameter distributions are all identical. Unfortunately, this integral is not analytically tractable; I follow Neal [1998], who suggests sampling from the priors (which are Gaussian and Gamma shaped) in order to generate a Monte Carlo estimate of the probability of "generating a new class". Notice that this approach effectively generates parameters (by sampling from the prior) for the classes that are unrepresented. Since this Monte Carlo estimate is unbiased, the resulting chain will sample from exactly the desired distribution, no matter how many samples are used to approximate the integral; I have found that using a single sample works fairly well in many applications. In detail, there are three possibilities when computing conditional posterior class probabilities, depending on the number of observations associated with the class: if n_{-i,j} > 0: there are other observations associated with class j, and the posterior class probability is as given by the top line of eq. (17). if n_{-i,j} = 0 and c_i = j: observation y_i is currently the only observation associated with class j; this is a peculiar situation, since there are no other observations associated with the class, but the class still has parameters. It turns out that this situation should be handled as an unrepresented class, but rather than sampling for the parameters, one simply uses the existing class parameters; consult [Neal 1998] for a detailed derivation.
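The Monte Carlo treatment just described can be sketched as below. Assumptions to note: the helper name is mine, the Gamma draw uses shape β and mean 1/w translated into NumPy's shape/scale parameterisation, and a single prior sample (n_samples=1) is the default, as the text suggests this often suffices:

```python
import numpy as np

def new_class_weight(y_i, alpha, n, lam, r, beta, w, rng, n_samples=1):
    """Monte Carlo estimate of the unnormalised posterior weight for
    assigning y_i to a brand-new class: the prior mass alpha/(n-1+alpha)
    times the likelihood averaged over component parameters drawn from
    their priors, mu ~ N(lam, 1/r) and s ~ Gamma with shape beta and
    mean 1/w (i.e. NumPy scale 1/(beta*w))."""
    mus = rng.normal(lam, r ** -0.5, size=n_samples)
    ss = rng.gamma(beta, 1.0 / (beta * w), size=n_samples)
    lik = np.sqrt(ss / (2 * np.pi)) * np.exp(-0.5 * ss * (y_i - mus) ** 2)
    return alpha / (n - 1 + alpha) * float(np.mean(lik))
```

Normalising this weight together with the occupied-class weights of eq. (17) gives the categorical distribution from which the indicator c_i is resampled.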
unrepresented classes: values for the mixture parameters are picked at random from the prior for these parameters, which is Gaussian for μ_j and Gamma shaped for s_j. Now that all classes have parameters associated with them, we can easily evaluate their likelihoods (which are Gaussian) and the priors, which take the form n_{-i,j}/(n - 1 + α) for components with observations other than y_i associated with them, and α/(n - 1 + α) for the remaining class. When hitherto unrepresented classes are chosen, a new class is introduced in the model; classes are removed when they become empty. 4 Inference; the "spirals" example To illustrate the model, we use the 3 dimensional "spirals" dataset from [Ueda et al., 1998], containing 800 data points, plotted in figure 1. Five data points are generated from each of 160 isotropic Gaussians, whose means follow a spiral pattern. Figure 1: The 800 cases from the three dimensional spirals data. The crosses represent a single (random) sample from the posterior for the mixture model. The k_rep = 20 represented classes account for n/(n + α) ≈ 99.6% of the mass. The lines indicate 2 std. dev. in the Gaussian mixture components; the thickness of the lines represents the mass of the class. To the right, histograms for 100 samples from the posterior for k_rep, α and β are shown. 4.1 Multivariate generalisation The generalisation to multivariate observations is straightforward. The means, μ_j, and precisions, s_j, become vectors and matrices respectively, and their prior (and posterior) distributions become multivariate Gaussian and Wishart. Similarly, the hyperparameter λ becomes a vector (multivariate Gaussian prior) and r and w become matrices with Wishart priors. The β parameter stays scalar, with the prior on (β - D + 1)^{-1} being Gamma with mean 1/D, where D is the dimension of the dataset. All other specifications stay the same.
Setting D = 1 recovers the scalar case discussed in detail. 4.2 Inference The mixture model is started with a single component, and a large number of Gibbs sweeps are performed, updating all parameters and hyperparameters in turn by sampling from the conditional distributions derived in the previous sections. In figure 2 the auto-covariance for several quantities is plotted, which reveals a maximum correlation-length of about 270. Then 30000 iterations are performed for modelling purposes (taking 18 minutes of CPU time on a Pentium PC): 3000 steps initially for "burn-in", followed by 27000 to generate 100 roughly independent samples from the posterior (spaced evenly 270 apart). In figure 1, the represented components of one sample from the posterior are visualised with the data. To the right of figure 1 we see that the posterior number of represented classes is very concentrated around 18-20, and the concentration parameter takes values around α ≈ 3.5, corresponding to only α/(n + α) ≈ 0.4% of the mass of the predictive distribution belonging to unrepresented classes. The shape parameter β takes values around 5-6, which gives the "effective number of points" contributed from the prior to the covariance matrices of the mixture components. 4.3 The predictive distribution Given a particular state in the Markov Chain, the predictive distribution has two parts: the represented classes (which are Gaussian) and the unrepresented classes. As when updating the indicators, we may choose to approximate the unrepresented classes by a finite mixture of Gaussians, whose parameters are drawn from the prior. The final predictive distribution is an average over the (e.g. 100) samples from the posterior. For the spirals data this density has roughly 1900 components for the represented classes plus however many are used to represent the remaining mass. I have not attempted to show this distribution.
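The averaging over posterior samples described in §4.3 amounts to a uniform mixture of per-sample mixture densities. A small sketch for the scalar case (represented classes only, with an illustrative interface that is not from the paper):

```python
import numpy as np

def predictive_density(y, posterior_samples):
    """Average the represented-class mixture density over MCMC samples;
    each sample is a (pi, mu, s) triple of arrays. The small mass of the
    unrepresented classes is ignored here for simplicity."""
    total = 0.0
    for pi, mu, s in posterior_samples:
        pi, mu, s = map(np.asarray, (pi, mu, s))
        total += float(np.dot(pi, np.sqrt(s / (2 * np.pi))
                              * np.exp(-0.5 * s * (y - mu) ** 2)))
    return total / len(posterior_samples)
```

With 100 posterior samples of roughly 19 represented classes each, this is the "roughly 1900 components" mentioned in the text.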
However, one can imagine a smoothed version of the single sample shown in figure 1, from averaging over models with slightly varying numbers of classes and parameters. The (small) mass from the unrepresented classes spreads diffusely over the entire observation range. Figure 2: The left plot shows the auto-covariance length for various parameters in the Markov Chain, based on 10^5 iterations. Only the number of represented classes, k_rep, has a significant correlation; the effective correlation length is approximately 270, computed as the sum of covariance coefficients between lag -1000 and 1000. The right hand plot shows the number of represented classes growing during the initial phase of sampling. The initial 3000 iterations are discarded. 5 Conclusions The infinite hierarchical Bayesian mixture model has been reviewed and extended into a practical method. It has been shown that good performance (without overfitting) can be achieved on multidimensional data. An efficient and practical MCMC algorithm with no free parameters has been derived and demonstrated on an example. The model is fully automatic, without needing specification of parameters of the (vague) prior. This corroborates the falsity of the common misconception that "the only difference between Bayesian and non-Bayesian methods is the prior, which is arbitrary anyway ...". Further tests on a variety of problems reveal that the infinite mixture model produces densities whose generalisation is highly competitive with other commonly used methods.
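The effective correlation length quoted in the Figure 2 caption (the sum of normalised autocovariance coefficients over a symmetric lag window) can be computed with a few lines of NumPy; the function name and window handling are illustrative assumptions:

```python
import numpy as np

def effective_correlation_length(trace, max_lag):
    """Effective correlation length of an MCMC trace: the sum of
    normalised autocovariance coefficients over lags -max_lag..max_lag
    (lag 0 contributes 1, and each nonzero lag is counted twice)."""
    x = np.asarray(trace, dtype=float)
    x = x - x.mean()
    var = np.mean(x * x)
    total = 1.0
    for lag in range(1, max_lag + 1):
        total += 2.0 * np.mean(x[:-lag] * x[lag:]) / var
    return total
```

For an uncorrelated trace this is close to 1; for a sticky chain it approximates the spacing needed between roughly independent samples, which is how the value 270 is used in §4.2.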
Current work is undertaken to explore performance on high dimensional problems, in terms of computational efficiency and generalisation. The infinite mixture model has several advantages over its finite counterpart: 1) in many applications, it may be more appropriate not to limit the number of classes, 2) the number of represented classes is automatically determined, 3) the use of MCMC effectively avoids local minima which plague mixtures trained by optimisation based methods, e.g. EM [Ueda et al., 1998] and 4) it is much simpler to handle the infinite limit than to work with finite models with unknown sizes, as in [Richardson & Green, 1997] or traditional approaches based on extensive cross-validation. The Bayesian infinite mixture model solves simultaneously several long-standing problems with mixture models for density estimation. Acknowledgments Thanks to Radford Neal for helpful comments, and to Naonori Ueda for making the spirals data available. This work is funded by the Danish Research Councils through the Computational Neural Network Center (CONNECT) and the THOR Center for Neuroinformatics. References Antoniak, C. E. (1974). Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. Annals of Statistics 2, 1152-1174. Ferguson, T. S. (1973). A Bayesian analysis of some nonparametric problems. Annals of Statistics 1, 209-230. Gilks, W. R. and P. Wild (1992). Adaptive rejection sampling for Gibbs sampling. Applied Statistics 41, 337-348. Neal, R. M. (1996). Bayesian Learning for Neural Networks, Lecture Notes in Statistics No. 118, New York: Springer-Verlag. Neal, R. M. (1998). Markov chain sampling methods for Dirichlet process mixture models. Technical Report 4915, Department of Statistics, University of Toronto. http://www.cs.toronto.edu/~radford/mixmc.abstract.html. Richardson, S. and P. Green (1997). On Bayesian analysis of mixtures with an unknown number of components. Journal of the Royal Statistical Society B 59, 731-792.
Ueda, N., R. Nakano, Z. Ghahramani and G. E. Hinton (1998). SMEM Algorithm for Mixture Models, NIPS 11. MIT Press. West, M., P. Müller and M. D. Escobar (1994). Hierarchical priors and mixture models with applications in regression and density estimation. In P. R. Freeman and A. F. M. Smith (editors), Aspects of Uncertainty, pp. 363-386. John Wiley. Williams, C. K. I. and C. E. Rasmussen (1996). Gaussian Processes for Regression, in D. S. Touretzky, M. C. Mozer and M. E. Hasselmo (editors), NIPS 8, MIT Press.
1999
Reconstruction of Sequential Data with Probabilistic Models and Continuity Constraints Miguel A. Carreira-Perpiñán Dept. of Computer Science, University of Sheffield, UK miguel@dcs.shef.ac.uk Abstract We consider the problem of reconstructing a temporal discrete sequence of multidimensional real vectors when part of the data is missing, under the assumption that the sequence was generated by a continuous process. A particular case of this problem is multivariate regression, which is very difficult when the underlying mapping is one-to-many. We propose an algorithm based on a joint probability model of the variables of interest, implemented using a nonlinear latent variable model. Each point in the sequence is potentially reconstructed as any of the modes of the conditional distribution of the missing variables given the present variables (computed using an exhaustive mode search in a Gaussian mixture). Mode selection is determined by a dynamic programming search that minimises a geometric measure of the reconstructed sequence, derived from continuity constraints. We illustrate the algorithm with a toy example and apply it to a real-world inverse problem, the acoustic-to-articulatory mapping. The results show that the algorithm outperforms conditional mean imputation and multilayer perceptrons. 1 Definition of the problem Consider a mobile point following a continuous trajectory in a subset of ℝ^D. Imagine that it is possible to obtain a finite number of measurements of the position of the point. Suppose that these measurements are corrupted by noise and that sometimes part of, or all, the variables are missing. The problem considered here is to reconstruct the sequence from the part of it which is observed. In the particular case where the present variables and the missing ones are the same for every point, the problem is one of multivariate regression. If the pattern of missing variables is more general, the problem is one of missing data reconstruction.
Consider the problem of regression. If the present variables uniquely identify the missing ones at every point of the data set, the problem can be adequately solved by a universal function approximator, such as a multilayer perceptron. In a probabilistic framework, the conditional mean of the missing variables given the present ones will minimise the average squared reconstruction error [3]. However, if the underlying mapping is one-to-many, there will be regions in the space for which the present variables do not identify uniquely the missing ones. In this case, the conditional mean mapping will fail, since it will give a compromise value, an average of the correct ones. Inverse problems, where the inverse of a mapping is one-to-many, are of this type. They include the acoustic-to-articulatory mapping in speech [15], where different vocal tract shapes may produce the same acoustic signal, or the robot arm problem [2], where different configurations of the joint angles may place the hand in the same position. In some situations, data reconstruction is a means to some other objective, such as classification or inference. Here, we deal solely with data reconstruction of temporally continuous sequences according to the squared error. Our algorithm does not apply to data sets that either lack continuity (e.g. discrete variables) or have lost it (e.g. due to undersampling or shuffling). We follow a statistical learning approach: we attempt to reconstruct the sequence by learning the mapping from a training set drawn from the probability distribution of the data, rather than by solving a physical model of the system. Our algorithm can be described briefly as follows. First, a joint density model of the data is learned in an unsupervised way from a sample of the data¹.
Then, pointwise reconstruction is achieved by computing all the modes of the conditional distribution of the missing variables given the present ones at the current point. In principle, any of these modes is potentially a plausible reconstruction. When reconstructing a sequence, we repeat this mode search for every point in the sequence, and then find the combination of modes that minimises a geometric sequence measure, using dynamic programming. The sequence measure is derived from local continuity constraints, e.g. the curve length. The algorithm is detailed in §2 to §4. We illustrate it with a 2D toy problem in §5 and apply it to an acoustic-to-articulatory-like problem in §6. §7 discusses the results and compares the approach with previous work. Our notation is as follows. We represent the observed variables in vector form as t = (t_1, …, t_D) ∈ ℝ^D. A data set (possibly a temporal sequence) is represented as {t_n}_{n=1}^N. Groups of variables are represented by sets of indices I, J ⊆ {1, …, D}, so that if I = {1, 7, 3}, then t_I = (t_1, t_7, t_3). 2 Joint generative modelling using latent variables Our starting point is a joint probability model of the observed variables p(t). From it, we can compute conditional distributions of the form p(t_J | t_I) and, by picking representative points, derive a (multivalued) mapping t_I → t_J. Thus, contrarily to other approaches, e.g. [6], we adopt multiple pointwise imputation. In §4 we show how to obtain a single reconstructed sequence of points. Although density estimation requires more parameters than mapping approximation, it has a fundamental advantage [6]: the density model represents the relation between any variables, which allows us to choose any missing/present variable combination. A mapping approximator treats some variables asymmetrically as inputs (present) and the rest as outputs (missing) and can't easily deal with other relations.
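For a Gaussian or Gaussian-mixture joint model, the conditional p(t_J | t_I) is available in closed form: each component is conditioned with the standard Gaussian formulas, and its weight is rescaled by how well it explains the present values. A sketch for full-covariance components (the interface and names are illustrative assumptions, not the paper's code):

```python
import numpy as np

def condition_gaussian_mixture(pi, mus, covs, idx_p, t_p):
    """Condition a full-covariance Gaussian mixture p(t) on the present
    variables t_I. Returns the weights, means and covariances of the
    mixture form of p(t_J | t_I): each component is conditioned with the
    standard Gaussian formulas and reweighted by its marginal density at
    the observed values."""
    idx_p = np.asarray(idx_p)
    d = len(mus[0])
    idx_m = np.array([i for i in range(d) if i not in set(idx_p.tolist())])
    log_w, out_mu, out_cov = [], [], []
    for w, mu, C in zip(pi, mus, covs):
        Cpp = C[np.ix_(idx_p, idx_p)]
        Cmp = C[np.ix_(idx_m, idx_p)]
        Cmm = C[np.ix_(idx_m, idx_m)]
        diff = t_p - mu[idx_p]
        sol = np.linalg.solve(Cpp, diff)
        # log of w times the component's marginal density of t_I
        logdet = np.linalg.slogdet(Cpp)[1]
        log_w.append(np.log(w) - 0.5 * (diff @ sol + logdet
                                        + len(idx_p) * np.log(2 * np.pi)))
        out_mu.append(mu[idx_m] + Cmp @ sol)
        out_cov.append(Cmm - Cmp @ np.linalg.solve(Cpp, Cmp.T))
    log_w = np.asarray(log_w)
    w_new = np.exp(log_w - log_w.max())
    return w_new / w_new.sum(), out_mu, out_cov
```

Because conditioning works for any split of the indices into present and missing sets, the same density model handles every missing-data pattern, which is the advantage over a fixed input/output mapping approximator noted above.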
The existence of functional relationships (even one-to-many) between the observed variables indicates that the data must span a low-dimensional manifold in the data space. This suggests the use of latent variable models for modelling the joint density. However, it is possible to use other kinds of density models. In latent variable modelling the assumption is that the observed high-dimensional data t is generated from an underlying low-dimensional process defined by a small number L of latent variables x = (x_1, …, x_L) [1]. The latent variables are mapped by a fixed transformation into a D-dimensional data space and noise is added there. ¹In our examples we only use complete training data (i.e., with no missing data), but it is perfectly possible to estimate a probability model with incomplete training data by using an EM algorithm [6]. A particular model is specified by three parametric elements: a prior distribution in latent space p(x), a smooth mapping f from latent space to data space and a noise model in data space p(t|x). Marginalising the joint probability density function p(t, x) over the latent space gives the distribution in data space, p(t). Given an observed sample in data space {t_n}_{n=1}^N, a parameter estimate can be found by maximising the log-likelihood, typically using an EM algorithm. We consider the following latent variable models, both of which allow easy computation of conditional distributions of the form p(t_J | t_I): Factor analysis [1], in which the mapping is linear, the prior in latent space is unit Gaussian and the noise model is diagonal Gaussian. The density in data space is then Gaussian with a constrained covariance matrix. We use it as a baseline for comparison with more sophisticated models.
The generative topographic mapping (GTM) [4] is a nonlinear latent variable model, where the mapping is a generalised linear model, the prior in latent space is discrete uniform and the noise model is isotropic Gaussian. The density in data space is then a constrained mixture of isotropic Gaussians. In latent variable models that sample the latent space prior distribution (like GTM), the mixture centroids in data space (associated to the latent space samples) are not trainable parameters. We can then improve the density model at a higher computational cost with no generalisation loss by increasing the number of mixture components. Note that the number of components required will depend exponentially on the intrinsic dimensionality of the data (ideally coincident with that of the latent space, L) and not on the observed one, D. 3 Exhaustive mode finding Given a conditional distribution p(t_J | t_I), we consider all its modes as plausible predictions for t_J. This requires an exhaustive mode search in the space of t_J. For Gaussian mixtures, we do this by using a maximisation algorithm starting from each centroid², such as a fixed-point iteration or gradient ascent combined with quadratic optimisation [5]. In the particular case where all variables are missing, rather than performing a mode search, we return as predictions all the component centroids. It is also possible to obtain error bars at each mode by locally approximating the density function by a normal distribution. However, if the dimensionality of t_J is high, the error bars become very wide due to the curse of dimensionality. An advantage of multiple pointwise imputation is the easy incorporation of extra constraints on the missing variables. Such constraints might include keeping only those modes that lie in an interval dependent on the present variables [8] or discarding low-probability (spurious) modes, which speeds up the reconstruction algorithm and may make it more robust.
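For the isotropic-Gaussian mixtures produced by GTM, the fixed-point iteration mentioned above has a particularly simple form, x ← Σ_j p(j|x) μ_j. A sketch of the exhaustive search under an assumed common known variance (the function name and convergence thresholds are illustrative):

```python
import numpy as np

def find_modes(pi, mus, var, tol=1e-8, max_iter=1000):
    """Exhaustive mode search in an isotropic Gaussian mixture with common
    variance var: run the fixed-point iteration x <- sum_j p(j|x) mu_j
    from every centroid and keep the distinct convergence points."""
    pi, mus = np.asarray(pi, float), np.asarray(mus, float)
    modes = []
    for x in mus.copy():
        for _ in range(max_iter):
            # posterior responsibilities p(j|x), computed stably in logs
            logr = np.log(pi) - 0.5 * np.sum((x - mus) ** 2, axis=-1) / var
            r = np.exp(logr - logr.max())
            r /= r.sum()
            x_new = r @ mus
            done = np.linalg.norm(x_new - x) < tol
            x = x_new
            if done:
                break
        if not any(np.linalg.norm(x - m) < 1e-4 for m in modes):
            modes.append(x)
    return modes
```

Starting from every centroid is what makes the search exhaustive; well-separated components yield one mode each, while nearby components merge into a shared mode.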
A faster way to generate representative points of p(t_J | t_I) is simply to draw a fixed number of samples from it, which may also give robustness to poor density models. However, in practice this resulted in a higher reconstruction error. 4 Continuity constraints and dynamic programming (DP) search Application of the exhaustive mode search to the conditional distribution at every point of the sequence produces one or more candidate reconstructions per point. ²Actually, given a value of t_I, most centroids have negligible posterior probability and can be removed from the mixture with practically no loss of accuracy. Thus, a large number of mixture components may be used without deteriorating excessively the computational efficiency. Table 1: Trajectory reconstruction for a 2D problem. The table gives the average squared reconstruction error when t_2 is missing (row 1), t_1 is missing (row 2), exactly one variable per point is missing at random (row 3) or a percentage of the values are missing at random (rows 4-6). The graph shows the reconstructed trajectory when t_1 is missing: factor analysis (straight, dotted line), mean (thick, dashed), dpmode (superimposed on the trajectory).

Missing pattern   Factor analysis   MLP*      GTM mean   GTM dpmode   GTM cmode
t_2               3.8902            0.2046    0.2044     0.2168       0.2168
t_1               4.3226            2.5126    2.4224     0.0522       0.0522
t_1 or t_2        4.2020            n/a       1.2963     0.1305       0.1305
10%               1.0983            n/a       0.3970     0.0253       0.0251
50%               6.2914            n/a       4.6530     0.1176       0.0771
90%               21.4942           n/a       20.7877    2.2261       0.0643
*The MLP cannot be applied to varying patterns of missing data.

To select a
Then we define a global geometric measure L for a sequence {t_n}_{n=1}^N as L({t_n}_{n=1}^N) =def Σ_{n=1}^{N-1} δ(t_n, t_{n+1}). We take δ as the Euclidean distance, so L becomes simply the length of the sequence (considered as a polygonal line). Finding the sequence of modes with minimal L is efficiently achieved by dynamic programming.

5 Results with a toy problem

To illustrate the algorithm, we generated a 2D data set from the curve (t1, t2) = (x, x + 3 sin(x)) for x in [-2π, 2π], with normal isotropic noise (standard deviation 0.2) added. Thus, the mapping t1 → t2 is one-to-one but the inverse one, t2 → t1, is multivalued. One-dimensional factor analysis (6 parameters) and GTM models (21 parameters) were estimated from a 1000-point sample, as well as two 48-hidden-unit multilayer perceptrons (98 parameters), one for each mapping. For GTM we tried several strategies to select points from the conditional distribution: mean (the conditional mean), dpmode (the mode selected by dynamic programming) and cmode (the closest mode to the actual value of the missing variable). The cmode, unknown in practice, is used here to compute a lower bound on the performance of any mode-based strategy. Other strategies, such as picking the global mode, a random mode, or using a local (greedy) search instead of dynamic programming, gave worse results than the dpmode. Table 1 shows the results for reconstructing a 100-point trajectory. The nonlinear nature of the problem causes factor analysis to break down in all cases. For the one-to-one mapping case (t2 missing) all the other methods perform well and recover the original trajectory, with mean attaining the lowest error, as predicted by the theory³. For the one-to-many case (t1 missing, see figure), both the MLP and the mean are unable to track more than one branch of the mapping, but the dpmode still recovers the original mapping.
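The dynamic programming selection of one candidate mode per time step, minimising the total length L of the resulting polygonal line, can be sketched as a Viterbi-style search (illustrative code with names of our choosing, not the author's implementation); its cost is O(NM²) in the number of points N and modes per point M:

```python
import numpy as np

def dp_select(candidates):
    """Pick one candidate per time step so that the total Euclidean length
    of the resulting sequence is minimal. `candidates` is a list of
    (M_n, D) arrays, one array of candidate modes per time step."""
    N = len(candidates)
    # cost[m] = length of the best partial path ending in candidate m
    cost = np.zeros(len(candidates[0]))
    back = []
    for n in range(1, N):
        # pairwise distances between step n-1 and step n candidates
        d = np.linalg.norm(candidates[n - 1][:, None, :]
                           - candidates[n][None, :, :], axis=2)
        total = cost[:, None] + d
        back.append(np.argmin(total, axis=0))  # best predecessor per candidate
        cost = np.min(total, axis=0)
    # backtrack the optimal path from the cheapest final candidate
    path = [int(np.argmin(cost))]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    path.reverse()
    return [candidates[n][path[n]] for n in range(N)]
```

Swapping the Euclidean distance for another δ only changes the `d` computation, so other geometric measures fit the same skeleton.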
For random missing patterns⁴, the dpmode is able to cope well with high amounts of missing data. The consistently low error of the cmode shows that the modes contain important information about the possible options to predict the missing values. The performance of the dpmode, close to that of the cmode even for large amounts of missing data, shows that application of the continuity constraint allows us to recover that information.

³ A combined strategy could retain the optimality of the mean in the one-to-one case and the advantage of the modes in the one-to-many case, by choosing the conditional mean (rather than the mode) when the conditional distribution is unimodal, and all the modes otherwise.

418 M. A. Carreira-Perpiñán

Missing    Factor     GTM
pattern    analysis   mean      dpmode    cmode
PLP        0.9165     0.6217    0.6250    0.4587
EPG        3.7177     2.3729    2.0613    1.0538
10%        0.2046     0.0947    0.0903    0.0841
50%        1.1285     0.7540    0.6527    0.6023
blocks     0.1950     0.1669    0.1005    0.0925

Table 2: Average squared reconstruction error for an utterance. The last row corresponds to a missing pattern of square blocks totalling 10% of the utterance.

6 Results with real speech data

We report a preliminary experiment using acoustic and electropalatographic (EPG) data⁵ for the utterance "Put your hat on the hatrack and your coat in the cupboard" (speaker FG) from the ACCOR database [10]. 12th-order perceptual linear prediction coefficients [7] plus the log-energy were computed at 200 Hz from its acoustic waveform. The EPG data consisted of 62-bit frames sampled at 200 Hz, which we consider as 62-dimensional vectors of real numbers. No further preprocessing of the data was carried out. Thus, the resulting sequence consisted of over 600 75-dimensional real vectors. We constructed a training set by picking, in random order, 80% of these vectors. The whole utterance was used for the reconstruction test.
We trained two density models: a 9-dimensional factor analysis (825 parameters) and a two-dimensional⁶ GTM (3676 parameters) with a 20 x 20 grid (resulting in a mixture of 400 isotropic Gaussians in the 75-dimensional data space). Table 2 confirms again that the linear method (factor analysis) fares worst (despite its use of a latent space of dimension L = 9). The dpmode almost always attains a lower error than the conditional mean, with up to a 40% improvement (the larger, the higher the amount of missing data). When a shuffled version of the utterance (thus having lost its continuity) was reconstructed, the error of the dpmode was consistently higher than that of the mean, indicating that the application of the continuity constraint was responsible for the error decrease.

7 Discussion

Using a joint probability model allows flexible construction of predictive distributions for the missing data: varying patterns of missing data and multiple pointwise imputations are possible, as opposed to standard function approximators. We have shown that the modes of the conditional distribution of the missing variables given the present ones are potentially plausible reconstructions of the missing values, and that the application of local continuity constraints, when they hold, can help to recover the actually plausible ones.

⁴ Note that the nature of the missing pattern (missing at random, missing completely at random, etc. [9]) does not matter for reconstruction, although it does for estimation.
⁵ An EPG datum is the (binary) contact pattern between the tongue and the palate at selected locations in the latter. Note that it is an incomplete articulatory representation of speech.
⁶ A latent space of 2 dimensions is clearly too low for this data, but the computational complexity of GTM prevents the use of a higher one. Still, its nonlinear character partly compensates for this.
Previous work. The key aspects of our approach are the use of a joint density model (learnt in an unsupervised way), the exhaustive mode search, the definition of a geometric trajectory measure derived from continuity constraints, and its implementation by dynamic programming. Several of these ideas have been applied earlier in the literature, which we review briefly.

The use of the joint density model for prediction is the basis of the statistical technique of multiple imputation [9]. Here, several versions of the complete data set are generated from the appropriate conditional distributions, analysed by standard complete-data methods, and the results combined to produce inferences that incorporate missing-data uncertainty. Ghahramani and Jordan [6] also proposed the use of the joint density model to generate a single estimate of the missing variables and applied it to a classification problem. Conditional distributions have been approximated by MLPs rather than by density estimation [16], but this lacks flexibility to varying patterns of missing data and requires an extra model of the distribution of the input variables (unless assumed uniform). Rohwer and van der Rest [12] introduce a cost function with a description-length interpretation whose minimum is approximated by the densest mode of a distribution. A neural network trained with this cost function can learn one branch of a multivalued mapping, but is unable to select other branches which may be correct at a given time.

Continuity constraints implemented via dynamic programming have been used for the acoustic-to-articulatory mapping problem [15]. Reasonable results (better than using an MLP to approximate the mapping) can be obtained using a large codebook of acoustic and articulatory vectors. Rahim et al. [11] achieve similar quality with much smaller computational requirements using an assembly of MLPs, each one trained in a different area of the acoustic-articulatory space, to locally approximate the mapping.
However, clustering the space is heuristic (with no guarantee that the mapping is one-to-one in each region) and training the assembly is difficult. It also lacks flexibility to varying missingness patterns. A number of trajectory measures have been used in the robot arm problem literature [2] and minimised by dynamic programming, such as the energy, torque, acceleration, jerk, etc.

Temporal modelling. It is important to remark that our approach does not attempt to model the temporal evolution of the system. The joint probability model is estimated statically. The temporal aspect of the data appears indirectly and a posteriori through the application of the continuity constraints to select a trajectory⁷. In this respect, our approach differs from that of dynamical systems or from models based on Markovian assumptions, such as hidden Markov models or other trajectory models [13, 14]. However, the fact that the duration or speed of the trajectory plays no role in the algorithm may make it invariant to time warping (e.g. robust to fast/slow speech styles).

Choice of density model. The fact that the modes are a key aspect of our approach makes it sensitive to the density model. With finite mixtures, spurious modes can appear as ripple superimposed on the density function in regions where the mixture components are sparsely distributed and have little interaction. Such modes can lead the DP search to a wrong trajectory. Possible solutions are to improve the density model (perhaps by increasing the number of components, see §2, or by regularisation), to smooth the conditional distribution, or to look for bumps (regions of high probability mass) instead of modes.

⁷ However, the method may be derived by assuming a distribution over the whole sequence with a normal, Markovian dependence between adjacent frames.
Computational cost. The DP search has complexity O(NM²), where M is an average of the number of modes per sequence point and N the number of points in the sequence. In our experiments M is usually small and the DP search is fast even for long sequences. The bottleneck of the reconstruction part of the algorithm is obtaining the modes of the conditional distribution for every point in the sequence when there are many missing variables.

Further work. We envisage more thorough experiments using data from the Wisconsin X-ray microbeam database and comparing with recurrent MLPs or an MLP committee, which may be more suitable for multivalued mappings. Extensions of our algorithm include different geometric measures (e.g. curvature-based rather than length-based), different strategies for multiple pointwise imputation (e.g. bump searching) or multidimensional constraints (e.g. temporal and spatial). Other practical applications include audiovisual mappings for speech, hippocampal place cell reconstruction and wind vector retrieval from scatterometer data.

Acknowledgments. We thank Steve Renals for useful conversations and for comments about this paper.

References
[1] D. J. Bartholomew. Latent Variable Models and Factor Analysis. Charles Griffin & Company Ltd., London, 1987.
[2] N. Bernstein. The Coordination and Regulation of Movements. Pergamon, Oxford, 1967.
[3] C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[4] C. M. Bishop, M. Svensen, and C. K. I. Williams. GTM: The generative topographic mapping. Neural Computation, 10(1):215-234, Jan. 1998.
[5] M. A. Carreira-Perpiñán. Mode-finding in Gaussian mixtures. Technical Report CS-99-03, Dept. of Computer Science, University of Sheffield, UK, Mar. 1999. Available online at http://www.dcs.shef.ac.uk/~miguel/papers/cs-99-03.html.
[6] Z. Ghahramani and M. I. Jordan. Supervised learning from incomplete data via an EM approach. In NIPS 6, pages 120-127, 1994.
[7] H.
Hermansky. Perceptual linear predictive (PLP) analysis of speech. J. Acoust. Soc. Amer., 87(4):1738-1752, Apr. 1990.
[8] L. Josifovski, M. Cooke, P. Green, and A. Vizinho. State based imputation of missing data for robust speech recognition and speech enhancement. In Proc. Eurospeech 99, pages 2837-2840, 1999.
[9] R. J. A. Little and D. B. Rubin. Statistical Analysis with Missing Data. John Wiley & Sons, New York, London, Sydney, 1987.
[10] A. Marchal and W. J. Hardcastle. ACCOR: Instrumentation and database for the cross-language study of coarticulation. Language and Speech, 36(2-3):137-153, 1993.
[11] M. G. Rahim, C. C. Goodyear, W. B. Kleijn, J. Schroeter, and M. M. Sondhi. On the use of neural networks in articulatory speech synthesis. J. Acoust. Soc. Amer., 93(2):1109-1121, Feb. 1993.
[12] R. Rohwer and J. C. van der Rest. Minimum description length, regularization, and multimodal data. Neural Computation, 8(3):595-609, Apr. 1996.
[13] S. Roweis. Constrained hidden Markov models. In NIPS 12 (this volume), 2000.
[14] L. K. Saul and M. G. Rahim. Markov processes on curves for automatic speech recognition. In NIPS 11, pages 751-757, 1999.
[15] J. Schroeter and M. M. Sondhi. Techniques for estimating vocal-tract shapes from the speech signal. IEEE Trans. Speech and Audio Process., 2(1):133-150, Jan. 1994.
[16] V. Tresp, R. Neuneier, and S. Ahmad. Efficient methods for dealing with missing data in supervised learning. In NIPS 7, pages 689-696, 1995.
|
1999
|
94
|
1,747
|
Learning the Similarity of Documents: An Information-Geometric Approach to Document Retrieval and Categorization

Thomas Hofmann
Department of Computer Science, Brown University, Providence, RI
hofmann@cs.brown.edu, www.cs.brown.edu/people/th

Abstract

The project pursued in this paper is to develop from first information-geometric principles a general method for learning the similarity between text documents. Each individual document is modeled as a memoryless information source. Based on a latent class decomposition of the term-document matrix, a low-dimensional (curved) multinomial subfamily is learned. From this model a canonical similarity function, known as the Fisher kernel, is derived. Our approach can be applied to unsupervised and supervised learning problems alike. This in particular covers interesting cases where both labeled and unlabeled data are available. Experiments in automated indexing and text categorization verify the advantages of the proposed method.

1 Introduction

The computer-based analysis and organization of large document repositories is one of today's great challenges in machine learning, a key problem being the quantitative assessment of document similarities. A reliable similarity measure would provide answers to questions like: How similar are two text documents, and which documents match a given query best? At a time when searching in huge on-line (hyper-)text collections like the World Wide Web becomes more and more popular, the relevance of these and related questions need not be further emphasized. The focus of this work is on data-driven methods that learn a similarity function based on a training corpus of text documents without requiring domain-specific knowledge. Since we do not assume that labels for text categories, document classes, or topics, etc. are given at this stage, the former is by definition an unsupervised learning problem.
In fact, the general problem of learning object similarities precedes many "classical" unsupervised learning methods like data clustering that already presuppose the availability of a metric or similarity function. In this paper, we develop a framework for learning similarities between text documents from first principles. In doing so, we try to span a bridge from the foundations of statistics in information geometry [13, 1] to real-world applications in information retrieval and text learning, namely ad hoc retrieval and text categorization. Although the developed general methodology is not limited to text documents, we will for the sake of concreteness restrict our attention exclusively to this domain.

Learning the Similarity of Documents 915

2 Latent Class Decomposition

Memoryless Information Sources. Assume we have available a set of documents D = {d_1, ..., d_N} over some fixed vocabulary of words (or terms) W = {w_1, ..., w_M}. In an information-theoretic perspective, each document d_i can be viewed as an information source, i.e. a probability distribution over word sequences. Following common practice in information retrieval, we will focus on the more restricted case where text documents are modeled on the level of single word occurrences. This means that we adopt the bag-of-words view and treat documents as memoryless information sources.¹

A. Modeling assumption: Each document is a memoryless information source.

This assumption implies that each document can be represented by a multinomial probability distribution P(w_j|d_i), which denotes the (unigram) probability that a generic word occurrence in document d_i will be w_j. Correspondingly, the data can be reduced to some simple sufficient statistics, which are counts n(d_i, w_j) of how often a word w_j occurred in a document d_i. The rectangular N × M matrix with coefficients n(d_i, w_j) is also called the term-document matrix.
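As a small illustration of assumption A, the sufficient statistics n(d_i, w_j) and the empirical unigram distributions can be computed as follows (helper and variable names are ours, not the paper's):

```python
from collections import Counter

def term_doc_stats(docs, vocab):
    """Build the term-document count matrix n(d_i, w_j) and the empirical
    unigram distributions (counts normalised by document length), under the
    bag-of-words view. `docs` is a list of token lists; `vocab` a word list."""
    index = {w: j for j, w in enumerate(vocab)}
    n = [[0] * len(vocab) for _ in docs]
    for i, doc in enumerate(docs):
        for w, c in Counter(doc).items():
            if w in index:
                n[i][index[w]] = c
    # normalise each row by the document length
    p = [[c / max(sum(row), 1) for c in row] for row in n]
    return n, p
```

Word order within a document plays no role here, which is exactly what the memoryless assumption discards.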
Latent Class Analysis. Latent class analysis is a decomposition technique for contingency tables (cf. [5, 3] and the references therein) that has been applied to language modeling [15] ("aggregate Markov model") and in information retrieval [7] ("probabilistic latent semantic analysis"). In latent class analysis, an unobserved class variable z_k ∈ Z = {z_1, ..., z_K} is associated with each observation, i.e. with each word occurrence (d_i, w_j). The joint probability distribution over D × W is a mixture model that can be parameterized in two equivalent ways:

P(d_i, w_j) = Σ_{k=1}^K P(z_k) P(d_i|z_k) P(w_j|z_k) = P(d_i) Σ_{k=1}^K P(w_j|z_k) P(z_k|d_i).   (1)

The latent class model (1) introduces a conditional independence assumption, namely that d_i and w_j are independent conditioned on the state of the associated latent variable. Since the cardinality of Z is typically smaller than the number of documents/words in the collection, z_k acts as a bottleneck variable in predicting words conditioned on the context of a particular document. To give the reader a more intuitive understanding of the latent class decomposition, we have visualized a representative subset of 16 "factors" from a K = 64 latent class model fitted from the Reuters21578 collection (cf. Section 4) in Figure 1. Intuitively, the learned parameters seem to be very meaningful in that they represent identifiable topics and capture the corresponding vocabulary quite well. By using the latent class decomposition to model a collection of memoryless sources, we implicitly assume that the overall collection will help in estimating parameters for individual sources, an assumption which has been validated in our experiments.

B. Modeling assumption: Parameters for a collection of memoryless information sources are estimated by latent class decomposition.
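A minimal sketch of fitting model (1) by maximum likelihood with the standard EM updates for this model (illustrative dense-array code, not the paper's implementation, and without any smoothing or tempering):

```python
import numpy as np

def plsa_em(n, K, iters=100, seed=0):
    """Fit the latent class model P(d,w) = sum_k P(z_k) P(d|z_k) P(w|z_k)
    by EM on a term-document count matrix n of shape (N, M)."""
    rng = np.random.default_rng(seed)
    N, M = n.shape
    p_z = np.full(K, 1.0 / K)                          # P(z_k)
    p_d_z = rng.random((N, K)); p_d_z /= p_d_z.sum(0)  # P(d_i|z_k)
    p_w_z = rng.random((M, K)); p_w_z /= p_w_z.sum(0)  # P(w_j|z_k)
    for _ in range(iters):
        # E-step: posteriors P(z_k|d_i, w_j), shape (N, M, K)
        post = p_z * p_d_z[:, None, :] * p_w_z[None, :, :]
        post /= post.sum(axis=2, keepdims=True) + 1e-300
        # M-step: reweight the posteriors by the counts n(d_i, w_j)
        weighted = n[:, :, None] * post
        p_d_z = weighted.sum(axis=1); p_d_z /= p_d_z.sum(0) + 1e-300
        p_w_z = weighted.sum(axis=0); p_w_z /= p_w_z.sum(0) + 1e-300
        p_z = weighted.sum(axis=(0, 1)); p_z /= p_z.sum() + 1e-300
    return p_z, p_d_z, p_w_z
```

The dense (N, M, K) posterior array is only practical for small collections; a real implementation would iterate over the nonzero counts instead.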
Parameter Estimation. The latent class model has an important geometrical interpretation: the parameters φ_j^k ≡ P(w_j|z_k) define a low-dimensional subfamily of the multinomial family, S(φ) ≡ {π ∈ [0,1]^M : π_j = Σ_k ψ_k φ_j^k for some ψ ∈ [0,1]^K with Σ_k ψ_k = 1}, i.e. all multinomials π that can be obtained as convex combinations of the set of "basis" vectors {φ^k : 1 ≤ k ≤ K}. For given φ-parameters, each ψ^i, with ψ_k^i ≡ P(z_k|d_i), will define a unique multinomial distribution π^i ∈ S(φ).

¹ Extensions to the more general case are possible, but beyond the scope of this paper.

916 T. Hofmann

[Figure 1: sixteen 10-word lists, one per factor; topics include government, banking, gold/mining, oil/crude, airlines, agriculture, etc.]

Figure 1: 16 selected factors from a 64-factor decomposition of the Reuters21578 collection. The displayed terms are the 10 most probable words in the class-conditional distribution P(w_j|z_k) for 16 selected states z_k after the exclusion of stop words.
Since S(φ) defines a submanifold on the multinomial simplex, it corresponds to a curved exponential subfamily.² We would like to emphasize that we propose to learn both the parameters within the family (the ψ's, or mixing proportions P(z_k|d_i)) and the parameters that define the subfamily (the φ's, or class-conditionals P(w_j|z_k)). The standard procedure for maximum likelihood estimation in latent variable models is the Expectation Maximization (EM) algorithm. In the E-step one computes posterior probabilities for the latent class variables,

P(z_k|d_i, w_j) = P(z_k) P(d_i|z_k) P(w_j|z_k) / Σ_l P(z_l) P(d_i|z_l) P(w_j|z_l).   (2)

The M-step formulae can be written compactly as

P(d_i|z_k) ∝ Σ_{n=1}^N Σ_{m=1}^M n(d_n, w_m) P(z_k|d_n, w_m) δ_in,
P(w_j|z_k) ∝ Σ_{n=1}^N Σ_{m=1}^M n(d_n, w_m) P(z_k|d_n, w_m) δ_jm,   (3)
P(z_k) ∝ Σ_{n=1}^N Σ_{m=1}^M n(d_n, w_m) P(z_k|d_n, w_m),

where δ denotes the Kronecker delta.

Related Models. As demonstrated in [7], the latent class model can be viewed as a probabilistic variant of Latent Semantic Analysis [2], a dimension reduction technique based on Singular Value Decomposition. It is also closely related to the non-negative matrix decomposition discussed in [12], which uses a Poisson sampling model and has been motivated by imposing non-negativity constraints on a decomposition by PCA. The relationship of the latent class model to clustering models like distributional clustering [14] has been investigated in [8]. [6] presents yet another approach to dimension reduction for multinomials which is based on spherical models, a different type of curved exponential subfamily than the one presented here, which is affine in the mean-value parameterization.

² Notice that graphical models with latent variables are in general stratified exponential families [4], yet in our case the geometry is simpler. The geometrical view also illustrates the well-known identifiability problem in latent class analysis. The interested reader is referred to [3].
As a practical remedy, we have used a Bayesian approach with conjugate (Dirichlet) prior distributions over all multinomials, which for the sake of clarity is not described in this paper since it is very technical but nevertheless rather straightforward.

3 Fisher Kernel and Information Geometry

The Fisher Kernel. We follow the work of [9] to derive kernel functions (and hence similarity functions) from generative data models. This approach yields a uniquely defined and intrinsic (i.e. coordinate-invariant) kernel, called the Fisher kernel. One important implication is that yardsticks used for statistical models carry over to the selection of appropriate similarity functions. In spite of the purely unsupervised manner in which a Fisher kernel can be learned, the latter is also very useful in supervised learning, where it provides a way to take advantage of additional unlabeled data. This is important in text learning, where digital document databases and the World Wide Web offer a huge background text repository.

As a starting point, we partition the data log-likelihood into contributions from the various documents. The average log-probability of a document d_i, i.e. the probability of all the word occurrences in d_i normalized by document length, is given by

l(d_i) = Σ_{j=1}^M P̂(w_j|d_i) log Σ_{k=1}^K P(w_j|z_k) P(z_k|d_i),  with P̂(w_j|d_i) ≡ n(d_i, w_j) / Σ_m n(d_i, w_m),   (4)

which is, up to constants, the negative Kullback-Leibler divergence between the empirical distribution P̂(w_j|d_i) and the model distribution represented by (1). In order to derive the Fisher kernel, we have to compute the Fisher scores u(d_i; θ), i.e. the gradient of l(d_i) with respect to θ, as well as the Fisher information I(θ) in some parameterization θ [13].
The Fisher kernel at θ̂ is then given by [9]

K(d_i, d_n) = u(d_i; θ̂)ᵀ I(θ̂)⁻¹ u(d_n; θ̂).   (5)

Computational Considerations. For computational reasons we propose to approximate the (inverse) information matrix by the identity matrix, thereby making additional assumptions about information orthogonality. More specifically, we use a variance-stabilizing parameterization for multinomials, the square-root parameterization, which yields an isometric embedding of multinomial families on the positive part of a hypersphere [11]. In this parameterization, the above approximation will be exact for the multinomial family (disregarding the normalization constraint). We conjecture that it will also provide a reasonable approximation in the case of the subfamily defined by the latent class model.

C. Simplifying assumption: The Fisher information in the square-root parameterization can be approximated by the identity matrix.

Interpretation of Results. Instead of going through the details of the derivation, which is postponed to the end of this section, it is revealing to relate the results back to our main problem of defining a similarity function between text documents. We will have a closer look at the two contributions resulting from different sets of parameters. The contribution which stems from the (square-root transformed) parameters P(z_k) is (in a simplified version) given by

K(d_i, d_n) = Σ_k P(z_k|d_i) P(z_k|d_n) / P(z_k).   (6)

K is a weighted inner product in the low-dimensional factor representation of the documents by mixing weights P(z_k|d_i). This part of the kernel thus computes a "topical" overlap between documents and is thereby able to capture synonyms, i.e., words with an identical or similar meaning, as well as words referring to the same topic. Notice that it is not required that d_i and d_n actually have (many) terms in common in order to get a high similarity score. The contribution due to the parameters P(w_j|z_k) is of a very different type.
Again using the approximation of the Fisher matrix, we arrive at the inner product

K̂(d_i, d_n) = Σ_{j=1}^M P̂(w_j|d_i) P̂(w_j|d_n) Σ_{k=1}^K P(z_k|d_i, w_j) P(z_k|d_n, w_j) / P(w_j|z_k).   (7)

K̂ also has a very appealing interpretation: it essentially computes an inner product between the empirical distributions of d_i and d_n, a scheme that is very popular in the context of information retrieval in the vector space model. However, common words only contribute if they are explained by the same factor(s), i.e., if the respective posterior probabilities overlap. This allows the capture of words with multiple meanings, so-called polysems. For example, in the factors displayed in Figure 1 the term "president" occurs twice (as the president of a company and as the president of the US). Depending on the document the word occurs in, the posterior probability will be high for either one of the factors, but typically not for both. Hence, the same term used in different contexts and different meanings will generally not increase the similarity between documents, a distinction that is absent in the naive inner product, which corresponds to the degenerate case of K = 1. Since the choice of K determines the coarseness of the identified "topics" and different resolution levels possibly contribute useful information, we have combined models by a simple additive combination of the derived inner products. This combination scheme has experimentally proven to be very effective and robust.

D. Modeling assumption: Similarities derived from latent class decompositions at different levels of resolution are additively combined.

In summary, the emergence of important language phenomena like synonymy and polysemy from information-geometric principles is very satisfying and proves in our opinion that interesting similarity functions can be rigorously derived, without specific domain knowledge and based on few, explicitly stated assumptions (A-D).

Technical Derivation. Define ρ_jk ≡ 2 √P(w_j|z_k); then
∂l(d_i)/∂ρ_jk = ∂l(d_i)/∂P(w_j|z_k) · ∂P(w_j|z_k)/∂ρ_jk = √P(w_j|z_k) · [P̂(w_j|d_i)/P(w_j|d_i)] P(z_k|d_i) = P̂(w_j|d_i) P(z_k|d_i, w_j) / √P(w_j|z_k).

Similarly, we define ρ_k ≡ 2 √P(z_k). Applying Bayes' rule to substitute P(z_k|d_i) in l(d_i) (i.e. P(z_k|d_i) = P(z_k) P(d_i|z_k) / P(d_i)) yields

∂l(d_i)/∂ρ_k = √P(z_k) · [P(d_i|z_k)/P(d_i)] Σ_j [P̂(w_j|d_i)/P(w_j|d_i)] P(w_j|z_k) ≈ P(z_k|d_i) / √P(z_k).

The last (optional) approximation step makes sense whenever P(w_j|d_i) ≈ P̂(w_j|d_i). Notice that we have ignored the normalization constraints, which would yield a (reactive) term that is constant for each multinomial. Experimentally, we have observed no deterioration in performance by making these additional simplifications.

           Medline   Cranfield   CACM   CISI
VSM        44.3      29.9        17.9   12.7
VSM++      67.2      37.9        27.5   20.3

Table 1: Average precision results for the vector space baseline method (VSM) and the Fisher kernel approach (VSM++) for 4 standard test collections: Medline, Cranfield, CACM, and CISI.
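Assuming the model quantities (mixing weights, word-level posteriors and class-conditionals) have been precomputed, the two kernel contributions of eqs. (6) and (7) reduce to a few array operations; a sketch with illustrative argument names, not the author's code:

```python
import numpy as np

def fisher_kernels(p_hat, p_z_d, p_z_dw, p_w_z, p_z):
    """Kernel contributions (6) and (7) for a pair of documents.
    All inputs are assumed precomputed from a fitted latent class model:
      p_hat:  (2, M) empirical word distributions of the two documents
      p_z_d:  (2, K) mixing weights P(z_k|d)
      p_z_dw: (2, M, K) word-level posteriors P(z_k|d, w_j)
      p_w_z:  (M, K) class-conditionals P(w_j|z_k);  p_z: (K,) priors."""
    # eq. (6): "topical" overlap in the low-dimensional factor representation
    k_mix = np.sum(p_z_d[0] * p_z_d[1] / p_z)
    # eq. (7): word overlap, counted only where the posteriors agree
    agree = np.sum(p_z_dw[0] * p_z_dw[1] / p_w_z, axis=1)  # sum over k
    k_word = np.sum(p_hat[0] * p_hat[1] * agree)
    return k_mix, k_word
```

With K = 1 the second contribution degenerates to the naive inner product between the empirical distributions, matching the discussion of eq. (7).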
                  earn   acq    money-fx  grain  crude  average  improv.
20x sub   SVM     5.51   7.67   3.25      2.06   2.50   4.20
          SVM++   4.56   5.37   2.08      1.71   1.53   3.05     +27.4%
          kNN     5.91   9.64   3.24      2.54   2.42   4.75
          kNN++   5.05   7.80   3.11      2.35   1.95   4.05     +14.7%
10x sub   SVM     4.88   5.54   2.38      1.71   1.88   3.27
          SVM++   4.11   4.84   2.08      1.42   1.45   2.78     +15.0%
          kNN     5.51   9.23   2.64      2.55   2.42   4.47
          kNN++   4.94   7.47   2.42      2.28   1.88   3.79     +15.2%
5x sub    SVM     4.09   4.40   2.10      1.32   1.46   2.67
          SVM++   3.64   4.15   1.78      0.98   1.19   2.35     +12.1%
          kNN     5.13   8.70   2.27      2.40   2.23   4.14
          kNN++   4.74   6.99   2.22      2.18   1.74   3.57     +13.7%
all data  SVM     2.92   3.21   1.20      0.77   0.92   1.81
(10x cv)  SVM++   2.98   3.15   1.21      0.76   0.86   1.79     +0.6%
          kNN     4.17   6.69   1.78      1.73   1.42   3.16
          kNN++   4.07   5.34   1.73      1.58   1.18   2.78     +12.0%

Table 2: Classification errors for k-nearest neighbors (kNN) and SVMs (SVM) with the naive kernel and with the Fisher kernel (++) (derived from K = 1 and K = 64 models) on the 5 most frequent categories of the Reuters21578 corpus (earn, acq, money-fx, grain, and crude) at different subsampling levels.

4 Experimental Results

We have applied the proposed method to ad hoc information retrieval, where the goal is to return a list of documents, ranked with respect to a given query. This obviously involves computing similarities between documents and queries. In a follow-up series of experiments to the ones reported in [7], where the kernels K(d_i, d_n) = Σ_k P(z_k|d_i) P(z_k|d_n) and K̂(d_i, d_n) = Σ_j P(w_j|d_i) P(w_j|d_n) have been proposed in an ad hoc manner, we have been able to obtain a rigorous theoretical justification as well as some additional improvements. Average precision-recall values for four standard test collections reported in Table 1 show that substantial performance gains can be achieved with the help of a generative model (cf. [7] for details on the conducted experiments). To demonstrate the utility of our method for supervised learning problems, we have applied it to text categorization, using a standard data set in the evaluation, the Reuters21578 collection of news stories.
We have tried to boost the performance of two classifiers that are known to be highly competitive for text categorization: the k-nearest neighbor method and Support Vector Machines (SVMs) with a linear kernel [10]. Since we are particularly interested in a setting where the generative model is trained on a larger corpus of unlabeled data, we have run experiments where the classifier was only trained on a subsample (at subsampling factors 20x, 10x, 5x). The results are summarized in Table 2. Free parameters of the base classifiers have been optimized in extensive simulations with held-out data. The results indicate that substantial performance gains can be achieved over the standard k-nearest neighbor method at all subsampling levels. For SVMs the gain is huge on the subsampled data collections, but insignificant for SVMs trained on all data. This seems to indicate that the generative model does not provide any extra information if the SVM classifier is trained on the same data. However, notice that many interesting applications in text categorization operate in the small-sample limit with lots of unlabeled data. Examples include the definition of personalized news categories by just a few examples, the classification and/or filtering of email, on-line topic spotting and tracking, and many more.

5 Conclusion

We have presented an approach to learn the similarity of text documents from first principles. Based on a latent class model, we have been able to derive a similarity function that is theoretically satisfying, intuitively appealing, and shows substantial performance gains in the conducted experiments. Finally, we have made a contribution to the relationship between unsupervised and supervised learning as initiated in [9] by showing that generative models can help to exploit unlabeled data for classification problems.

References

[1] S. Amari. Differential-Geometrical Methods in Statistics. Springer-Verlag, Berlin, New York, 1985.
[2] S.
Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41:391-407, 1990.
[3] M. J. Evans, Z. Gilula, and I. Guttman. Latent class analysis of two-way contingency tables by Bayesian methods. Biometrika, 76(3):557-563, 1989.
[4] D. Geiger, D. Heckerman, H. King, and C. Meek. Stratified exponential families: Graphical models and model selection. Technical Report MSR-TR-98-31, Microsoft Research, 1998.
[5] Z. Gilula and S. J. Haberman. Canonical analysis of contingency tables by maximum likelihood. Journal of the American Statistical Association, 81(395):780-788, 1986.
[6] A. Gous. Exponential and Spherical Subfamily Models. PhD thesis, Stanford, Statistics Department, 1998.
[7] T. Hofmann. Probabilistic latent semantic indexing. In Proceedings of the 22nd International Conference on Research and Development in Information Retrieval (SIGIR), pages 50-57, 1999.
[8] T. Hofmann, J. Puzicha, and M. I. Jordan. Unsupervised learning from dyadic data. In Advances in Neural Information Processing Systems 11. MIT Press, 1999.
[9] T. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In Advances in Neural Information Processing Systems 11. MIT Press, 1999.
[10] T. Joachims. Text categorization with support vector machines: Learning with many relevant features. In European Conference on Machine Learning (ECML), 1998.
[11] R. E. Kass and P. W. Vos. Geometrical foundations of asymptotic inference. Wiley, New York, 1997.
[12] D. Lee and S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401:788-791, 1999.
[13] M. K. Murray and J. W. Rice. Differential geometry and statistics. Chapman & Hall, London, New York, 1993.
[14] F. C. N. Pereira, N. Z. Tishby, and L. Lee. Distributional clustering of English words. In Proceedings of the ACL, pages 183-190, 1993.
[15] L. Saul and F. Pereira.
Aggregate and mixed-order Markov models for statistical language processing. In Proceedings of the 2nd International Conference on Empirical Methods in Natural Language Processing, 1997.
|
1999
|
95
|
1,748
|
Bayesian Reconstruction of 3D Human Motion from Single-Camera Video

Nicholas R. Howe
Department of Computer Science
Cornell University
Ithaca, NY 14850
nihowe@cs.cornell.edu

Michael E. Leventon
Artificial Intelligence Lab
Massachusetts Institute of Technology
Cambridge, MA 02139
leventon@ai.mit.edu

William T. Freeman
MERL - a Mitsubishi Electric Research Lab
201 Broadway
Cambridge, MA 02139
freeman@merl.com

Abstract

The three-dimensional motion of humans is underdetermined when the observation is limited to a single camera, due to the inherent 3D ambiguity of 2D video. We present a system that reconstructs the 3D motion of human subjects from single-camera video, relying on prior knowledge about human motion, learned from training data, to resolve those ambiguities. After initialization in 2D, the tracking and 3D reconstruction is automatic; we show results for several video sequences. The results show the power of treating 3D body tracking as an inference problem.

1 Introduction

We seek to capture the 3D motions of humans from video sequences. The potential applications are broad, including industrial computer graphics, virtual reality, and improved human-computer interaction. Recent research attention has focused on unencumbered tracking techniques that don't require attaching markers to the subject's body [4, 5]; see [12] for a survey. Typically, these methods require simultaneous views from multiple cameras. Motion capture from a single camera is important for several reasons. First, though underdetermined, it is a problem people can solve easily, as anyone viewing a dancer in a movie can confirm. Single camera shots are the most convenient to obtain, and, of course, apply to the world's film and video archives. It is an appealing computer vision problem that emphasizes inference as much as measurement. This problem has received less attention than motion capture from multiple cameras. Goncalves et al.
rely on perspective effects to track only a single arm, and thus need not deal with complicated models, shadows, or self-occlusion [7]. Bregler & Malik develop a body tracking system that may apply to a single camera, but performance in that domain is not clear; most of the examples use multiple cameras [4]. Wachter & Nagel use an iterated extended Kalman filter, although their body model is limited in degrees of freedom [12]. Brand [3] uses a learning-based approach, although with representational expressiveness restricted by the number of HMM states. An earlier version of the work reported here [10] required manual intervention for the 2D tracking. This paper presents our system for single-camera motion capture, a learning-based approach relying on prior information learned from a labeled training set. The system tracks joints and body parts as they move in the 2D video, then combines the tracking information with the prior model of human motion to form a best estimate of the body's motion in 3D. Our reconstruction method can work with incomplete information, because the prior model allows spurious and distracting information to be discarded. The 3D estimate provides feedback to influence the 2D tracking process to favor more likely poses. The 2D tracking and 3D reconstruction modules are discussed in Sections 2 and 3, respectively. Section 4 describes the system operation and presents performance results. Finally, Section 5 concludes with possible improvements.

2 2D Tracking

The 2D tracker processes a video stream to determine the motion of body parts in the image plane over time. The tracking algorithm used is based on one presented by Ju et al. [9], and performs a task similar to one described by Morris & Rehg [11]. Fourteen body parts are modeled as planar patches, whose positions are controlled by 34 parameters.
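As a toy illustration of fitting patch parameters to image data, the sketch below tracks a single patch by exhaustive search over integer translations, minimizing a sum-of-squared-differences mismatch. This is only illustrative: the actual system jointly optimizes all 34 parameters of the fourteen patches, which this sketch does not attempt.

```python
# Toy single-patch tracker: choose the translation of a patch template
# that minimizes the mismatch with the new frame. Images are plain
# nested lists of gray values (an assumption for illustration).

def ssd(template, frame, tx, ty):
    """Sum of squared differences of the template placed at offset (tx, ty)."""
    return sum((template[y][x] - frame[y + ty][x + tx]) ** 2
               for y in range(len(template))
               for x in range(len(template[0])))

def track_patch(template, frame, max_shift):
    """Return the integer translation minimizing the mismatch."""
    candidates = [(tx, ty) for tx in range(max_shift + 1)
                           for ty in range(max_shift + 1)]
    return min(candidates, key=lambda s: ssd(template, frame, *s))
```

A gradient-based optimizer over all patch parameters, as in [9], replaces this brute-force search in practice.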
Tracking consists of optimizing the parameter values in each frame so as to minimize the mismatch between the image data and a projection of the body part maps. The 2D parameter values for the first frame must be initialized by hand, by overlaying a model onto the 2D image of the first frame. We extend Ju et al.'s tracking algorithm in several ways. We track the entire body, and build a model of each body part that is a weighted average of several preceding frames, not just the most recent one. This helps eliminate tracking errors due to momentary glitches that last for a frame or two. We account for self-occlusions through the use of support maps [4, 1]. It is essential to address this problem, as limbs and other body parts will often partly or wholly obscure one another. For the single-camera case, there are no alternate views to be relied upon when a body part cannot be seen. The 2D tracker returns the coordinates of each limb in each successive frame. These in turn yield the positions of joints and other control points needed to perform 3D reconstruction.

3 3D Reconstruction

3D reconstruction from 2D tracking data is underdetermined. At each frame, the algorithm receives the positions in two dimensions of 20 tracked body points, and must infer the correct depth of each point. We rely on a training set of 3D human motions to determine which reconstructions are plausible. Most candidate projections are unnatural motions, if not anatomically impossible, and can be eliminated on this basis. We adopt a Bayesian framework, and use the training data to compute prior probabilities of different 3D motions. We model plausible motions as a mixture of Gaussian probabilities in a high-dimensional space. Motion capture data gathered in a professional studio provide the training data: frame-by-frame 3D coordinates for 20 tracked body points at 20-30 frames per second. We want to model the probabilities of human motions of some short duration, long enough to be
informative, but short enough to characterize probabilistically from our training data. We assembled the data into short motion elements we called snippets of 11 successive frames, about a third of a second. We represent each snippet from the training data as a large column vector of the 3D positions of each tracked body point in each frame of the snippet. We then use those data to build a mixture-of-Gaussians probability density model [2]. For computational efficiency, we used a clustering approach to approximate the fitting of an EM algorithm. We use k-means clustering to divide the snippets into m groups, each of which will be modeled by a Gaussian probability cloud. For each cluster, the matrix M_j is formed, where the columns of M_j are the n_j individual motion snippets after subtracting the mean μ_j. The singular value decomposition (SVD) gives M_j = U_j S_j V_j^T, where S_j contains the singular values along the diagonal, and U_j contains the basis vectors. (We truncate the SVD to include only the 50 largest singular values.) The cluster can be modeled by a multidimensional Gaussian with covariance Λ_j = (1/n_j) U_j S_j² U_j^T. The prior probability of a snippet x over all the models is a sum of the Gaussian probabilities weighted by the probability of each model:

P(x) = Σ_{j=1}^m k π_j exp(−(1/2)(x − μ_j)^T Λ_j^{−1} (x − μ_j)).   (1)

Here k is a normalization constant, and π_j is the a priori probability of model j, computed as the fraction of snippets in the knowledge base that were originally placed in cluster j. Given this approximately derived mixture-of-factors model [6], we can compute the prior probability of any snippet. To estimate the data term (likelihood) in Bayes' law, we assume that the 2D observations include some Gaussian noise with variance σ². Combined with the prior, the expression for the probability of a given snippet x given an observation y becomes

p(x, θ, s, v | y) = k′ exp(−||y − R_{θ,s,v}(x)||² / (2σ²)) (Σ_{j=1}^m k π_j exp(−(1/2)(x − μ_j)^T Λ_j^{−1} (x − μ_j))).   (2)

In this equation, R_{θ,s,v}(x) is a rendering function which maps a 3D snippet x into the image coordinate system, performing scaling s, rotation about the vertical axis θ, and image-plane translation v. We use the EM algorithm to find the probabilities of each Gaussian in the mixture and the corresponding snippet x that maximizes the probability given the observations [6]. This allows the conversion of eleven frames of 2D tracking measurements into the most probable corresponding 3D snippet. In cases where the 2D tracking is poor, the reconstruction may be improved by matching only the more reliable points in the likelihood term of Equation 2. This adds a second noise process to explain the outlier data points in the likelihood term. To perform the full 3D reconstruction, the system first divides the 2D tracking data into snippets, which provides the y values of Eq. 2, then finds the best (MAP) 3D snippet for each of the 2D observations. The 3D snippets are stitched together, using a weighted interpolation for frames where two snippets overlap. The result is a Bayesian estimate of the subject's motion in three dimensions.

4 Performance

The system as a whole will track and successfully 3D reconstruct simple, short video clips with no human intervention, apart from 2D pose initialization. It is not currently reliable enough to track difficult footage for significant lengths of time. However, analysis of short clips demonstrates that the system can successfully reconstruct 3D motion from ambiguous 2D video. We evaluate the two stages of the algorithm independently at first, and then consider their operation as a system.

4.1 Performance of the 3D reconstruction

The 3D reconstruction stage is the heart of the system. To our knowledge, no similar 2D-to-3D reconstruction technique relying on prior information has been published.
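The clustering construction of the snippet prior can be sketched end-to-end. For brevity the per-cluster covariances below are spherical (an assumption; the paper uses SVD-truncated covariances Λ_j = (1/n_j) U_j S_j² U_j^T), and the k-means initialization is a simple deterministic choice:

```python
import math

# Simplified sketch of the snippet prior of Eq. (1): cluster training
# snippet vectors with k-means, then score a new snippet under a weighted
# sum of per-cluster Gaussians. Spherical covariances (sigma2) replace the
# paper's truncated-SVD covariances -- an illustrative simplification.

def kmeans(points, m, iters=10):
    centers = [list(p) for p in points[:m]]  # simple deterministic init
    clusters = [[] for _ in range(m)]
    for _ in range(iters):
        clusters = [[] for _ in range(m)]
        for p in points:
            j = min(range(m),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[j].append(p)
        centers = [[sum(c) / len(cl) for c in zip(*cl)] if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters

def snippet_prior(x, centers, clusters, sigma2=1.0):
    """Unnormalized P(x): cluster-weighted sum of spherical Gaussians."""
    n = sum(len(cl) for cl in clusters)
    score = 0.0
    for mu, cl in zip(centers, clusters):
        pi_j = len(cl) / n                       # a priori cluster weight
        d2 = sum((a - b) ** 2 for a, b in zip(x, mu))
        score += pi_j * math.exp(-0.5 * d2 / sigma2)
    return score
```

A snippet that resembles the training motions lands near some cluster mean and receives a high prior; an implausible snippet scores near zero.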
([3], developed simultaneously, also uses an inference-based approach). Our tests show that the module can restore deleted depth information that looks realistic and is close to the ground truth, at least when the knowledge base contains some examples of similar motions. This makes the 3D reconstruction stage itself an important result, which can easily be applied in conjunction with other tracking technologies. To test the reconstruction with known ground truth, we held back some of the training data for testing. We artificially provided perfect 2D marker position data, y in Eq. 2, and tested the 3D reconstruction stage in isolation. After removing depth information from the test sequence, the sequence is reconstructed as if it had come from the 2D tracker. Sequences produced in this manner look very much like the original. They show some rigid motion error along the line of sight. An analysis of the uncertainty in the posterior probability predicts high uncertainty for the body motion mode of rigid motion parallel to the orthographic projection [10]. This slipping can be corrected by enforcing ground-contact constraints. Figure 1 shows a reconstructed running sequence corrected for rigid motion error and superimposed on the original. The missing depth information is reconstructed well, although it sometimes lags or anticipates the true motion slightly. Quantitatively, this error is a relatively small effect. After subtracting rigid motion error, the mean residual 3D errors in limb position are the same order of magnitude as the small frame-to-frame changes in those positions.

Figure 1: Original and reconstructed running sequences superimposed (frames 1, 7, 14, and 21).

4.2 Performance of the 2D tracker

The 2D tracker performs well under constant illumination, providing quite accurate results from frame to frame. The main problem it faces is the slow accumulation of error.
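The multi-frame template averaging mentioned in Section 2 is one device that damps such glitch-driven drift: a single corrupted frame only partially contaminates the part model. A minimal sketch, with geometrically decaying weights as an assumed (illustrative) choice:

```python
# Blend recent patch images into one template, newest frames weighted
# most. Patches are flat lists of gray values; the decay constant is an
# assumption for illustration, not the paper's exact weighting.

def blend_templates(history, decay=0.6):
    """Weighted average of recent patches; history[0] is the newest frame."""
    weights = [decay ** age for age in range(len(history))]
    total = sum(weights)
    n = len(history[0])
    return [sum(w * patch[i] for w, patch in zip(weights, history)) / total
            for i in range(n)]
```

With this scheme, a one-frame glitch is pulled back toward the values seen in the preceding frames instead of replacing the template outright.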
On longer sequences, the errors can build up to the point where the module is no longer tracking the body parts it was intended to track. The problem is worsened by low contrast, occlusion and lighting changes. More careful body modeling [5], lighting models, and modeling of the background may address these issues. The sequences we used for testing were several seconds long and had fairly good contrast. Although adequate to demonstrate the operation of our system, the 2D tracker contains the most open research issues.

4.3 Overall system performance

Three example reconstructions are given, showing a range of different tracking situations. The first is a reconstruction of a stationary figure waving one arm, with most of the motion in the image plane. The second shows a figure bringing both arms together towards the camera, resulting in a significant amount of foreshortening. The third is a reconstruction of a figure walking sideways, and includes significant self-occlusion.

Figure 2: First clip and its reconstruction (frames 1, 25, 50, and 75).

The first video is the easiest to track because there is little or no occlusion and change in lighting. The reconstruction is good, capturing the stance and motion of the arm. There is some rigid motion error, which is corrected through ground friction constraints. The knees are slightly bent; this may be because the subject in the video has different body proportions than those represented in the training database.

Figure 3: Second clip and its reconstruction (frames 1, 25, 50, and 75).

The second video shows a figure bringing its arms together towards the camera. The only indication of this is in the foreshortening of the limbs, yet the 3D reconstruction correctly captures this in the right arm.
(Lighting changes and contrast problems cause the 2D tracker to lose the left arm partway through, confusing the reconstruction of that limb, but the right arm is tracked accurately throughout.) The third video shows a figure walking to the right in the image plane. This clip is the hardest for the 2D tracker, due to repeated and prolonged occlusion of some body parts. The tracker loses the left arm after 15 frames due to severe occlusion, yet the remaining tracking information is still sufficient to perform an adequate reconstruction. At about frame 45, the left leg has crossed behind the right several times and is lost, at which point the reconstruction quality begins to degrade. The key to a more reliable reconstruction on this sequence is better tracking.

Figure 4: Third clip and its reconstruction (frames 6, 16, 26, and 36).

5 Conclusion

We have demonstrated a system that tracks human figures in short video sequences and reconstructs their motion in three dimensions. The tracking is unassisted, although 2D pose initialization is required. The system uses prior information learned from training data to resolve the inherent ambiguity in going from two to three dimensions, an essential step when working with a single-camera video source. To achieve this end, the system relies on prior knowledge, extracted from examples of human motion. Such a learning-based approach could be combined with more sophisticated measurement-based approaches to the tracking problem [12, 8, 4].

References

[1] J. R. Bergen, P. Anandan, K. J. Hanna, and R. Hingorani. Hierarchical model-based motion estimation. In European Conference on Computer Vision, pages 237-252, 1992.
[2] C. M. Bishop. Neural networks for pattern recognition. Oxford, 1995.
[3] M. Brand. Shadow puppetry. In Proc. 7th Intl. Conf. on Computer Vision, pages 1237-1244. IEEE, 1999.
[4] C. Bregler and J. Malik.
Tracking people with twists and exponential maps. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Santa Barbara, 1998.
[5] D. M. Gavrila and L. S. Davis. 3d model-based tracking of humans in action: A multi-view approach. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, 1996.
[6] Z. Ghahramani and G. E. Hinton. The EM algorithm for mixtures of factor analyzers. Technical report, Department of Computer Science, University of Toronto, May 21, 1996. (Revised Feb. 27, 1997.)
[7] L. Goncalves, E. Di Bernardo, E. Ursella, and P. Perona. Monocular tracking of the human arm in 3D. In Proceedings of the Third International Conference on Computer Vision, 1995.
[8] M. Isard and A. Blake. Condensation - conditional density propagation for visual tracking. International Journal of Computer Vision, 29(1):5-28, 1998.
[9] S. X. Ju, M. J. Black, and Y. Yacoob. Cardboard people: A parameterized model of articulated image motion. In 2nd International Conference on Automatic Face and Gesture Recognition, 1996.
[10] M. E. Leventon and W. T. Freeman. Bayesian estimation of 3-d human motion from an image sequence. Technical Report TR98-06, Mitsubishi Electric Research Lab, 1998.
[11] D. D. Morris and J. Rehg. Singularity analysis for articulated object tracking. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Santa Barbara, 1998.
[12] S. Wachter and H.-H. Nagel. Tracking of persons in monocular image sequences. In Nonrigid and Articulated Motion Workshop, 1997.
|
1999
|
96
|
1,749
|
Mixture Density Estimation

Jonathan Q. Li
Department of Statistics
Yale University
P.O. Box 208290
New Haven, CT 06520
Qiang.Li@aya.yale.edu

Andrew R. Barron
Department of Statistics
Yale University
P.O. Box 208290
New Haven, CT 06520
Andrew.Barron@yale.edu

Abstract

Gaussian mixtures (or so-called radial basis function networks) for density estimation provide a natural counterpart to sigmoidal neural networks for function fitting and approximation. In both cases, it is possible to give simple expressions for the iterative improvement of performance as components of the network are introduced one at a time. In particular, for mixture density estimation we show that a k-component mixture estimated by maximum likelihood (or by an iterative likelihood improvement that we introduce) achieves log-likelihood within order 1/k of the log-likelihood achievable by any convex combination. Consequences for approximation and estimation using Kullback-Leibler risk are also given. A Minimum Description Length principle selects the optimal number of components k that minimizes the risk bound.

1 Introduction

In density estimation, Gaussian mixtures provide flexible-basis representations for densities that can be used to model heterogeneous data in high dimensions. We introduce an index of regularity c_f of density functions f with respect to mixtures of densities from a given family. Mixture models with k components are shown to achieve Kullback-Leibler approximation error bounded by c_f²/k for every k. Thus in a manner analogous to the treatment of sinusoidal and sigmoidal networks in Barron [1],[2], we find classes of density functions f such that reasonable size networks (not exponentially large as a function of the input dimension) achieve suitable approximation and estimation error. Consider a parametric family G = {φ_θ(x), x ∈ X ⊂ R^{d'} : θ ∈ Θ ⊂ R^d} of probability density functions parameterized by θ ∈ Θ.
Then consider the class C = CONV(G) of density functions for which there is a mixture representation of the form

f_P(x) = ∫_Θ φ_θ(x) P(dθ),   (1)

where φ_θ(x) are density functions from G and P is a probability measure on Θ. The main theme of the paper is to give approximation and estimation bounds of arbitrary densities by finite mixture densities. We focus our attention on densities inside C first and give an approximation error bound by finite mixtures for arbitrary f ∈ C. The approximation error is measured by the Kullback-Leibler divergence between two densities, defined as

D(f||g) = ∫ f(x) log[f(x)/g(x)] dx.   (2)

In density estimation, D is more natural to use than the L2 distance often seen in the function fitting literature. Indeed, D is invariant under scale transformations (and other 1-1 transformations of the variables) and it has an intrinsic connection with Maximum Likelihood, one of the most useful methods in mixture density estimation. The following result quantifies the approximation error.

THEOREM 1 Let G = {φ_θ(x) : θ ∈ Θ} and C = CONV(G). Let f(x) = ∫ φ_θ(x) P(dθ) ∈ C. There exists f_k, a k-component mixture of the φ_θ, such that

D(f||f_k) ≤ c_f² γ / k.   (3)

In the bound, we have

c_f² = ∫ [∫ φ_θ²(x) P(dθ)] / [∫ φ_θ(x) P(dθ)] dx,   (4)

and γ = 4[log(3√e) + a], where

a = sup_{θ1,θ2,x} log[φ_θ1(x) / φ_θ2(x)].   (5)

Here, a characterizes an upper bound on the log ratio of the densities in G, when the parameters are restricted to Θ and the variable to X. Note that the rate of convergence, 1/k, is not related to the dimensions of Θ or X. The behavior of the constants, though, depends on the choices of G and the target f. For example we may take G to be the Gaussian location family, which we restrict to a set X which is a cube of side-length A. Likewise we restrict the parameters to be in the same cube. Then

a ≤ dA²/(2σ²).   (6)

In this case, a is linear in the dimension. The value of c_f² depends on the target density f.
Suppose f is a finite mixture with M components; then c_f² ≤ M, with equality if and only if those M components are disjoint. Indeed, suppose

f(x) = Σ_{i=1}^M p_i φ_θi(x);   (7)

then p_i φ_θi(x) / Σ_{i=1}^M p_i φ_θi(x) ≤ 1 and hence

c_f² = ∫ Σ_{i=1}^M [p_i φ_θi(x) / Σ_{i=1}^M p_i φ_θi(x)] φ_θi(x) dx ≤ ∫ Σ_{i=1}^M φ_θi(x) dx = M.   (8)

Genovese and Wasserman [3] deal with a similar setting. A Kullback-Leibler approximation bound of order 1/√k for one-dimensional mixtures of Gaussians is given by them. In the more general case that f is not necessarily in C, we have a competitive optimality result. Our density approximation is nearly at least as good as any g_P in C.

THEOREM 2 For every g_P(x) = ∫ φ_θ(x) P(dθ),

D(f||f_k) ≤ D(f||g_P) + c_{f,P}² γ / k.   (9)

Here,

c_{f,P}² = ∫ [∫ φ_θ²(x) P(dθ)] / [∫ φ_θ(x) P(dθ)]² f(x) dx.   (10)

In particular, we can take the infimum over all g_P ∈ C, and still obtain a bound. Let D(f||C) = inf_{g∈C} D(f||g). A theory of information projection shows that if there exists a sequence of f_k such that D(f||f_k) → D(f||C), then f_k converges to a function f*, which achieves D(f||C). Note that f* is not necessarily an element of C. This is developed in Li[4] building on the work of Bell and Cover[5]. As a consequence of Theorem 2 we have

D(f||f_k) ≤ D(f||C) + c_{f,*}² γ / k,   (11)

where c_{f,*}² is the smallest limit of c_{f,P}² for sequences of P achieving D(f||g_P) that approaches the infimum D(f||C). We prove Theorem 1 by induction in the following section. An appealing feature of such an approach is that it provides an iterative estimation procedure which allows us to estimate one component at a time. This greedy procedure is shown to perform almost as well as the full-mixture procedures, while the computational task of estimating one component is considerably easier than estimating the full mixtures. Section 2 gives the iterative construction of a suitable approximation, while Section 3 shows how such mixtures may be estimated from data. Risk bounds are stated in Section 4.
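For a finite mixture, the constant c_f² of Theorem 1 can be evaluated numerically. The sketch below uses midpoint-rule integration for one-dimensional unit-variance Gaussian components; the means, weights, and integration range are illustrative choices.

```python
import math

# Numerical sketch of c_f^2 for a finite mixture f = sum_i p_i phi_{theta_i}:
# c_f^2 = integral of [sum_i p_i phi_i(x)^2] / [sum_i p_i phi_i(x)] dx.
# Midpoint-rule quadrature on a grid; parameters are illustrative.

def normal_pdf(x, mu, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def c_f_squared(weights, means, lo=-20.0, hi=20.0, n=4000):
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        phis = [normal_pdf(x, m) for m in means]
        num = sum(p * phi ** 2 for p, phi in zip(weights, phis))
        den = sum(p * phi for p, phi in zip(weights, phis))
        total += (num / den) * h
    return total
```

Consistent with the discussion above, a single component gives c_f² = 1, while two well-separated components push c_f² up to the number of components M = 2.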
2 An iterative construction of the approximation

We provide an iterative construction of the f_k's in the following fashion. Suppose during our discussion of approximation that f is given. We seek a k-component mixture f_k close to f. Initialize f_1 by choosing a single component from G to minimize D(f||f_1) = D(f||φ_θ). Now suppose we have f_{k−1}(x). Then let f_k(x) = (1 − α)f_{k−1}(x) + αφ_θ(x), where α and θ are chosen to minimize D(f||f_k). More generally let f_k be any sequence of k-component mixtures, for k = 1, 2, ..., such that

D(f||f_k) ≤ min_{α,θ} D(f||(1 − α)f_{k−1} + αφ_θ).

We prove that such sequences f_k achieve the error bounds in Theorem 1 and Theorem 2. Those familiar with the iterative Hilbert space approximation results of Jones[6], Barron[1], and Lee, Bartlett and Williamson[7] will see that we follow a similar strategy. The use of L2 distance measures for density approximation involves L2 norms of component densities that are exponentially large with dimension. Naive Taylor expansion of the Kullback-Leibler divergence leads to an L2 norm approximation (weighted by the reciprocal of the density) for which the difficulty remains (Zeevi & Meir[8], Li[9]). The challenge for us was to adapt iterative approximation to the use of Kullback-Leibler divergence in a manner that permits the constant a in the bound to involve the logarithm of the density ratio (rather than the ratio itself) to allow more manageable constants.

The proof establishes the inductive relationship

D_k ≤ (1 − α)D_{k−1} + α² B,   (12)

where B is bounded and D_k = D(f||f_k). By choosing α_1 = 1, α_2 = 1/2 and thereafter α_k = 2/k, it is easy to see by induction that D_k ≤ 4B/k. To get (12), we establish a quadratic upper bound for −log(f_k/f) = −log(((1 − α)f_{k−1} + αφ_θ)/f).
Three key analytic inequalities regarding the logarithm will be handy for us:

−log(r) ≤ −(r − 1) + [(−log(r₀) + r₀ − 1)/(r₀ − 1)²] (r − 1)²   for r ≥ r₀ > 0,   (13)

and

2[−log(r) + r − 1]/(r − 1) ≤ log r,   (14)

[−log(r) + r − 1]/(r − 1)² ≤ 1/2 + log⁻(r),   (15)

where log⁻(·) is the negative part of the logarithm. The proof of inequality (13) is done by verifying that [−log(r) + r − 1]/(r − 1)² is monotone decreasing in r. Inequalities (14) and (15) are shown by separately considering the cases that r < 1 and r > 1 (as well as the limit as r → 1). To get the inequalities one multiplies through by (r − 1) or (r − 1)², respectively, and then takes derivatives to obtain suitable monotonicity in r as one moves away from r = 1. Now apply inequality (13) with r = ((1 − α)f_{k−1} + αφ_θ)/g and r₀ = (1 − α)f_{k−1}/g, where g is an arbitrary density in C with g = ∫ φ_θ P(dθ). Note that r ≥ r₀ in this case because αφ_θ/g ≥ 0. Plug r = r₀ + αφ_θ/g into the right side of (13) and expand the square. Then we get

−log(r) ≤ −log(r₀) − αφ_θ/g + α² (φ_θ²/g²) [(−log(r₀) + r₀ − 1)/(r₀ − 1)²] + α (φ_θ/g) [2(−log(r₀) + r₀ − 1)/(r₀ − 1)].

Now apply (14) and (15) respectively. We get

−log(r) ≤ −log(r₀) − αφ_θ/g + α² (φ_θ²/g²)(1/2 + log⁻(r₀)) + α (φ_θ/g) log(r₀).   (16)

Note that in our application, r₀ is a ratio of densities in C. Thus we obtain an upper bound for log⁻(r₀) involving a. Indeed we find that (1/2 + log⁻(r₀)) ≤ γ/4, where γ is as defined in the theorem. In the case that f is in C, we take g = f. Then taking the expectation with respect to f of both sides of (16), we acquire a quadratic upper bound for D_k, noting that r = f_k/f. Also note that D_k is a function of θ. The greedy algorithm chooses θ to minimize D_k(θ). Therefore

D_k ≤ min_θ D_k(θ) ≤ ∫ D_k(θ) P(dθ).   (17)

Plugging the upper bound (16) for D_k(θ) into (17), we have

D_k ≤ ∫_Θ ∫_X [−log(r₀) − αφ_θ/g + α² (φ_θ²/g²)(γ/4) + α (φ_θ/g) log(r₀)] f(x) dx P(dθ),   (18)
19 Ix 9 9 9 (18) Mixture Density Estimation 283 where TO = (1 - a)fk-1 (x)jg(x) and P is chosen to satisfy Ie ¢>e(x)P(dO) = g(x). Thus 2! ¢>~(x)P(dO) Dk ~ (1- a)Dk- 1 + a (g(x))2 f(x)dx{rj4) + a log(l- a) - a -log(l- a). (19) It can be shown that alog(l- a) - a -log(l - a) ~ O. Thus we have the desired inductive relationship, 'Yc2 Therefore, Dk ~ f. (20) In the case that f does not have a mixture representation of the form I ¢>eP(dO), i.e. f is outside the convex hull C, we take Dk to be I f(x) log j:f:? dx for any given gp(x) = I ¢>e(x)P(dO). The above analysis then yields Dk = DUllfk) -DUllgp) ::; 'Yc2 f as desired. That completes the proof of Theorems 1 and 2. 3 A greedy estimation procedure The connection between the K-L divergence and the MLE helps to motivate the following estimation procedure for /k if we have data Xl, ... , Xn sampled from f. The iterative construction of fk can be turned into a sequential maximum likelihood estimation by changing min DUllfk) to max 2:~1 log fk (Xi) at each step. A surprising result is that the resulting estimator A has a log likelihood almost at least as high as log likelihood achieved by any density gp in C with a difference of order 1jk. We formally state it as n n 2 1 '" ~ 1 '" cF P ~ ~logfk(Xi) ~ ~ ~IOg9p(Xi) - 'k 1=1 1=1 (21) for all gp E C. Here Fn is the empirical distribution, for which c2 Fn,P (ljn) 2:~=1 c~;,p where 2 I ¢>Hx)P(dO) Cx,P = (f ¢>e(x)P(dO))2 . (22) The proof of this result (21) follows as in the proof in the last section, except that now we take Dk = EFn loggp(X)j fk(X) to be the expectation with respect to Fn instead of with respect to the density f. Let's look at the computation at each step to see the benefits this new greedy procedure can bring for us. We have ik(X) = (1- a)ik-1(X) + a¢>e(x) with 0 and a chosen to maximize n L log[(l a)f~-l (Xi) + a¢>e(Xi)] (23) i=l which is a simple two component mixture problem, with one of the two components, f~-l(X), fixed. 
To achieve the bound in (21), α can either be chosen by this iterative maximum likelihood or it can be held fixed at each step to equal α_k (which as before is α_k = 2/k for k > 2). Thus one may replace the MLE-computation of a k-component mixture by successive MLE-computations of two-component mixtures. The resulting estimate is guaranteed to have almost at least as high a likelihood as is achieved by any mixture density. A disadvantage of the greedy procedure is that it may take a number of steps to adequately downweight poor initial choices. Thus it is advisable at each step to retune the weights of convex combinations of previous components (and even perhaps to adjust the locations of these components), in which case the result from the previous iterations (with k − 1 components) provides natural initialization for the search at step k. The good news is that as long as for each k, given f̂_{k−1}, the estimate f̂_k is chosen among k-component mixtures to achieve likelihood at least as large as the choice achieving max_θ Σ_{i=1}^n log[(1 − α_k)f̂_{k−1}(X_i) + α_k φ_θ(X_i)], that is, we require that

Σ_{i=1}^n log f̂_k(X_i) ≥ max_θ Σ_{i=1}^n log[(1 − α_k)f̂_{k−1}(X_i) + α_k φ_θ(X_i)],   (24)

then the conclusion (21) will follow. In particular, our likelihood results and risk bound results apply both to the case that f̂_k is taken to be the global maximizer of the likelihood over k-component mixtures as well as to the case that f̂_k is the result of the greedy procedure.

4 Risk bounds for the MLE and the iterative MLE

The metric entropy of the family G is controlled to obtain the risk bound and to determine the precisions with which the coordinates of the parameter space are allowed to be represented. Specifically, the following Lipschitz condition is assumed: for θ, θ' ∈ Θ ⊂ R^d and x ∈ X ⊂ R^{d'},

sup_{x∈X} |log φ_θ(x) − log φ_θ'(x)| ≤ B Σ_{j=1}^d |θ_j − θ'_j|,   (25)

where θ_j is the j-th coordinate of the parameter vector.
Note that such a condition is satisfied by a Gaussian family with x restricted to a cube with sidelength A and with a location parameter θ that is also prescribed to lie in the same cube. In particular, if we let the variance be σ², we may set B = 2A/σ². Now we can state the bound on the K-L risk of f̂_k.

THEOREM 3 Assume the condition (25). Also assume Θ to be a cube with sidelength A. Let f̂_k(x) be either the maximizer of the likelihood over k-component mixtures or, more generally, any sequence of density estimates f̂_k satisfying (24). We have

E(D(f‖f̂_k)) − D(f‖C) ≤ γ c²_{f,*}/k + γ (2kd/n) log(nABe).   (26)

From the bound on the risk, a best choice of k would be of order roughly √n, leading to a bound on E D(f‖f̂_k) − D(f‖C) of order 1/√n to within logarithmic factors. However the best such bound occurs with k = c_{f,*} √n / √(2d log(nABe)), which is not available when the value of c²_{f,*} is unknown. More importantly, k should not be chosen merely to optimize an upper bound on risk, but rather to balance whatever approximation and estimation sources of error actually occur. Toward this end we optimize a penalized likelihood criterion related to the minimum description length principle, following Barron and Cover [10]. Let l(k) be a function of k that satisfies Σ_{k=1}^∞ e^{−l(k)} ≤ 1, such as l(k) = 2 log(k+1). A penalized MLE (or MDL) procedure picks k by minimizing

(1/n) Σ_{i=1}^n log(1/f̂_k(X_i)) + (2kd/n) log(nABe) + 2 l(k)/n.   (27)

Then we have

E(D(f‖f̂_k)) − D(f‖C) ≤ min_k { γ c²_{f,*}/k + γ (2kd/n) log(nABe) + 2 l(k)/n }.   (28)

A proof of these risk bounds is given in Li [4]. It builds on general results for maximum likelihood and penalized maximum likelihood procedures. Recently, Dasgupta [11] has established a randomized algorithm for estimating mixtures of Gaussians, in the case that data are drawn from a finite mixture of sufficiently separated Gaussian components with common covariance, that runs in time linear in the dimension and quadratic in the sample size.
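The penalized (MDL-style) selection rule (27) is straightforward once the maximized log-likelihoods are available for each k. A minimal sketch, with the per-k likelihood values assumed to be supplied by some k-component fitting routine (our assumption; the fitting itself is not shown):

```python
import numpy as np

def choose_k(neg_loglik_per_k, n, d, A, B):
    """Penalized MLE / MDL rule (27): minimize over k
        (1/n) sum_i log(1/f_k(X_i)) + (2kd/n) log(nABe) + 2 l(k)/n,
    with l(k) = 2 log(k+1).  neg_loglik_per_k[k-1] = -sum_i log f_k(X_i)
    is assumed to come from a k-component fitting routine."""
    ks = np.arange(1, len(neg_loglik_per_k) + 1)
    penalty = (2 * ks * d / n) * np.log(n * A * B * np.e) \
              + 2 * (2 * np.log(ks + 1)) / n
    crit = np.asarray(neg_loglik_per_k, dtype=float) / n + penalty
    return int(ks[np.argmin(crit)]), crit
```

The chosen k balances the fitted likelihood against a complexity charge that grows linearly in k, in the spirit of (28).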
However, present forms of his algorithm require impractically large sample sizes to get reasonably accurate estimates of the density. It is not yet known how his techniques will work for more general mixtures. Here we see that iterative likelihood maximization provides a better relationship between accuracy, sample size and number of components.

References
[1] Barron, Andrew (1993) Universal Approximation Bounds for Superpositions of a Sigmoidal Function. IEEE Transactions on Information Theory 39, No. 3: 930-945.
[2] Barron, Andrew (1994) Approximation and Estimation Bounds for Artificial Neural Networks. Machine Learning 14: 115-133.
[3] Genovese, Chris and Wasserman, Larry (1998) Rates of Convergence for the Gaussian Mixture Sieve. Manuscript.
[4] Li, Jonathan Q. (1999) Estimation of Mixture Models. Ph.D. Dissertation, Department of Statistics, Yale University.
[5] Bell, Robert and Cover, Thomas (1988) Game-theoretic optimal portfolios. Management Science 34: 724-733.
[6] Jones, Lee (1992) A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training. Annals of Statistics 20: 608-613.
[7] Lee, W.S., Bartlett, P.L. and Williamson, R.C. (1996) Efficient Agnostic Learning of Neural Networks with Bounded Fan-in. IEEE Transactions on Information Theory 42, No. 6: 2118-2132.
[8] Zeevi, Assaf and Meir, Ronny (1997) Density Estimation Through Convex Combinations of Densities: Approximation and Estimation Bounds. Neural Networks 10, No. 1: 99-109.
[9] Li, Jonathan Q. (1997) Iterative Estimation of Mixture Models. Ph.D. Prospectus, Department of Statistics, Yale University.
[10] Barron, Andrew and Cover, Thomas (1991) Minimum Complexity Density Estimation. IEEE Transactions on Information Theory 37: 1034-1054.
[11] Dasgupta, Sanjoy (1999) Learning Mixtures of Gaussians. Proc. IEEE Conf. on Foundations of Computer Science, 634-644.
|
1999
|
97
|
1,750
|
Information Capacity and Robustness of Stochastic Neuron Models

Elad Schneidman, Idan Segev, Naftali Tishby
Institute of Computer Science, Department of Neurobiology and Center for Neural Computation, Hebrew University, Jerusalem 91904, Israel
{elads, tishby}@cs.huji.ac.il, idan@lobster.ls.huji.ac.il

Abstract

The reliability and accuracy of spike trains have been shown to depend on the nature of the stimulus that the neuron encodes. Adding ion channel stochasticity to neuronal models results in a macroscopic behavior that replicates the input-dependent reliability and precision of real neurons. We calculate the amount of information that an ion channel based stochastic Hodgkin-Huxley (HH) neuron model can encode about a wide set of stimuli. We show that both the information rate and the information per spike of the stochastic model are similar to the values reported experimentally. Moreover, the amount of information that the neuron encodes is correlated with the amplitude of fluctuations in the input, and less so with the average firing rate of the neuron. We also show that for the HH ion channel density, the information capacity is robust to changes in the density of ion channels in the membrane, whereas changing the ratio between the Na+ and K+ ion channels has a considerable effect on the information that the neuron can encode. Finally, we suggest that neurons may maximize their information capacity by appropriately balancing the density of the different ion channels that underlie neuronal excitability.

1 Introduction

The capacity of neurons to encode information is directly connected to the nature of spike trains as a code, namely, whether the fine temporal structure of the spike train carries information or whether the fine structure of the train is mainly noise (see e.g. [1, 2]).
Experimental studies show that neurons in vitro [3, 4] and in vivo [5, 6, 7] respond to fluctuating inputs with repeatable and accurate spike trains, whereas slowly varying inputs result in lower repeatability and 'jitter' in the spike timing. Hence, it seems that the nature of the code utilized by the neuron depends on the input that it encodes [3, 6]. Recently, we suggested that the biophysical origin of this behavior is the stochasticity of single ion channels. Replacing the average conductance dynamics in the Hodgkin-Huxley (HH) model [8] with a stochastic channel population dynamics [9, 10, 11] yields a stochastic neuron model which replicates rather well the reliability and precision of the spike trains of real neurons [12]. The stochastic model also shows subthreshold oscillations, spontaneous and missing spikes, all observed experimentally. Direct measurement of membrane noise has also been replicated successfully by such stochastic models [13]. Neurons use many tens of thousands of ion channels to encode the synaptic current that reaches the soma into trains of spikes [14]. The number of ion channels that underlies the spike generation mechanism, and their types, depend on the activity of the neuron [15, 16]. It is yet unclear how such changes may affect the amount and nature of the information that neurons encode. Here we ask what the information encoding capacity of the stochastic HH model neuron is, and how this capacity depends on the densities of the different ion channel types in the membrane. We show that both the information rate and the information per spike of the stochastic HH model are similar to the values reported experimentally, and that neurons encode more information about highly fluctuating inputs. The information encoding capacity is rather robust to changes in the channel densities of the HH model.
Interestingly, we show that there is an optimal channel population size, around the natural channel density of the HH model. The encoding capacity is rather sensitive to changes in the distribution of channel types, suggesting that changes in the population ratios and adaptation through channel inactivation may change the information content of neurons.

2 The Stochastic HH Model

The stochastic HH (SHH) model expands the classic HH model [8] by incorporating the stochastic nature of single ion channels [9, 17]. Specifically, the membrane voltage dynamics is given by the HH description, namely,

C_m dV/dt = −g_L(V − V_L) − g_K(V,t)(V − V_K) − g_Na(V,t)(V − V_Na) + I   (1)

where V is the membrane potential; V_L, V_K and V_Na are the reversal potentials of the leakage, potassium and sodium currents, respectively; g_L, g_K(V,t) and g_Na(V,t) are the corresponding ion conductances; C_m is the membrane capacitance; and I is the injected current. The ion channel stochasticity is introduced by replacing the equations describing the ion channel conductances with explicit voltage-dependent Markovian kinetic models for single ion channels [9, 10]. Based on the activation and inactivation variables of the deterministic HH model, each K+ channel can be in one of five different states, and the rates for transition between these states are given in the following diagram,

[n0] ⇌ [n1] ⇌ [n2] ⇌ [n3] ⇌ [n4], with forward rates 4α_n, 3α_n, 2α_n, α_n and backward rates β_n, 2β_n, 3β_n, 4β_n,   (2)

where [n_j] refers to the number of channels which are currently in state n_j. Here [n4] labels the single open state of a potassium channel, and α_n, β_n are the voltage-dependent rate functions in the HH formalism. A similar model is used for the Na+ channel (the Na+ kinetic model has 8 states, with only one open state; see [12] for details). The potassium and sodium membrane conductances are given by

g_K(V,t) = γ_K [n4],   g_Na(V,t) = γ_Na [m3h1]   (3)

where γ_K and γ_Na are the conductances of a single ion channel for the K+ and Na+, respectively.
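A minimal stochastic simulation of the five-state K+ scheme in (2) might look as follows. The fixed-step binomial update (rather than an exact event-driven scheme) and the clipping of transitions to keep counts nonnegative are our own simplifications for illustration; the HH rate functions use the original convention with the resting potential at 0 mV:

```python
import numpy as np

def alpha_n(v):
    """HH potassium-gate opening rate (1/msec), v in mV (resting = 0)."""
    return 0.01 * (10.0 - v) / (np.exp((10.0 - v) / 10.0) - 1.0)

def beta_n(v):
    """HH potassium-gate closing rate (1/msec)."""
    return 0.125 * np.exp(-v / 80.0)

def step_k_channels(counts, v, dt, rng):
    """One fixed time step dt of scheme (2): counts[j] holds the number of
    channels with j open 'n gates'; forward rate (4-j)*alpha_n from state j,
    backward rate j*beta_n.  Binomial draws, clipped so counts stay >= 0."""
    a, b = alpha_n(v), beta_n(v)
    new = counts.copy()
    for j in range(4):                       # openings: j -> j+1
        move = rng.binomial(counts[j], min((4 - j) * a * dt, 1.0))
        new[j] -= move
        new[j + 1] += move
    for j in range(4, 0, -1):                # closings: j -> j-1
        move = min(rng.binomial(counts[j], min(j * b * dt, 1.0)), new[j])
        new[j] -= move
        new[j - 1] += move
    return new
```

The conductance g_K of Eq. (3) would then be γ_K times `counts[4]`, the number of channels in the single open state.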
We take the conductance of a single channel to be 20 pS [14] for both the K+ and Na+ channel types¹. Each of the ion channels will thus respond stochastically by closing or opening its 'gates' according to the kinetic model, fluctuating around the average expected behavior. Figure 1 demonstrates the effect of the ion channel stochasticity, showing the response of a 200 μm² SHH isopotential membrane patch (with the 'standard' SHH channel densities) to repeated presentation of a suprathreshold current input. When the same slowly varying input is repeatedly presented (Fig. 1A), the spike trains are very different from each other, i.e., spike firing time is unreliable. On the other hand, when the input is highly fluctuating (Fig. 1B), the reliability of the spike timing is relatively high.

Figure 1: Reliability of firing patterns in a model of an isopotential Hodgkin-Huxley membrane patch in response to different current inputs. (A) Injecting a slowly changing current input (low-pass Gaussian white noise with mean η = 8 μA/cm² and standard deviation σ = 1 μA/cm², convolved with an 'alpha-function' with a time constant τ_α = 3 msec; top frame) results in high 'jitter' in the timing of the spikes (raster plots of spike responses, bottom frame). (B) The same patch was again stimulated repeatedly, with a highly fluctuating stimulus (η = 8 μA/cm², σ = 7 μA/cm² and τ_α = 3 msec; top frame). The 'jitter' in spike timing is significantly smaller in B than in A (i.e. increased reliability for the fluctuating current input). Patch area used was 200 μm², with 3,600 K+ channels and 12,000 Na+ channels. (Compare to Fig. 1 in [3].) (C) Average firing rate in response to DC current input of both the HH and the stochastic HH model. (D) Coefficient of variation of the inter-spike interval of the SHH model in response to DC inputs, giving values which are comparable to those observed in real neurons.
The stochastic model thus replicates the input-dependent reliability and precision of spike trains observed in pyramidal cortical neurons [3]. As for cortical neurons, the Repeatability and Precision of the spike trains of the stochastic model (defined in [3]) are strongly correlated with the fluctuations in the current input and may reach sub-millisecond precision [12]. The f-I curve of the stochastic model (Fig. 1C) and the coefficient of variation (CV) of the inter-spike interval (ISI) distribution for DC inputs (Fig. 1D) are both similar to the behavior of cortical neurons in vivo [18], in clear contrast to the deterministic model².

¹ The number of channels is thus the ratio between the total conductance of a single type of ion channels and the single-channel conductance, and so the 'standard' SHH densities will be 60 Na+ and 18 K+ channels per μm².
² Although the total number of channels in the model is very large, the microscopic-level ion channel noise has a macroscopic effect on the spike train reliability, since the number

3 The Information Capacity of the SHH Neuron

Expanding the Repeatability and Precision measures [3], we turn to quantify how much information the neuron model encodes about the stimuli it receives. We thus present the model with a set of 'representative' input current traces, and the amount of information that the respective spike trains encode is calculated. Following Mainen and Sejnowski [3], we use a set of input current traces which imitate the synaptic current that reaches the soma from the dendritic tree. We convolve a Gaussian white noise trace (with mean current η and standard deviation σ) with an alpha function (with τ_α = 3 msec). Six different mean current values are used (η = 0, 2, 4, 6, 8, 10 μA/cm²) and five different std values (σ = 1, 3, 5, 7, 9 μA/cm²), yielding a set of 30 input current traces (each 10 seconds long).
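The stimulus ensemble just described can be sketched as follows. The paper specifies only the mean, standard deviation and time constant; the sampling step, trace length defaults, and the unit-area normalization of the alpha kernel (which preserves the mean of the white noise but shrinks its standard deviation) are our own choices:

```python
import numpy as np

def alpha_kernel(tau=3.0, dt=0.1, length=30.0):
    """Alpha function (t/tau) exp(1 - t/tau), sampled at dt (msec),
    normalized to unit area so convolution preserves the input mean."""
    t = np.arange(0.0, length, dt)
    k = (t / tau) * np.exp(1.0 - t / tau)
    return k / k.sum()

def make_stimulus(eta, sigma, duration=10000.0, tau=3.0, dt=0.1, seed=0):
    """Low-pass Gaussian noise current: white noise with mean eta and std
    sigma (muA/cm^2), convolved with an alpha function of time constant tau."""
    rng = np.random.default_rng(seed)
    white = rng.normal(eta, sigma, int(duration / dt))
    return np.convolve(white, alpha_kernel(tau, dt), mode="same")
```

Sweeping `eta` over {0, 2, ..., 10} and `sigma` over {1, 3, 5, 7, 9} reproduces the 30-trace grid used in the text.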
This set of inputs is representative of the wide variety of current traces that neurons might encounter under in vivo conditions, in the sense that the average firing rates for this set of inputs range between 2-70 Hz (not shown). We present these input traces to the model, and calculate the amount of information that the resulting spike trains convey about each input, following [6, 19]. Each input is presented repeatedly and the resulting spike trains are discretized in ΔT bins, using a sliding 'window' of size T along the discretized sequence. Each train of spikes is thus transformed into a sequence of K-letter 'words' (K = T/ΔT), consisting of 0's (no spike) and 1's (spike). We estimate P(W), the probability of the word W to appear in the spike trains, and then compute the entropy rate of the total word distribution,

H_total = − Σ_W P(W) log₂ P(W)   bits/word   (4)

which measures the capacity of information that the neuron's spike trains hold [20, 6, 19]. We then examine the set of words that the neuron model used at a particular time t over all the repeated presentations of the stimulus, and estimate P(W|t), the time-dependent word probability distribution. At each time t we calculate the time-dependent entropy rate, and then take the average of these entropies,

H_noise = ⟨− Σ_W P(W|t) log₂ P(W|t)⟩_t   bits/word   (5)

where ⟨...⟩_t denotes the average over all times t. H_noise is the noise entropy rate, which measures how much of the fine structure of the spike trains of the neuron is just noise. After performing the calculation for each of the inputs, using different word sizes³, we estimate the limit of the total entropy and noise entropy rates as T → ∞, where the entropies converge to their real values (see [19] for details). Figure 2A shows the total entropy rate of the responses to the set of stimuli, ranging from 10 to 170 bits/sec. The total entropy rate is correlated with the firing rates of the neuron (not shown).
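The word-based estimates of Eqs. (4) and (5) can be sketched as follows for a (trials × bins) binary spike array. This is a naive plug-in version: the extrapolations in word size and sample size described in [19] are omitted, and the array layout is our own assumption:

```python
import numpy as np
from collections import Counter

def entropies(spikes, K):
    """Plug-in estimates of H_total (Eq. 4) and H_noise (Eq. 5) in bits/word,
    from a (trials x bins) binary array, using K-letter words and a sliding
    window over time."""
    trials, bins = spikes.shape
    total = Counter()
    h_noise_t = []
    for t in range(bins - K + 1):
        words = [tuple(row) for row in spikes[:, t:t + K]]
        total.update(words)                        # pooled counts for P(W)
        p = np.array(list(Counter(words).values()), float) / trials
        h_noise_t.append(-(p * np.log2(p)).sum())  # entropy of P(W|t)
    p = np.array(list(total.values()), float) / sum(total.values())
    h_total = -(p * np.log2(p)).sum()
    return float(h_total), float(np.mean(h_noise_t))
```

The estimated information rate about the stimulus is then the difference `h_total - h_noise`, as used in Figure 2C.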
The noise entropy rate, however, depends in a different way on the input parameters: Figure 2B shows the noise entropy rate of the responses to the set of stimuli, which may get up to 100 bits/sec. Specifically, for inputs with high mean current values and low fluctuation amplitude, many of the spikes are just noise, even if the mean firing rate is high. The difference between the neuron's entropy rate (the total capacity of information of the neuron's spike train) and the noise entropy rate is exactly the average rate of information that the neuron's spike trains encode about the input, I(stimulus, spike train) = H_total − H_noise [20, 6]; this is shown in Figure 2C. The information rate is more sensitive to the size of

² (cont.) of ion channels which are open near the spike firing threshold is rather small [12]. The fluctuations in this small number of open channels near firing threshold give rise to the input-dependent reliability of the spike timing.
³ The bin size ΔT = 2 msec has been set to be small enough to keep the fine temporal structure of the spike train within the word sizes used, yet large enough to avoid undersampling problems.

Figure 2: Information capacity of the SHH model. (A) The total spike train entropy rate of the SHH model as a function of η, the current input mean, and σ, the standard deviation (see text for details). Error bar values of this surface, as well as for the other frames, range between 1-6% (not shown). (B) Noise entropy rate as a function of the current input parameters.
(C) The information rate about the stimulus in the spike trains, as a function of the input parameters, calculated by subtracting the noise entropy from the total entropy (note the change in grayscale in C and D). (D) Information per spike as a function of the input parameters, which is calculated by normalizing the results shown in C by the average firing rate of the responses to each of the inputs.

fluctuations in the input than to the mean value of the current trace (as expected from the reliability and precision of spike timing observed in vitro [3] and in vivo [6], as well as in simulations [12]). The dependence of the neural code on the input parameters is better reflected when calculating the average amount of information per spike that the model gives for each of the inputs (Fig. 2D) (see for comparison the values for the fly's H1 neuron [6]).

4 The Effect of Changing the Neuron Parameters on the Information Capacity

Increasing the density of ion channels in the membrane compared to the 'standard' SHH densities, while keeping the ratio between the K+ and Na+ channels fixed, only diminishes the amount of information that the neuron encodes about any of the inputs in the set. However, the change is rather small: doubling the channel density decreases the amount of information by 5-25% (Fig. 3A), depending on the specific input. Decreasing the channel densities of both types results in encoding more information about certain stimuli and less about others. Figure 3B shows that having half the channel densities results in changes of within 10% of the information, in both directions. Thus, the information rates conveyed by the stochastic model are robust to changes in the ion channel density. Similar robustness (not shown) has been observed for changes in the membrane area (keeping channel density fixed) and in the temperature (which affects the channel kinetics).
However, changing the density of the Na+ channels alone has a larger impact on the amount of information that the neuron conveys about the stimuli. Increasing the Na+ channel density by a factor of two results in less information about most of the stimuli, and a gain for a few others (Fig. 3C). However, reducing the number of Na+ channels by half results in a drastic loss of information for all of the inputs (Fig. 3D).

Figure 3: The effect of changing the ion channel densities on the information capacity. (A) The ratio of the information rate of the SHH model with twice the 'standard' SHH densities, divided by the information rate of the model with the 'standard' SHH densities. (B) As in A, only for the SHH model with half the 'standard' densities. (C) The ratio of the information rate of the SHH model with twice as many Na+ channels, divided by the information rate at the standard SHH Na+ channel density, with the K+ channel density untouched (note the change in grayscale in C and D). (D) As in C, only for the SHH model with the number of Na+ channels reduced by half.

5 Discussion

We have shown that the amount of information that the stochastic HH model encodes about its current input is highly correlated with the amplitude of fluctuations in the input and less so with the mean value of the input. The stochastic HH model, which incorporates ion channel noise, closely replicates the input-dependent reliability and precision of spike trains observed in cortical neurons. The information rates and information per spike are also similar to those of real neurons.
As in other biological systems (e.g., [21]), we demonstrate robustness of macroscopic performance to changes in the cellular properties: the information coding rates of the SHH model are robust to changes in the ion channel densities, as well as in the area of the excitable membrane patch and in the temperature (kinetics) of the channel dynamics. However, the information coding rates are rather sensitive to changes in the ratio between the densities of the different ion channel types, suggesting that the ratio between the density of the K+ channels and the Na+ channels in the 'standard' SHH model may be optimal in terms of the information capacity. This may have important implications for the nature of the neural code under adaptation and learning. We suggest that these notions of optimality and robustness may be a key biophysical principle of the operation of real neurons. Further investigations should take into account the activity-dependent nature of the channels and the neuron [15, 16], and suggest local learning rules as in [22].

Acknowledgements This research was supported by a grant from the Ministry of Science, Israel.

References
[1] Rieke F., Warland D., de Ruyter van Steveninck R., and Bialek W. Spikes: Exploring the Neural Code. MIT Press, 1997.
[2] Shadlen M. and Newsome W. Noise, neural codes and cortical organization. Curr. Opin. Neurobiol., 4:569-579, 1994.
[3] Mainen Z. and Sejnowski T. Reliability of spike timing in neocortical neurons. Science, 268:1503-1508, 1995.
[4] Nowak L., Sanchez-Vives M., and McCormick D. Influence of low and high frequency inputs on spike timing in visual cortical neurons. Cerebral Cortex, 7:487-501, 1997.
[5] Bair W. and Koch C. Temporal precision of spike trains in extrastriate cortex of the behaving macaque monkey. Neural Comp., 8:1185-1202, 1996.
[6] de Ruyter van Steveninck R., Lewen G., Strong S., Koberle R., and Bialek W. Reproducibility and variability in neural spike trains. Science, 275:1805-1808, 1997.
[7] Reich D., Victor J., Knight B., Ozaki T., and Kaplan E. Response variability and timing precision of neuronal spike trains in vivo. J. Neurophysiol., 77:2836-2841, 1997.
[8] Hodgkin A. and Huxley A. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol., 117:500-544, 1952.
[9] FitzHugh R. A kinetic model of the conductance changes in nerve membrane. J. Cell. Comp. Physiol., 66:111-118, 1965.
[10] DeFelice L. Introduction to Membrane Noise. Perseus Books, 1981.
[11] Skaugen E. and Walløe L. Firing behavior in a stochastic nerve membrane model based upon the Hodgkin-Huxley equations. Acta Physiol. Scand., 107:343-363, 1979.
[12] Schneidman E., Freedman B., and Segev I. Ion channel stochasticity may be critical in determining the reliability and precision of spike timing. Neural Comp., 10:1679-1704, 1998.
[13] White J., Klink R., Alonso A., and Kay A. Noise from voltage-gated channels may influence neuronal dynamics in the entorhinal cortex. J. Neurophysiol., 80:262-269, 1998.
[14] Hille B. Ionic Channels of Excitable Membranes. Sinauer Associates, 2nd ed., 1992.
[15] Marder E., Abbott L., Turrigiano G., Liu Z., and Golowasch J. Memory from the dynamics of intrinsic membrane currents. Proc. Natl. Acad. Sci., 93:13481-13486, 1996.
[16] Toib A., Lyakhov V., and Marom S. Interaction between duration of activity and rate of recovery from slow inactivation in mammalian brain Na+ channels. J. Neurosci., 18:1893-1903, 1998.
[17] Strassberg A. and DeFelice L. Limits of the HH formalism: Effects of single channel kinetics on transmembrane voltage dynamics. Neural Comp., 5:843-856, 1993.
[18] Softky W. and Koch C. The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J. Neurosci., 13:334-350, 1993.
[19] Strong S., Koberle R., de Ruyter van Steveninck R., and Bialek W. Entropy and information in neural spike trains. Phys. Rev. Lett., 80:197-200, 1998.
[20] Cover T.M. and Thomas J.A. Elements of Information Theory. Wiley, 1991.
[21] Barkai N. and Leibler S. Robustness in simple biochemical networks. Nature, 387:913-917, 1997.
[22] Stemmler M. and Koch C. How voltage-dependent conductances can adapt to maximize the information encoded by neuronal firing rate. Nat. Neurosci., 2:521-527, 1999.
|
1999
|
98
|
1,751
|
Bayesian Transduction

Thore Graepel, Ralf Herbrich and Klaus Obermayer
Department of Computer Science, Technical University of Berlin, Franklinstr. 28/29, 10587 Berlin, Germany
{graepeI2, raith, oby}@cs.tu-berlin.de

Abstract

Transduction is an inference principle that takes a training sample and aims at estimating the values of a function at given points contained in the so-called working sample, as opposed to the whole of input space as in induction. Transduction provides a confidence measure on single predictions rather than on classifiers, a feature particularly important for risk-sensitive applications. The possibly infinite number of functions is reduced to a finite number of equivalence classes on the working sample. A rigorous Bayesian analysis reveals that for the standard classification loss we cannot benefit from considering more than one test point at a time. The probability of the label of a given test point is determined as the posterior measure of the corresponding subset of hypothesis space. We consider the PAC setting of binary classification by linear discriminant functions (perceptrons) in kernel space, such that the probability of labels is determined by the volume ratio in version space. We suggest to sample this region by an ergodic billiard. Experimental results on real-world data indicate that Bayesian Transduction compares favourably to the well-known Support Vector Machine, in particular if the posterior probability of labellings is used as a confidence measure to exclude test points of low confidence.

1 Introduction

According to Vapnik [9], when solving a given problem one should avoid solving a more general problem as an intermediate step. The reasoning behind this principle is that in order to solve the more general task, resources may be wasted or compromises may have to be made which would not have been necessary for the solution of the problem at hand.
A direct application of this common-sense principle reduces the more general problem of inferring a functional dependency on the whole of input space to the problem of estimating the values of a function at given points (the working sample), a paradigm referred to as transductive inference. More formally, given a probability measure P_XY on the space of data X × Y = X × {−1, +1}, a training sample S = {(x_1, y_1), ..., (x_ℓ, y_ℓ)} is generated i.i.d. according to P_XY. Additionally, m data points W = {x_{ℓ+1}, ..., x_{ℓ+m}} are drawn: the working sample. The goal is to label the objects of the working sample W using a fixed set H of functions f : X → {−1, +1} so as to minimise a predefined loss. In contrast, inductive inference aims at choosing a single function f* ∈ H best suited to capture the dependency expressed by the unknown P_XY. Obviously, if we have a transductive algorithm A(W, S, H) that assigns to each working sample W a set of labels given the training sample S and the set H of functions, we can define a function f_S : X → {−1, +1} by f_S(x) = A({x}, S, H) as a result of the transduction algorithm. There are two crucial differences to induction, however: (i) A({x}, S, H) is not restricted to selecting a single decision function f ∈ H for each x; (ii) a transduction algorithm can give performance guarantees on particular labellings instead of functions. In practical applications this difference may be of great importance. After all, in risk-sensitive applications (medical diagnosis, financial and critical control applications) it often matters to know how confident we are about a given prediction. In this case a general confidence measure of the classifier w.r.t. the whole input distribution would not provide the desired warranty at all. Note that for linear classifiers some guarantee can be obtained by the margin [7], which in Section 4 we will demonstrate to be too coarse a confidence measure.
The idea of transduction was put forward in [8], where first algorithmic ideas can also be found. Later, [1] suggested an algorithm for transduction based on linear programming and [3] highlighted the need for confidence measures in transduction. The paper is structured as follows: a Bayesian approach to transduction is formulated in Section 2. In Section 3 the function class of kernel perceptrons is introduced, to which the Bayesian transduction scheme is applied. For the estimation of volumes in parameter space we present a kernel billiard as an efficient sampling technique. Finally, we demonstrate experimentally in Section 4 how the confidence measure for labellings helps Bayesian Transduction to achieve low generalisation error at a low rejection rate of test points and thus to outperform Support Vector Machines (SVMs).

2 Bayesian Transductive Classification

Suppose we are given a training sample S = {(x_1, y_1), ..., (x_ℓ, y_ℓ)} drawn i.i.d. from P_XY and a working sample W = {x_{ℓ+1}, ..., x_{ℓ+m}} drawn i.i.d. from P_X. Given a prior P_H over the set H of functions and a likelihood P_{(XY)^ℓ|H=f}, we obtain a posterior probability P_{H|(XY)^ℓ=S} =: P_{H|S} by Bayes' rule. This posterior measure induces a probability measure on labellings b ∈ {−1, +1}^m of the working sample by¹

P_{Y^m|S,W}(b) = P_{H|S}({f : f(x_{ℓ+1}) = b_1 ∧ ... ∧ f(x_{ℓ+m}) = b_m}).   (1)

For the sake of simplicity, let us assume a PAC-style setting, i.e. there exists a function f* in the space H such that P_{Y|X=x}(y) = δ(y − f*(x)). In this case one can define the so-called version space as the set of functions that is consistent with the training sample,

V(S) = {f ∈ H : f(x_i) = y_i for all (x_i, y_i) ∈ S},   (2)

outside of which the posterior P_{H|S} vanishes. Then P_{Y^m|S,W}(b) represents the prior measure of functions consistent with the training sample S and the labelling b on the working sample W, normalised by the prior measure of functions consistent with S alone. The measure P_H can be used to incorporate prior knowledge into

¹ Note that the number of different labellings b implementable by H is bounded above by the value of the growth function Π_H(|W|) [8, p. 321].
The measure PH can be used to incorporate prior knowledge into 1 Note that the number of different labellings b implement able by 1l is bounded above by the value of the growth function IIu (JWI) [8, p. 321]. 458 T. Graepe/, R. Herbrich and K. Obennayer the inference process. If no such knowledge is available, considerations of symmetry may lead to "uninformative" priors. Given the measure PYFnIS,W over labellings, in order to arrive at a risk minimal decision w.r.t. the labelling we need to define a loss function I : ym X ym I---t IR+ between labellings and minimise its expectation, R (b, S, W) = EYFnIS,W [I (b, ym)] = 2: I (b, b/) PYFnIS,W (b/) , (3) {b'} where the summation runs over all the 2m possible labellings b' of the working sample. Let us consider two scenarios: 1. A 0-1-loss on the exact labelling b, i.e. for two labellings band b' m Ie (b, b/) = 1-II 6 (bi - bD ¢} Re (b, S, W) = 1 PYFnIS,W (b) . (4) i=l In this case choosing the labelling be = argminb Re (b, S, W) of the highest joint probability Pymls,w (b) minimises the risk. This non-labelwise loss is appropriate if the goal is to exactly identify a combination of labels, e.g. the combination of handwritten digits defining a postal zip code. Note that classical SVM transduction (see, e.g. [8, 1]) by maximising the margin on the combined training and working sample approximates this strategy and hence does not minimise the standard classification risk on single instances as intended. 2. A 0-1-10ss on the single labels bi, i.e. for two labellings band b' 1 m 1$ (b, b/) = - 2: (1- 6 (bi - bD) , (5) m i=l R$ (b, S, W) ! f 2: (1- 6 (bi b~)) Pymls,w (b/) i=l {b'} 1 m - 2: (1- PHIs ({f: f(Xl+i) = bd)) . m i=l Due to the independent treatment of the loss at working sample points the risk R$ (b, S, W) is minimised by the labelling of highest marginal probability of the labels, i.e. bi = argmaXyEY PHIs ({f: f(Xl+i) = y}). 
Thus in the case of the labelwise loss (5) a working sample of m = 1 point does not offer any disadvantages over larger working samples w.r.t. the Bayes-optimal decision. Since this corresponds to the standard classification setting, we will restrict ourselves to working samples of size m = 1, i.e. to one working point x_{l+1}.

3 Bayesian Transduction by Volume

3.1 The Kernel Perceptron

We consider transductive inference for the class of kernel perceptrons. The decision functions are given by

  f(x) = sign(⟨w, φ(x)⟩_F) = sign(Σ_{i=1}^l α_i k(x_i, x)),   w = Σ_{i=1}^l α_i φ(x_i) ∈ F,

where the mapping φ : X → F maps from input space X to a feature space F completely determined by the inner product function (kernel) k : X × X → R (see [9, 10]).

Figure 1: Schematic view of data space (left) and parameter space (right) for a classification toy example. Using the duality given by ⟨w, φ(x)⟩_F = 0, data points on the left correspond to hyperplanes on the right, while hyperplanes on the left can be thought of as points on the right.

Given a training sample S = {(x_i, y_i)}_{i=1}^l we can define the version space (the set of all perceptrons compatible with the training data) as in (2), with the additional constraint ||w||_F = 1 ensuring uniqueness. In order to obtain a prediction on the label b_1 of the working point x_{l+1} we note that x_{l+1} bisects the volume V of version space into two sub-volumes V⁺ and V⁻, where the perceptrons in V⁺ would classify x_{l+1} as b_1 = +1 and those in V⁻ as b_1 = -1. The ratio p⁺ = V⁺/V is the probability of the labelling b_1 = +1 given a uniform prior P_H over w and the class of kernel perceptrons; accordingly for b_1 = -1 (see Figure 1). Already Vapnik [8, p. 323] noticed that it is troublesome to estimate sub-volumes of version space. As a solution to this problem we suggest using a billiard algorithm.
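Before turning to the billiard, note that in very low dimensions the sub-volume ratio p⁺ = V⁺/V can be sanity-checked by naive rejection sampling: draw unit vectors w uniformly, keep those consistent with the training sample, and count how many label the working point +1. This is a hypothetical illustration with made-up 2D linear-kernel data, not the paper's method (which is needed precisely because such sampling does not scale to kernel feature spaces).

```python
import math
import random

def transduction_probability(train, x_work, n_samples=200_000, seed=0):
    """Estimate p+ = V+/V for linear perceptrons w on the unit circle
    by rejection sampling: uniform w, kept iff consistent with `train`."""
    rng = random.Random(seed)
    kept = pos = 0
    for _ in range(n_samples):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        w = (math.cos(theta), math.sin(theta))   # uniform on the unit circle
        if all(y * (w[0] * x[0] + w[1] * x[1]) > 0 for x, y in train):
            kept += 1
            if w[0] * x_work[0] + w[1] * x_work[1] > 0:
                pos += 1
    return pos / kept

# Version space = first quadrant of the circle; working point below it.
train = [((1.0, 0.0), +1), ((0.0, 1.0), +1)]
p_plus = transduction_probability(train, (1.0, -0.5))
# Analytic value for this toy problem: arctan(2) / (pi/2), about 0.705.
```

For this configuration the consistent arc is the quarter circle θ ∈ (0, π/2), and w classifies (1, -0.5) as +1 exactly when tan θ < 2, so the estimate can be checked against the closed-form arc-length ratio.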
3.2 Kernel Billiard for Volume Estimation

The method of playing billiard in version space was first introduced by Rujan [6] for the purpose of estimating its centre of mass, and was consequently refined and extended to kernel spaces by [4]. For Bayesian Transduction the idea is to bounce the billiard ball in version space and to record how much time it spends in each of the sub-volumes of interest. Under the assumption of ergodicity [2] w.r.t. the uniform measure, in the limit the accumulated flight times for each sub-volume are proportional to the sub-volume itself. Since the trajectory is located in F, each position w and direction v of the ball can be expressed as linear combinations of the φ(x_i), i.e.

  w = Σ_{i=1}^l α_i φ(x_i),   v = Σ_{i=1}^l β_i φ(x_i),   ⟨w, v⟩_F = Σ_{i,j=1}^l α_i β_j k(x_i, x_j),

where α, β are real vectors with l components that fully determine the state of the billiard. The algorithm for the determination of the label b_1 of x_{l+1} proceeds as follows:

1. Initialise the starting position w_0 in V(S) using any kernel perceptron algorithm that achieves zero training error (e.g. SVM [9]). Set V⁺ = V⁻ = 0.

2. Find the closest boundary of V(S) starting from the current w in direction v, where the flight times τ_j for all points including x_{l+1} are determined by

  τ_j = - ⟨w, φ(x_j)⟩_F / ⟨v, φ(x_j)⟩_F.

The smallest positive flight time τ_c = min_{j : τ_j > 0} τ_j in kernel space corresponds to the closest data point boundary φ(x_c) on the hypersphere. Note that if τ_c → ∞ we randomly generate a direction v pointing towards version space, i.e. y⟨v, φ(x)⟩_F > 0, assuming the last bounce was at φ(x).

3. Calculate the ball's new position w' according to

  w' = (w + τ_c v) / ||w + τ_c v||_F.

Calculate the distance ξ = ||w - w'||_sphere = arccos(1 - ||w - w'||²_F / 2) on the hypersphere and add it to the volume estimate V^y corresponding to the current label y = sign(⟨w + w', φ(x_{l+1})⟩_F). If the test point φ(x_{l+1}) was hit, i.e.
c = l + 1, keep the old direction vector v. Otherwise update to the reflected direction v',

  v' = v - 2 ⟨v, φ(x_c)⟩_F φ(x_c).

Go back to step 2 unless the stopping criterion (8) is met. Note that in practice one trajectory can be calculated in advance and can be used for all test points. The estimators of the probability of the labellings are then given by p̂⁺ = V⁺/(V⁺ + V⁻) and p̂⁻ = V⁻/(V⁺ + V⁻). Thus, the algorithm outputs

  b_1 = argmax_{y∈Y} p̂^y,   (6)

with confidence

  c_trans = (2 · max(p̂⁺, p̂⁻) - 1) ∈ [0, 1].   (7)

Note that the Bayes Point Machine (BPM) [4] aims at an optimal approximation of the transductive classification (6) by a single function f ∈ H, and that the well-known SVM can be viewed as an approximation of the BPM by the centre of the largest ball in version space. Thus, treating the real-valued output |f(x_{l+1})| =: c_ind of SVM classifiers as a confidence measure can be considered an approximation of (7). The consequences will be demonstrated experimentally in the following section.

Disregarding the issue of mixing time [2] and the dependence of trajectories, we assume for the stopping criterion that the fraction p_i⁺ of time t_i⁺ spent in volume V⁺ on trajectory i of length (t_i⁺ + t_i⁻) is a random variable having expectation p⁺. Hoeffding's inequality [5] bounds the probability of a deviation from the expectation p⁺ by more than ε,

  P( (1/n) Σ_{i=1}^n p_i⁺ - p⁺ ≥ ε ) ≤ exp(-2nε²).   (8)

Thus if we want the deviation from the true label probability to be less than ε = 0.05 with probability at least η = 0.99, we need approximately n ≈ ln(1/(1-η))/(2ε²) ≈ 1000 bounces. The computational effort of the above algorithm for a working set of size m is of order O(nl(m + l)).

[Figure 2 plots appear here; axis residue removed.] Figure 2: Generalisation error vs.
rejection rate for Bayesian Transduction and SVMs for the thyroid data set (σ = 3) (a) and the heart data set (σ = 10) (b). The error bars in both directions indicate one standard deviation of the estimated means. The upper curve depicts the result for the SVM algorithm; the lower curve is the result obtained by Bayesian Transduction.

4 Experimental Results

We focused on the confidence c_trans that Bayesian Transduction provides together with the prediction b_1 of the label. If the confidence c_trans reflects the reliability of a label estimate at a given test point, then rejecting those test points whose predictions carry low confidence should lead to a reduction in generalisation error on the remaining test points. In the experiments we varied a rejection threshold θ between [0, 1], thus obtaining for each θ a rejection rate together with an estimate of the generalisation error at non-rejected points. Both these curves were linked by their common θ-axis, resulting in a generalisation error versus rejection rate plot. We used the UCI² data sets thyroid and heart because they are medical applications for which the confidence of single predictions is particularly important. Also, a high rejection rate due to too conservative a confidence measure may incur considerable costs. We trained a Support Vector Machine using RBF kernels k(x, x') = exp(-||x - x'||²/2σ²) with σ chosen such as to ensure the existence of a version space. We used 100 different training samples obtained by random 60%:40% splits of the whole data set. The margin c_ind of each test point was calculated as a confidence measure of SVM classifications. For comparison we determined the labels b_1 and resulting confidences c_trans using the Bayesian Transduction algorithm (see Section 3) with the same value of the kernel parameter.
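The threshold sweep just described is straightforward to reproduce. The sketch below (with made-up predictions and confidences, not the paper's data) computes the generalisation-error-versus-rejection-rate curve for any confidence measure:

```python
def error_vs_rejection(y_true, y_pred, conf, thresholds):
    """For each threshold, reject points with conf below it and return
    (rejection_rate, error_on_accepted) pairs linked by the threshold axis."""
    curve = []
    n = len(y_true)
    for th in thresholds:
        accepted = [i for i in range(n) if conf[i] >= th]
        rejection_rate = 1.0 - len(accepted) / n
        if accepted:
            errors = sum(1 for i in accepted if y_pred[i] != y_true[i])
            error_rate = errors / len(accepted)
        else:
            error_rate = 0.0   # everything rejected
        curve.append((rejection_rate, error_rate))
    return curve

# Toy example: the mistakes are concentrated at low confidence.
y_true = [+1, +1, -1, -1, +1, -1]
y_pred = [+1, -1, -1, +1, +1, -1]    # errors at indices 1 and 3
conf   = [0.9, 0.1, 0.8, 0.2, 0.7, 0.95]
curve = error_vs_rejection(y_true, y_pred, conf, [0.0, 0.5])
```

At threshold 0.0 nothing is rejected and the error equals the raw error rate; at 0.5 the two low-confidence mistakes are rejected and the error on the accepted points drops to zero, which is exactly the behaviour a useful confidence measure should produce.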
Since the rejection rate for Bayesian Transduction was in both cases higher than for SVMs at the same level θ, we determined θ_max, which achieves the same rejection rate for the SVM confidence measure as Bayesian Transduction achieves at θ = 1 (thyroid: θ_max = 2.15, heart: θ_max = 1.54). The results for the two data sets are depicted in Figure 2. In the thyroid example, Figure 2 (a), one can see that c_trans is indeed an appropriate indicator of confidence: at a rejection rate of approximately 20% the generalisation error approaches zero at minimal variance. For any desired generalisation error, Bayesian Transduction needs to reject significantly fewer examples of the test set as compared to SVM classifiers, e.g. 4% fewer at 2.3% generalisation error. The results on the heart data set show even more pronounced characteristics w.r.t. the rejection rate. Note that the confidence measures considered cannot capture the effects of noise in the data, which leads to a generalisation error of 16.4% even at maximal rejection θ = 1, corresponding to the Bayes error under the given function class.

² UCI: University of California at Irvine, Machine Learning Repository

5 Conclusions and Future Work

In this paper we presented a Bayesian analysis of transduction. The required volume estimates for kernel perceptrons in version space are performed by an ergodic billiard in kernel space. Most importantly, transduction not only determines the label of a given point but also returns a confidence measure of the classification in the form of the probability of the label under the model. Using this confidence measure to reject test examples then leads to improved generalisation error over SVMs. The billiard algorithm can be extended to the case of non-zero training error by allowing the ball to penetrate walls, a property that is captured by adding a constant λ to the diagonal of the kernel matrix [4].
Further research will aim at the discovery of PAC-Bayesian bounds on the generalisation error of transduction.

Acknowledgements

We are greatly indebted to U. Kockelkorn for many interesting suggestions and discussions. This project was partially funded by Technical University of Berlin via FIP 13/41.

References

[1] K. Bennett. Advances in Kernel Methods: Support Vector Learning, chapter 19, Combining Support Vector and Mathematical Programming Methods for Classification, pages 307-326. MIT Press, 1998.
[2] I. Cornfeld, S. Fomin, and Y. Sinai. Ergodic Theory. Springer Verlag, 1982.
[3] A. Gammerman, V. Vovk, and V. Vapnik. Learning by transduction. In Proceedings of Uncertainty in AI, pages 148-155, Madison, Wisconsin, 1998.
[4] R. Herbrich, T. Graepel, and C. Campbell. Bayesian learning in reproducing kernel Hilbert spaces. Technical report, Technical University Berlin, 1999. TR 99-11.
[5] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58:13-30, 1963.
[6] P. Rujan. Playing billiard in version space. Neural Computation, 9:99-122, 1997.
[7] J. Shawe-Taylor. Confidence estimates of classification accuracy on new examples. Technical report, Royal Holloway, University of London, 1996. NC2-TR-1996-054.
[8] V. Vapnik. Estimation of Dependences Based on Empirical Data. Springer, 1982.
[9] V. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
[10] G. Wahba. Spline Models for Observational Data. Society for Industrial and Applied Mathematics, Philadelphia, 1990.
|
1999
|
99
|
1,752
|
Reinforcement Learning with Function Approximation Converges to a Region

Geoffrey J. Gordon
ggordon@cs.cmu.edu

Abstract

Many algorithms for approximate reinforcement learning are not known to converge. In fact, there are counterexamples showing that the adjustable weights in some algorithms may oscillate within a region rather than converging to a point. This paper shows that, for two popular algorithms, such oscillation is the worst that can happen: the weights cannot diverge, but instead must converge to a bounded region. The algorithms are SARSA(0) and V(0); the latter algorithm was used in the well-known TD-Gammon program.

1 Introduction

Although there are convergent online algorithms (such as TD(λ) [1]) for learning the parameters of a linear approximation to the value function of a Markov process, no way is known to extend these convergence proofs to the task of online approximation of either the state-value (V*) or the action-value (Q*) function of a general Markov decision process. In fact, there are known counterexamples to many proposed algorithms. For example, fitted value iteration can diverge even for Markov processes [2]; Q-learning with linear function approximators can diverge, even when the states are updated according to a fixed update policy [3]; and SARSA(0) can oscillate between multiple policies with different value functions [4]. Given the similarities between SARSA(0) and Q-learning, and between V(0) and value iteration, one might suppose that their convergence properties would be identical. That is not the case: while Q-learning can diverge for some exploration strategies, this paper proves that the iterates for trajectory-based SARSA(0) converge with probability 1 to a fixed region. Similarly, while value iteration can diverge for some exploration strategies, this paper proves that the iterates for trajectory-based V(0) converge with probability 1 to a fixed region.
The question of the convergence behavior of SARSA(λ) is one of the four open theoretical questions of reinforcement learning that Sutton [5] identifies as "particularly important, pressing, or opportune." This paper covers SARSA(0),¹ and together with an earlier paper [4] describes its convergence behavior: it is stable in the sense that there exist bounded regions which with probability 1 it eventually enters and never leaves, but for some Markov decision processes it may not converge to a single point. The proofs extend easily to SARSA(λ) for λ > 0. Unfortunately the bound given here is not of much use as a practical guarantee: it is loose enough that it provides little reason to believe that SARSA(0) and V(0) produce useful approximations to the state- and action-value functions. However, it is important for several reasons. First, it is the best result available for these two algorithms. Second, such a bound is often the first step towards proving stronger results. Finally, in practice it often happens that after some initial exploration period, only a few different policies are ever greedy; if this is the case, the strategy of this paper could be used to prove much tighter bounds. Results similar to the ones presented here were developed independently in [6].

¹ In a "trajectory-based" algorithm, the exploration policy may not change within a single episode of learning. The policy may change between episodes, and the value function may change within a single episode. (Episodes end when the agent enters a terminal state. This paper considers only episodic tasks, but since any discounted task can be transformed into an equivalent episodic task, the algorithms apply to non-episodic tasks as well.)

2 The algorithms

The SARSA(0) algorithm was first suggested in [7]. The V(0) algorithm was popularized by its use in the TD-Gammon backgammon playing program [8].
Fix a Markov decision process M, with a finite set S of states, a finite set A of actions, a terminal state T, an initial distribution S_0 over S, a one-step reward function r : S × A → R, and a transition function δ : S × A → S ∪ {T}. (M may also have a discount factor γ specifying how to trade future rewards against present ones. Here we fix γ = 1, but our results carry through to γ < 1.) Both the transition and reward functions may be stochastic, so long as successive samples are independent (the Markov property) and the reward has bounded expectation and variance. We assume that all states in S are reachable with positive probability. We define a policy π to be a function mapping states to probability distributions over actions. Given a policy we can sample a trajectory (a sequence of states, actions, and one-step rewards) by the following rule: begin by selecting a state s_0 according to S_0. Now choose an action a_0 according to π(s_0). Now choose a one-step reward r_0 according to r(s_0, a_0). Finally choose a new state s_1 according to δ(s_0, a_0). If s_1 = T, stop; otherwise repeat. We assume that all policies are proper, that is, that the agent reaches T with probability 1 no matter what policy it follows. (This assumption is satisfied trivially if γ < 1.) The reward for a trajectory is the sum of all of its one-step rewards. Our goal is to find an optimal policy, that is, a policy which on average generates trajectories with the highest possible reward. Define Q*(s, a) to be the best total expected reward that we can achieve by starting in state s, performing action a, and acting optimally afterwards. Define V*(s) = max_a Q*(s, a). Knowledge of either Q* or the combination of V*, δ, and r is enough to determine an optimal policy. The SARSA(0) algorithm maintains an approximation to Q*. We will write Q(s, a) for s ∈ S and a ∈ A to refer to this approximation. We will assume that Q is a full-rank linear function of some parameters w.
For convenience of notation, we will write Q(T, a) = 0 for all a ∈ A, and tack an arbitrary action onto the end of all trajectories (which would otherwise end with the terminal state).² After seeing a trajectory fragment s, a, r, s', a', the SARSA(0) algorithm updates

  Q(s, a) ← r + Q(s', a').

The notation Q(s, a) ← V means that the parameters w which represent Q(s, a) should be adjusted by gradient descent to reduce the error (Q(s, a) - V)²; that is, for some preselected learning rate α ≥ 0,

  w_new = w_old + α (V - Q(s, a)) ∂Q(s, a)/∂w.

For convenience, we assume that α remains constant within a single trajectory. We also make the standard assumption that the sequence of learning rates is fixed before the start of learning and satisfies Σ_t α_t = ∞ and Σ_t α_t² < ∞. We will consider only the trajectory-based version of SARSA(0). This version changes policies only between trajectories. At the beginning of each trajectory, it selects the ε-greedy policy for its current Q function. From state s, the ε-greedy policy chooses the action argmax_a Q(s, a) with probability 1 - ε, and otherwise selects uniformly at random among all actions. This rule ensures that, no matter the sequence of learned Q functions, each state-action pair will be visited infinitely often. (The use of ε-greedy policies is not essential. We just need to be able to find a region that contains all of the approximate value functions for every policy considered, and a bound on the convergence rate of TD(0).)

² The proof given here does not cover the TD-Gammon program, since TD-Gammon uses a nonlinear function approximator to represent its value function. Interestingly, though, the proof extends easily to cover games such as backgammon in addition to MDPs. It also extends to cover SARSA(λ) and V(λ) for λ > 0.
We can compare the SARSA(0) update rule to the one for Q-learning:

  Q(s, a) ← r + max_b Q(s', b).

Often a' in the SARSA(0) update rule will be the same as the maximizing b in the Q-learning update rule; the difference only appears when the agent takes an exploring action, i.e., one which is not greedy for the current Q function. The V(0) algorithm maintains an approximation to V*, which we will write V(s) for all s ∈ S. Again, we will assume V is a full-rank linear function of parameters w, and V(T) is held fixed at 0. After seeing a trajectory fragment s, a, r, s', V(0) sets

  V(s) ← r + V(s').

This update ignores a. Often a is chosen according to a greedy or ε-greedy policy for a recent V. However, for our analysis we only need to assume that we consider finitely many policies and that the policy remains fixed during each trajectory. We leave open the question of whether updates to w happen immediately after each transition or only at the end of each trajectory. As pointed out in [9], this difference will not affect convergence: the updates within a single trajectory are O(α), so they cause a change in Q(s, a) or V(s) of O(α), which means subsequent updates are affected by at most O(α²). Since α is decaying to zero, the O(α²) terms can be neglected. (If we were to change policies during the trajectory, this argument would no longer hold, since small changes in Q or V can cause large changes in the policy.)

3 The result

Our result is that the weights w in either SARSA(0) or V(0) converge with probability 1 to a fixed region. The proof of the result is based on the following intuition: while SARSA(0) and V(0) might consider many different policies over time, on any given trajectory they always follow the TD(0) update rule for some policy.
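The update rules above are easy to exercise on a toy problem. The sketch below runs trajectory-based SARSA(0) with a tabular Q function (a one-hot linear, hence full-rank, parameterization) and an ε-greedy policy on a made-up two-state episodic MDP; it is an illustration of the algorithm as defined here, not the paper's experimental setup.

```python
import random

# Toy episodic MDP: states 0 -> 1 -> terminal; two actions per state.
# Action 0 yields reward 1, action 1 yields reward 0; both advance the state.
TERMINAL = 2

def step(s, a):
    return (1.0 if a == 0 else 0.0), s + 1

def epsilon_greedy(Q, s, eps, rng):
    if rng.random() < eps:
        return rng.randrange(2)
    return max((0, 1), key=lambda a: Q[s][a])   # greedy w.r.t. current Q

def sarsa0(episodes=5000, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]    # Q[TERMINAL][a] stays 0
    for k in range(1, episodes + 1):
        alpha = 1.0 / k      # held constant within each trajectory
        s = 0
        a = epsilon_greedy(Q, s, eps, rng)
        while s != TERMINAL:
            r, s2 = step(s, a)
            a2 = epsilon_greedy(Q, s2, eps, rng)   # the tacked-on action at T
            Q[s][a] += alpha * (r + Q[s2][a2] - Q[s][a])   # SARSA(0) update
            s, a = s2, a2
    return Q

Q = sarsa0()
# From state 1 the next state is terminal, so Q[1][0] targets exactly 1
# and Q[1][1] targets exactly 0, regardless of the exploration policy.
```

The learning-rate schedule α_k = 1/k satisfies Σα_k = ∞ and Σα_k² < ∞ as assumed in the text, and the ε-greedy choice at the terminal state plays the role of the "arbitrary action tacked onto the end" of each trajectory.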
The TD(0) update is, under general conditions, a 2-norm contraction, and so would converge to its fixed point if it were applied repeatedly; what causes SARSA(0) and V(0) not to converge to a point is just that they consider different policies (and so take steps towards different fixed points) during different trajectories. Crucially, under general conditions, all of these fixed points are within some bounded region. So, we can view the SARSA(0) and V(0) update rules as contraction mappings plus a bounded amount of "slop." With this observation, standard convergence theorems show that the weight vectors generated by SARSA(0) and V(0) cannot diverge.

Theorem 1 For any Markov decision process M satisfying our assumptions, there is a bounded region R such that the SARSA(0) algorithm, when acting on M, produces a series of weight vectors which with probability 1 converges to R. Similarly, there is another bounded region R' such that the V(0) algorithm acting on M produces a series of weight vectors converging with probability 1 to R'.

PROOF: Lemma 2, below, shows that both the SARSA(0) and V(0) updates can be written in the form

  w_{t+1} = w_t - α_t (A_t w_t - r_t + ε_t),

where A_t is positive definite, α_t is the current learning rate, E(ε_t) = 0, Var(ε_t) ≤ K(1 + ||w_t||²), and A_t and r_t depend only on the currently greedy policy. (A_t and r_t represent, in a manner described in the lemma, the transition probabilities and one-step costs which result from following the current policy. Of course, w_t, A_t, and r_t will be different depending on whether we are following SARSA(0) or V(0).) Since A_t is positive definite, the SARSA(0) and V(0) updates are 2-norm contractions for small enough α_t. So, if we kept the policy fixed rather than changing it at the beginning of each trajectory, standard results such as Lemma 1 below would guarantee convergence.
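The "contractions plus bounded slop" picture can be illustrated numerically. The sketch below is a made-up 2D example, not from the paper: it alternates deterministic updates w ← w - α(A_π w - r_π) between two "policies" with different positive definite A_π and different fixed points A_π⁻¹ r_π, and checks that the iterates settle into a bounded region without converging to either fixed point.

```python
def update(w, A, r, alpha):
    """One deterministic step of w <- w - alpha * (A w - r) in 2D."""
    gx = A[0][0] * w[0] + A[0][1] * w[1] - r[0]
    gy = A[1][0] * w[0] + A[1][1] * w[1] - r[1]
    return (w[0] - alpha * gx, w[1] - alpha * gy)

# Two "policies": positive definite A's with different fixed points.
pol1 = (((2.0, 0.0), (0.0, 1.0)), (2.0, 0.0))   # fixed point (1, 0)
pol2 = (((1.0, 0.0), (0.0, 2.0)), (0.0, 2.0))   # fixed point (0, 1)

w = (10.0, -10.0)                                # start far away
trace = []
for t in range(2000):
    A, r = pol1 if (t // 10) % 2 == 0 else pol2  # switch every 10 steps
    w = update(w, A, r, alpha=0.1)
    trace.append(w)

late = trace[1000:]
radius = max(max(abs(x), abs(y)) for x, y in late)
# `radius` is small: the iterates enter and never leave a bounded region,
# yet they keep oscillating between the two fixed points' neighborhoods.
```

With a fixed policy either update alone would converge to its own fixed point; the switching is exactly what produces oscillation within a region rather than convergence to a point, mirroring the behavior Theorem 1 bounds.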
The intuition is that we can define a nonnegative potential function J(w) and show that, on average, the updates tend to decrease J(w) as long as α_t is small enough and J(w) starts out large enough compared to α_t. To apply Lemma 1 under the assumption that we keep the policy constant rather than changing it every trajectory, write A_t = A and r_t = r for all t, and write w_π = A⁻¹r. Let ρ be the smallest eigenvalue of A (which must be real and positive since A is positive definite). Write s_t = A w_t - r + ε_t for the update direction at step t. Then if we take J(w) = ||w - w_π||²,

  E(∇J(w_t)ᵀ s_t | w_t) = 2 (w_t - w_π)ᵀ (A w_t - r + E(ε_t)) = 2 (w_t - w_π)ᵀ (A w_t - A w_π) ≥ 2ρ ||w_t - w_π||² = 2ρ J(w_t),

so that -s_t is a descent direction in the sense required by the lemma. It is easy to check the lemma's variance condition. So, Lemma 1 shows that J(w_t) converges with probability 1 to 0, which means w_t must converge with probability 1 to w_π. If we pick an arbitrary vector u and define H(w) = max(0, ||w - u|| - C)² for a sufficiently large constant C, then the same argument reaches the weaker conclusion that w_t must converge with probability 1 to a sphere of radius C centered at u. To see why, note that -s_t is also a descent direction for H(w): inside the sphere, H = 0 and ∇H = 0, so the descent condition is satisfied trivially. Outside the sphere,

  ∇H(w) = 2 (w - u) (||w - u|| - C) / ||w - u|| = d(w) (w - u),

and so

  ∇H(w_t)ᵀ E(s_t | w_t) = d(w_t) (w_t - u)ᵀ A (w_t - w_π)
                       = d(w_t) (w_t - w_π + w_π - u)ᵀ A (w_t - w_π)
                       ≥ d(w_t) ( ρ ||w_t - w_π||² - ||w_π - u|| ||A|| ||w_t - w_π|| ).

The positive term will be larger than the negative one if ||w_t - w_π|| is large enough. So, if we choose C large enough, the descent condition will be satisfied. The variance condition is again easy to check. Lemma 3 shows that ∇H is Lipschitz. So, Lemma 1 shows that H(w_t) converges with probability 1 to 0, which means that w_t must converge with probability 1 to the sphere of radius C centered at u.
But now we are done: since there are finitely many policies that SARSA(0) or V(0) can consider, we can pick any u and then choose a C large enough that the above argument holds for all policies simultaneously. With this choice of C the update for any policy decreases H(w_t) on average as long as α_t is small enough, so the update for SARSA(0) or V(0) does too, and Lemma 1 applies. □

The following lemma is Corollary 1 of [10]. In the statement of the lemma, a Lipschitz continuous function F is one for which there exists a constant L so that ||F(u) - F(w)|| ≤ L ||u - w|| for all u and w. The Lipschitz condition is essentially a uniform bound on the derivative of F.

Lemma 1 Let J be a differentiable function, bounded below by J*, and let ∇J be Lipschitz continuous. Suppose the sequence w_t satisfies

  w_{t+1} = w_t - α_t s_t

for random vectors s_t whose distribution depends only on w_t (and not otherwise on the earlier w_{t-1}, w_{t-2}, ...). Suppose -s_t is a descent direction for J in the sense that

  E(s_t | w_t)ᵀ ∇J(w_t) ≥ δ(ε) > 0  whenever  J(w_t) ≥ J* + ε.

Suppose also that

  E(||s_t||² | w_t) ≤ K_1 J(w_t) + K_2 E(s_t | w_t)ᵀ ∇J(w_t) + K_3,

and finally that the constants α_t satisfy α_t > 0, Σ_t α_t = ∞, and Σ_t α_t² < ∞. Then J(w_t) → J* with probability 1.

Most of the work in proving the next lemma is already present in [1]. The transformation from an MDP under a fixed policy to a Markov chain is standard.

Lemma 2 The update made by SARSA(0) or V(0) during a single trajectory can be written in the form

  w_new = w_old - α (A_π w_old - r_π + ε),

where the constant matrix A_π and constant vector r_π depend on the currently greedy policy π, α is the current learning rate, and E(ε) = 0. Furthermore, A_π is positive definite, and there is a constant K such that Var(ε) ≤ K(1 + ||w||²).

PROOF: Consider the following Markov process M_π: M_π has one state for each state-action pair in M. If M has a transition which goes from state s under action a with reward r to state s' with probability p, then M_π has a transition from state (s, a) with reward r to state (s', a') for every a'; the probability of this transition is p·π(a' | s').
We will represent the value function for M_π in the same way that we represented the Q function for M; in other words, the representation for V((s, a)) is the same as the representation for Q(s, a). With these definitions, it is easy to see that TD(0) acting on M_π produces exactly the same sequence of parameter changes as SARSA(0) acting on M under the fixed policy π. (And since π(a | s) > 0, every state of M_π will be visited infinitely often.) Write T_π for the transition probability matrix of the above Markov process. That is, the entry of T_π in row (s, a) and column (s', a') will be equal to the probability of taking a step to (s', a') given that we start in (s, a). By definition, T_π is substochastic. That is, it has nonnegative entries, and its row sums are less than or equal to 1. Write s̄ for the vector whose (s, a)th element is S_0(s) π(a | s), that is, the probability that we start in state s and take action a. Write d_π = (I - T_πᵀ)⁻¹ s̄, where I is the identity matrix. As demonstrated in, e.g., [11], d_π is the vector of expected visitation frequencies under π; that is, the element of d_π corresponding to state s and action a is the expected number of times that the agent will visit state s and select action a during a single trajectory following policy π. Write D_π for the diagonal matrix with d_π on its diagonal. Write r̄ for the vector of expected rewards; that is, the component of r̄ corresponding to state s and action a is E(r(s, a)). Finally write X for the Jacobian matrix ∂Q/∂w. With this notation, Sutton [1] showed that the expected TD(0) update is

  E(w_new | w_old) = w_old - α Xᵀ D_π (I - T_π) X w_old + α Xᵀ D_π r̄.

(Actually, he only considered the case where all rewards are zero except on transitions from nonterminal to terminal states, but his argument works equally well for the more general case where nonzero rewards are allowed everywhere.) So, we can take A_π = Xᵀ D_π (I - T_π) X and r_π = Xᵀ D_π r̄ to make E(ε) = 0.
Furthermore, Sutton showed that, as long as the agent reaches the terminal state with probability 1 (in other words, as long as π is proper) and as long as every state is visited with positive probability (which is true since all states are reachable and π has a nonzero probability of choosing every action), the matrix D_π(I - T_π) is strictly positive definite. Therefore, so is A_π. Finally, as can be seen from Sutton's equations on p. 25, there are two sources of variance in the update direction: variation in the number of times each transition is visited, and variation in the one-step rewards. The visitation frequencies and the one-step rewards both have bounded variance, and are independent of one another. They enter into the overall update in two ways: there is one set of terms which is bilinear in the one-step rewards and the visitation frequencies, and there is another set of terms which is bilinear in the visitation frequencies and the weights w. The former set of terms has constant variance. Because the policy is fixed, w is independent of the visitation frequencies, and so the latter set of terms has variance proportional to ||w||². So, there is a constant K such that the total variance in ε can be bounded by K(1 + ||w||²). A similar but simpler argument applies to V(0). In this case we define M_π to have the same states as M, and to have the transition matrix T_π whose element (s, s') is the probability of landing in s' in M at step t + 1, given that we start in s at step t and follow π. Write s̄ for the vector of starting probabilities, that is, s̄_x = S_0(x). Now define X = ∂V/∂w and d_π = (I - T_πᵀ)⁻¹ s̄. Since we have assumed that all policies are proper and that every policy considered has a positive probability of reaching any state, the update matrix A_π = Xᵀ D_π (I - T_π) X is strictly positive definite. □

Lemma 3 The gradient of the function H(w) = max(0, ||w|| - 1)² is Lipschitz continuous.
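Before the proof, the claim is easy to spot-check numerically. The sketch below (purely illustrative, not from the paper) evaluates the gradient of H(w) = max(0, ||w|| - 1)² analytically and verifies ||∇H(u) - ∇H(v)|| ≤ L ||u - v|| on random pairs of points; the candidate constant L = 2 is an assumption taken from the Hessian calculation in the proof.

```python
import math
import random

def grad_H(w):
    """Gradient of H(w) = max(0, ||w|| - 1)^2: zero inside the unit
    sphere, 2(||w|| - 1) w / ||w|| outside."""
    norm = math.sqrt(sum(x * x for x in w))
    if norm <= 1.0:
        return [0.0] * len(w)
    scale = 2.0 * (norm - 1.0) / norm
    return [scale * x for x in w]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

rng = random.Random(0)
L = 2.0   # candidate Lipschitz constant (an assumed bound, from the Hessian)
ok = True
for _ in range(10_000):
    u = [rng.uniform(-3, 3) for _ in range(3)]
    v = [rng.uniform(-3, 3) for _ in range(3)]
    if dist(grad_H(u), grad_H(v)) > L * dist(u, v) + 1e-12:
        ok = False
        break
# `ok` stays True: no sampled pair violates the Lipschitz bound.
```

The random points deliberately straddle the unit sphere boundary, where the piecewise definition of ∇H changes, since that is where a Lipschitz failure would show up if the gradient were not continuous there.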
PROOF: Inside the unit sphere, H and all of its derivatives are uniformly zero. Outside, we have ∇H = w·d(w) where d(w) = 2(||w|| - 1)/||w||, and

  ∇²H = d(w) I + ∇d(w) wᵀ = d(w) I + (w wᵀ / ||w||²) (2 - d(w)).

The norm of the first term is d(w), the norm of the second is 2 - d(w), and since one of the terms is a multiple of I the norms add. So, the norm of ∇²H is 0 inside the unit sphere and at most 2 outside. At the boundary of the unit sphere, ∇H is continuous, and its directional derivatives from every direction are bounded by the argument above. So, ∇H is Lipschitz continuous. □

Acknowledgements

Thanks to Andrew Moore and to the anonymous reviewers for helpful comments. This work was supported in part by DARPA contract number F30602-97-1-0215, and in part by NSF KDI award number DMS-9873442. The opinions and conclusions are the author's and do not reflect those of the US government or its agencies.

References

[1] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9-44, 1988.
[2] Geoffrey J. Gordon. Stable function approximation in dynamic programming. Technical Report CMU-CS-95-103, Carnegie Mellon University, 1995.
[3] L. C. Baird. Residual algorithms: Reinforcement learning with function approximation. In Machine Learning: Proceedings of the Twelfth International Conference, San Francisco, CA, 1995. Morgan Kaufmann.
[4] Geoffrey J. Gordon. Chattering in SARSA(λ). Internal report, 1996. CMU Learning Lab. Available from www.cs.cmu.edu/~ggordon.
[5] R. S. Sutton. Open theoretical questions in reinforcement learning. In P. Fischer and H. U. Simon, editors, Computational Learning Theory (Proceedings of EuroCOLT'99), pages 11-17, 1999.
[6] D. P. de Farias and B. Van Roy. On the existence of fixed points for approximate value iteration and temporal-difference learning. Journal of Optimization Theory and Applications, 105(3), 2000.
[7] Gavin A. Rummery and Mahesan Niranjan.
On-line Q-learning using connectionist systems. Technical Report 166, Cambridge University Engineering Department, 1994.
[8] G. Tesauro. TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural Computation, 6:215-219, 1994.
[9] T. Jaakkola, M. I. Jordan, and S. P. Singh. On the convergence of stochastic iterative dynamic programming algorithms. Neural Computation, 6:1185-1201, 1994.
[10] B. T. Polyak and Ya. Z. Tsypkin. Pseudogradient adaptation and training algorithms. Automation and Remote Control, 34(3):377-397, 1973. Translated from Avtomatika i Telemekhanika.
[11] J. G. Kemeny and J. L. Snell. Finite Markov Chains. Van Nostrand-Reinhold, New York, 1960.
|
2000
|
1
|
1,753
|
Occam's Razor

Carl Edward Rasmussen
Department of Mathematical Modelling
Technical University of Denmark
Building 321, DK-2800 Kongens Lyngby, Denmark
carl@imm.dtu.dk http://bayes.imm.dtu.dk

Zoubin Ghahramani
Gatsby Computational Neuroscience Unit
University College London
17 Queen Square, London WC1N 3AR, England
zoubin@gatsby.ucl.ac.uk http://www.gatsby.ucl.ac.uk

Abstract

The Bayesian paradigm apparently only sometimes gives rise to Occam's Razor; at other times very large models perform well. We give simple examples of both kinds of behaviour. The two views are reconciled when measuring complexity of functions, rather than of the machinery used to implement them. We analyze the complexity of functions for some linear in the parameter models that are equivalent to Gaussian Processes, and always find Occam's Razor at work.

1 Introduction

Occam's Razor is a well known principle of "parsimony of explanations" which is influential in scientific thinking in general and in problems of statistical inference in particular. In this paper we review its consequences for Bayesian statistical models, where its behaviour can be easily demonstrated and quantified. One might think that one has to build a prior over models which explicitly favours simpler models. But as we will see, Occam's Razor is in fact embodied in the application of Bayesian theory. This idea is known as an "automatic Occam's Razor" [Smith & Spiegelhalter, 1980; MacKay, 1992; Jefferys & Berger, 1992]. We focus on complex models with large numbers of parameters which are often referred to as non-parametric. We will use the term to refer to models in which we do not necessarily know the roles played by individual parameters, and inference is not primarily targeted at the parameters themselves, but rather at the predictions made by the models. These types of models are typical for applications in machine learning.
From a non-Bayesian perspective, arguments are put forward for adjusting model complexity in the light of limited training data, to avoid over-fitting. Model complexity is often regulated by adjusting the number of free parameters in the model, and sometimes complexity is further constrained by the use of regularizers (such as weight decay). If the model complexity is either too low or too high, performance on an independent test set will suffer, giving rise to a characteristic Occam's Hill. Typically an estimator of the generalization error or an independent validation set is used to control the model complexity. From the Bayesian perspective, authors seem to take two conflicting stands on the question of model complexity. One view is to infer the probability of the model for each of several different model sizes and use these probabilities when making predictions. An alternate view suggests that we simply choose a "large enough" model and sidestep the problem of model size selection. Note that both views assume that parameters are averaged over. Example: should we use Occam's Razor to determine the optimal number of hidden units in a neural network, or should we simply use as many hidden units as possible computationally? We now describe these two views in more detail.

1.1 View 1: Model size selection

One of the central quantities in Bayesian learning is the evidence, the probability of the data given the model, P(Y|Mᵢ), computed as the integral over the parameters w of the likelihood times the prior. The evidence is related to the probability of the model, P(Mᵢ|Y), through Bayes' rule:

P(Mᵢ|Y) = P(Y|Mᵢ)P(Mᵢ)/P(Y),

where it is not uncommon that the prior on models P(Mᵢ) is flat, such that P(Mᵢ|Y) is proportional to the evidence. Figure 1 explains why the evidence discourages overcomplex models, and can be used to select¹ the most probable model.
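To make View 1 concrete, here is a minimal sketch of evidence-based model-order selection for a linear-in-the-parameters model with a Gaussian weight prior, where the evidence is available in closed form. All settings below (polynomial features, prior scale alpha, fixed noise variance) are my own illustrative assumptions, not taken from the paper:

```python
import numpy as np

def log_evidence(x, y, order, alpha=1.0, noise_var=0.1):
    """Log marginal likelihood P(Y|M) for a polynomial model of the given
    order, after integrating out the weights w ~ N(0, I/alpha) analytically:
    Y ~ N(0, noise_var*I + Phi Phi^T / alpha)."""
    Phi = np.vander(x, order + 1, increasing=True)
    C = noise_var * np.eye(len(x)) + Phi @ Phi.T / alpha
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + y @ np.linalg.solve(C, y) + len(x) * np.log(2 * np.pi))

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 30)
y = 1.0 - 2.0 * x**2 + rng.normal(0, 0.3, size=x.size)   # data from a quadratic

scores = [log_evidence(x, y, d) for d in range(8)]
best = int(np.argmax(scores))
print(best)   # the evidence is highest for a moderate model order
```

Models that are too simple cannot explain the data, while models that are too complex spread their prior mass thinly, so the log evidence traces out the Occam's Hill described above.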
It is also possible to understand how the evidence discourages overcomplex models, and therefore embodies Occam's Razor, by using the following interpretation. The evidence is the probability that if you randomly selected parameter values from your model class, you would generate data set Y. Models that are too simple will be very unlikely to generate that particular data set, whereas models that are too complex can generate many possible data sets, so again, they are unlikely to generate that particular data set at random.

1.2 View 2: Large models

In non-parametric Bayesian models there is no statistical reason to constrain models, as long as our prior reflects our beliefs. In fact, since constraining the model order (i.e. number of parameters) to some small number would not usually fit in with our prior beliefs about the true data generating process, it makes sense to use large models (no matter how much data you have) and pursue the infinite limit if you can². For example, we ought not to limit the number of basis functions in function approximation a priori, since we don't really believe that the data was actually generated from a small number of fixed basis functions. Therefore, we should consider models with as many parameters as we can handle computationally. Neal [1996] showed how multilayer perceptrons with large numbers of hidden units achieved good performance on small data sets. He used sophisticated MCMC techniques to implement averaging over parameters. Following this line of thought there is no model complexity selection task: we don't need to evaluate the evidence (which is often difficult) and we don't need or want to use Occam's Razor to limit the number of parameters in our model.

¹We really ought to average together predictions from all models weighted by their probabilities. However, if the evidence is strongly peaked, or for practical reasons, we may want to select one as an approximation.
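This "probability of generating the data set at random" reading can be verified directly on a toy problem: averaging the likelihood over parameter values drawn from the prior recovers the closed-form evidence. The tiny data set, prior and noise settings below are illustrative assumptions of mine, chosen so that naive Monte Carlo is feasible:

```python
import numpy as np

rng = np.random.default_rng(1)

x = np.array([-1.0, 0.0, 1.0])          # tiny data set so naive MC works
y = np.array([-0.9, 0.1, 1.1])
noise_var, alpha = 0.1, 1.0
Phi = np.vander(x, 2, increasing=True)  # straight-line model y = w0 + w1*x

# Monte Carlo: evidence = E_{w ~ prior}[ p(Y|w) ].
W = rng.normal(0, 1 / np.sqrt(alpha), size=(200_000, 2))
means = W @ Phi.T
logliks = -0.5 * (((y - means) ** 2).sum(axis=1) / noise_var
                  + len(x) * np.log(2 * np.pi * noise_var))
mc = np.exp(logliks).mean()

# Closed form: Y ~ N(0, noise_var*I + Phi Phi^T / alpha).
C = noise_var * np.eye(len(x)) + Phi @ Phi.T / alpha
_, logdet = np.linalg.slogdet(C)
exact = np.exp(-0.5 * (logdet + y @ np.linalg.solve(C, y)
                       + len(x) * np.log(2 * np.pi)))
print(mc, exact)   # the two estimates agree closely
```

The same estimator applied to a grossly over-parameterised model would average the likelihood over a much larger prior volume, which is exactly the normalization effect that penalises overcomplex models.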
²For some models, the limit of an infinite number of parameters is a simple model which can be treated tractably. Two examples are the Gaussian Process limit of Bayesian neural networks [Neal, 1996], and the infinite limit of Gaussian mixture models [Rasmussen, 2000].

Figure 1: Left panel: the evidence as a function of an abstract one-dimensional representation of "all possible" datasets. Because the evidence must "normalize", very complex models which can account for many data sets only achieve modest evidence; simple models can reach high evidences, but only for a limited set of data. When a dataset Y is observed, the evidence can be used to select between model complexities. Such selection cannot be done using just the likelihood, P(Y|w, Mᵢ). Right panel: neural networks with different numbers of hidden units form a family of models, posing the model selection problem.

2 Linear in the parameters models - Example: the Fourier model

For simplicity, consider function approximation using the class of models that are linear in the parameters; this class includes many well known models such as polynomials, splines, kernel methods, etc:

y(x) = Σᵢ wᵢφᵢ(x)  ⇔  y = wᵀΦ,

where y is the scalar output, w are the unknown weights (parameters) of the model, φᵢ(x) are fixed basis functions, Φᵢₙ = φᵢ(x⁽ⁿ⁾), and x⁽ⁿ⁾ is the (scalar or vector) input for example number n. For example, a Fourier model for scalar inputs has the form:

y(x) = a₀ + Σ_{d=1}^D [a_d sin(dx) + b_d cos(dx)],

where w = {a₀, a₁, b₁, ..., a_D, b_D}. Assuming an independent Gaussian prior on the weights:

p(w|S, c) ∝ exp(−(S/2)[c₀a₀² + Σ_{d=1}^D c_d(a_d² + b_d²)]),

where S is an overall scale and c_d are precisions (inverse variances) for weights of order (frequency) d. It is easy to show that Gaussian priors over weights imply Gaussian Process priors over functions³. The covariance function for the corresponding Gaussian Process prior is:

K(x, x′) = [Σ_{d=0}^D cos(d(x − x′))/c_d]/S.

³Under the prior, the joint density of any (finite) set of outputs y is Gaussian.

Figure 2: Top: 12 different model orders for the "unscaled" model: c_d ∝ 1. The mean predictions are shown with a full line, the dashed and dotted lines limit the 50% and 95% central mass of the predictive distribution (which is Student-t). Bottom: posterior probability of the models, normalised over the 12 models. The probabilities of the models exhibit an Occam's Hill, discouraging models that are either "too small" or "too big".

2.1 Inference in the Fourier model

Given data V = {x⁽ⁿ⁾, y⁽ⁿ⁾ | n = 1, ..., N} with independent Gaussian noise with precision τ, the likelihood is:

p(Y|x, w, τ) ∝ ∏_{n=1}^N exp(−(τ/2)[y⁽ⁿ⁾ − wᵀΦₙ]²).

For analytical convenience, let the scale of the prior be proportional to the noise precision, S = Cτ, and put vague⁴ Gamma priors on τ and C:

p(τ) ∝ τ^{α₁−1} exp(−β₁τ),  p(C) ∝ C^{α₂−1} exp(−β₂C);

then we can integrate over weights and noise to get the evidence as a function of the prior hyperparameters C (the overall scale) and c (the relative scales):

E(C, c) = ∫∫ p(Y|x, w, τ) p(w|C, τ, c) p(τ) p(C) dτ dw
= [β₁^{α₁} β₂^{α₂} Γ(α₁ + N/2)]/[(2π)^{N/2} Γ(α₁) Γ(α₂)] × |A|^{−1/2} [β₁ + ½ yᵀ(I − ΦA⁻¹Φᵀ)y]^{−α₁−N/2} C^{D+α₂−1/2} exp(−β₂C) c₀^{1/2} ∏_{d=1}^D c_d,

⁴We choose vague priors by setting α₁ = α₂ = β₁ = β₂ = 0.2 throughout.
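The weight-space/function-space equivalence used above is easy to verify numerically: the covariance of function values under the Gaussian weight prior must equal the closed-form K(x, x′). The particular order, scale and precisions below are arbitrary illustrative values:

```python
import numpy as np

D, S = 6, 2.0
c = 1.0 + np.arange(D + 1, dtype=float) ** 2   # illustrative precisions c_0..c_D

def phi(x):
    """Fourier feature vector [1, sin(x), cos(x), ..., sin(Dx), cos(Dx)]."""
    out = [1.0]
    for d in range(1, D + 1):
        out += [np.sin(d * x), np.cos(d * x)]
    return np.array(out)

# Prior variance of each weight: 1/(S c_0) for a0, 1/(S c_d) for each a_d, b_d.
prior_var = np.concatenate([[1 / (S * c[0])], np.repeat(1 / (S * c[1:]), 2)])

x1, x2 = 0.3, -1.2
k_weight_space = phi(x1) @ (prior_var * phi(x2))            # E[y(x1) y(x2)]
k_closed_form = np.sum(np.cos(np.arange(D + 1) * (x1 - x2)) / c) / S
print(np.isclose(k_weight_space, k_closed_form))            # -> True
```

The agreement follows from the identity sin(dx)sin(dx′) + cos(dx)cos(dx′) = cos(d(x − x′)), which is exactly why the kernel depends only on x − x′.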
Figure 3: Functions drawn at random from the Fourier model with order D = 6 (dark) and D = 500 (light) for four different scalings (scaling exponents 0, 2, 3, 4); limiting behaviour from left to right: discontinuous, Brownian, borderline smooth, smooth.

where A = ΦᵀΦ + C diag(c̃), and the tilde indicates duplication of all components except for the first. We can optimize⁵ the overall scale C of the weights (using e.g. Newton's method). How do we choose the relative scales, c? The answer to this question turns out to be intimately related to the two different views of Bayesian inference.

2.2 Example

To illustrate the behaviour of this model we use data generated from a step function that changes from −1 to 1, corrupted by independent additive Gaussian noise with variance 0.25. Note that the true function cannot be implemented exactly with a model of finite order, as would typically be the case in realistic modelling situations (the true function is not "realizable", or the model is said to be "incomplete"). The input points are arranged in two lumps of 16 and 8 points, the step occurring in the middle of the larger; see figure 2. If we choose the scaling precisions to be independent of the frequency of the contributions, c_d ∝ 1 (while normalizing the sum of the inverse precisions), we obtain the predictions depicted in figure 2. We clearly see an Occam's Razor behaviour: a model order of around D = 6 is preferred. One might say that the limited data does not support models more complex than this. One way of understanding this is to note that as the model order grows, the prior parameter volume grows, but the relative posterior volume decreases, because parameters must be accurately specified in the complex model to ensure good agreement with the data.
The ratio of prior to posterior volumes is the Occam Factor, which may be interpreted as a penalty to pay for fitting parameters. In the present model, it is easy to draw functions at random from the prior by simply drawing values for the coefficients from their prior distributions. The left panel of figure 3 shows samples from the prior for the previous example for D = 6 and D = 500. With increasing order the functions become more and more dominated by high-frequency components. In most modelling applications, however, we have some prior expectations about smoothness. By scaling the precision factors c_d we can arrange that the prior over functions converges to functions with particular characteristics as D grows towards infinity. Here we will focus on scalings of the form c_d = d^γ for different values of γ, the scaling exponent. As an example, if we choose the scaling c_d = d³ we do not get an Occam's Razor in terms of the order of the model; see figure 4. Note that the predictions and their errorbars become almost independent of the model order as long as the order is large enough. Note also that the errorbars for these large models seem more reasonable than for D = 6 in figure 2 (where a spurious "dip" between the two lumps of data is predicted with high confidence). With this choice of scaling, it seems that the "large models" view is appropriate.

⁵Of course, we ought to integrate over C, but unfortunately that is difficult.

Figure 4: The same as figure 2, except that the scaling c_d = d³ was used here, leading to a prior which converges to smooth functions as D → ∞.
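Drawing functions from the prior, as described above, takes only a few lines, and the effect of the scaling exponent on sample roughness can be measured directly. D, the evaluation grid, the unit precision on a₀ and the random seed are illustrative choices of mine:

```python
import numpy as np

rng = np.random.default_rng(2)
D, S = 200, 1.0
x = np.linspace(-np.pi, np.pi, 2000)
d = np.arange(1, D + 1)

def sample_function(gamma):
    """One random function from the Fourier prior with c_d = d^gamma
    (the constant term a0 is given unit precision here for simplicity)."""
    c = d.astype(float) ** gamma
    a0 = rng.normal(0, 1 / np.sqrt(S))
    a = rng.normal(0, 1 / np.sqrt(S * c))
    b = rng.normal(0, 1 / np.sqrt(S * c))
    return a0 + np.sin(np.outer(x, d)) @ a + np.cos(np.outer(x, d)) @ b

rough = sample_function(0.0)    # gamma = 0: dominated by high frequencies
smooth = sample_function(3.0)   # gamma = 3: borderline smooth

# Mean squared increment between neighbouring grid points as a roughness proxy.
print(np.mean(np.diff(rough) ** 2), np.mean(np.diff(smooth) ** 2))
```

With γ = 0 every frequency contributes equally and the samples are dominated by the highest frequencies present, while γ = 3 suppresses them by a factor d³, which is the qualitative difference visible across the panels of figure 3.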
There is no Occam's Razor; instead we see that as long as the model is complex enough, the evidence is flat. We also notice that the predictive density of the model is unchanged as long as D is sufficiently large.

3 Discussion

In the previous examples we saw that, depending on the scaling properties of the prior over parameters, both the Occam's Razor view and the large models view can seem appropriate. However, the example was unsatisfactory because it is not obvious how to choose the scaling exponent γ. We can gain more insight into the meaning of γ by analysing properties of functions drawn from the prior in the limit of large D. It is useful to consider the expected squared difference of outputs corresponding to nearby inputs, separated by Δ:

G(Δ) = E[(f(x) − f(x + Δ))²],

in the limit as Δ → 0. In the table in figure 5 we have computed these limits for various values of γ, together with the characteristics of these functions. For example, a property of smooth functions is that G(Δ) ∝ Δ². Using this kind of information may help to choose good values for γ in practical applications. Indeed, we can attempt to infer the "characteristics of the function" γ from the data. In figure 5 we show how the evidence depends on γ and the overall scale C for a model of large order (D = 200). It is seen that the evidence has a maximum around γ = 3. In fact we are seeing Occam's Razor again! This time it is not in terms of the dimension of the model, but rather in terms of the complexity of the functions under the priors implied by different values of γ. Large values of γ correspond to priors with most probability mass on simple functions, whereas small values of γ correspond to priors that allow more complex functions. Note that the "optimal" setting γ = 3 was exactly the model used in figure 4.
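The limiting behaviour of G(Δ) can be checked numerically from the prior covariance: G(Δ) = 2(K(0) − K(Δ)) = 2 Σ_d (1 − cos(dΔ))/c_d / S, so halving Δ should halve G when γ = 2 (Brownian) and quarter it when γ = 4 (smooth). The truncation level D and the test values of Δ are illustrative:

```python
import numpy as np

def G(delta, gamma, D=100_000, S=1.0):
    """Expected squared increment E[(f(x) - f(x+delta))^2] = 2(K(0) - K(delta))
    for c_d = d^gamma, truncated at D frequencies (the constant term cancels)."""
    d = np.arange(1, D + 1, dtype=float)
    return 2 * np.sum((1 - np.cos(d * delta)) / d**gamma) / S

r2 = G(0.005, 2) / G(0.01, 2)   # ~ 1/2: G scales like delta (Brownian)
r4 = G(0.005, 4) / G(0.01, 4)   # ~ 1/4: G scales like delta^2 (smooth)
print(r2, r4)
```

The measured ratios match the Δ and Δ² scalings attributed to γ = 2 and γ > 3 respectively.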
The table below (from figure 5) lists lim_{Δ→0} G(Δ) and the corresponding function characteristics for different values of γ:

γ < 1: lim G(Δ) = 1 (discontinuous)
γ = 2: lim G(Δ) ∝ Δ (Brownian)
γ = 3: lim G(Δ) ∝ Δ²(1 − ln Δ) (borderline smooth)
γ > 3: lim G(Δ) ∝ Δ² (smooth)

Figure 5: Left panel: the evidence as a function of the scaling exponent γ and overall scale C (computed for D = 200; maximum −27.48) has a maximum at γ = 3. The table shows the characteristics of functions for different values of γ. Examples of these functions are shown in figure 3.

4 Conclusion

We have reviewed the automatic Occam's Razor for Bayesian models and seen how, while not necessarily penalising the number of parameters, this process is active in terms of the complexity of functions. Although we have only presented simplistic examples, the explanations of the behaviours rely on very basic principles that are generally applicable. Which of the two differing Bayesian views is most attractive depends on the circumstances: sometimes the large model limit may be computationally demanding; also, it may be difficult to analyse the scaling properties of priors for some models. On the other hand, in typical applications of non-parametric models, the "large model" view may be the most convenient way of expressing priors since, typically, we don't seriously believe that the "true" generative process can be implemented exactly with a small model. Moreover, optimizing (or integrating) over continuous hyperparameters may be easier than optimizing over the discrete space of model sizes. In the end, whichever view we take, Occam's Razor is always at work discouraging overcomplex models.

Acknowledgements

This work was supported by the Danish Research Councils through the Computational Neural Network Center (CONNECT) and the THOR Center for Neuroinformatics. Thanks to Geoff Hinton for asking a puzzling question which stimulated the writing of this paper.

References

Jefferys, W. H. & Berger, J. O. (1992) Ockham's Razor and Bayesian Analysis. Amer. Sci., 80:64-72.
MacKay, D. J. C. (1992) Bayesian Interpolation. Neural Computation, 4(3):415-447.
Neal, R. M.
(1996) Bayesian Learning for Neural Networks, Lecture Notes in Statistics No. 118, New York: Springer-Verlag.
Rasmussen, C. E. (2000) The Infinite Gaussian Mixture Model, in S. A. Solla, T. K. Leen and K.-R. Müller (editors), Adv. Neur. Inf. Proc. Sys. 12, MIT Press, pp. 554-560.
Smith, A. F. M. & Spiegelhalter, D. J. (1980) Bayes factors and choice criteria for linear models. J. Roy. Stat. Soc., 42:213-220.
|
2000
|
10
|
1,754
|
Sparse Kernel Principal Component Analysis

Michael E. Tipping
Microsoft Research
St George House, 1 Guildhall St
Cambridge CB2 3NH, U.K.
mtipping@microsoft.com

Abstract

'Kernel' principal component analysis (PCA) is an elegant nonlinear generalisation of the popular linear data analysis method, where a kernel function implicitly defines a nonlinear transformation into a feature space wherein standard PCA is performed. Unfortunately, the technique is not 'sparse', since the components thus obtained are expressed in terms of kernels associated with every training vector. This paper shows that by approximating the covariance matrix in feature space by a reduced number of example vectors, using a maximum-likelihood approach, we may obtain a highly sparse form of kernel PCA without loss of effectiveness.

1 Introduction

Principal component analysis (PCA) is a well-established technique for dimensionality reduction, and examples of its many applications include data compression, image processing, visualisation, exploratory data analysis, pattern recognition and time series prediction. Given a set of N d-dimensional data vectors xₙ, which we take to have zero mean, the principal components are the linear projections onto the 'principal axes', defined as the leading eigenvectors of the sample covariance matrix S = N⁻¹ Σ_{n=1}^N xₙxₙᵀ = N⁻¹XᵀX, where X = (x₁, x₂, ..., x_N)ᵀ is the conventionally-defined 'design' matrix. These projections are of interest as they retain maximum variance and minimise the error of subsequent linear reconstruction. However, because PCA only defines a linear projection of the data, the scope of its application is necessarily somewhat limited. This has naturally motivated various developments of nonlinear 'principal component analysis' in an effort to model non-trivial data structures more faithfully, and a particularly interesting recent innovation has been 'kernel PCA' [4].
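As a concrete baseline (a generic sketch of standard PCA, not code from the paper), the principal axes can be obtained either from the covariance eigendecomposition or from the SVD of the design matrix, and the two must agree up to sign:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 5))  # correlated data
X -= X.mean(axis=0)                                      # zero mean, as assumed

S = X.T @ X / len(X)                 # sample covariance
evals, evecs = np.linalg.eigh(S)     # eigenvalues in ascending order
pc1 = evecs[:, -1]                   # leading principal axis

_, _, Vt = np.linalg.svd(X, full_matrices=False)
pc1_svd = Vt[0]                      # right singular vector of largest sigma

same = min(np.linalg.norm(pc1 - pc1_svd), np.linalg.norm(pc1 + pc1_svd)) < 1e-6
print(same)                          # -> True
```

The variance of the data projected onto pc1 equals the top eigenvalue, which is the maximum-variance property mentioned above.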
Kernel PCA, summarised in Section 2, makes use of the 'kernel trick', so effectively exploited by the 'support vector machine', in that a kernel function k(·,·) may be considered to represent a dot (inner) product in some transformed space if it satisfies Mercer's condition, i.e. if it is the continuous symmetric kernel of a positive integral operator. This can be an elegant way to 'non-linearise' linear procedures which depend only on inner products of the examples. Applications utilising kernel PCA are emerging [2], but in practice the approach suffers from one important disadvantage in that it is not a sparse method. Computation of principal component projections for a given input x requires evaluation of the kernel function k(x, xₙ) in respect of all N 'training' examples xₙ. This is an unfortunate limitation as in practice, to obtain the best model, we would like to estimate the kernel principal components from as much data as possible. Here we tackle this problem by first approximating the covariance matrix in feature space by a subset of outer products of feature vectors, using a maximum-likelihood criterion based on a 'probabilistic PCA' model detailed in Section 3. Subsequently applying (kernel) PCA defines sparse projections. Importantly, the approximation we adopt is principled and controllable, and is related to the choice of the number of components to 'discard' in the conventional approach. We demonstrate its efficacy in Section 4 and illustrate how it can offer similar performance to a full non-sparse kernel PCA implementation while offering much reduced computational overheads.

2 Kernel PCA

Although PCA is conventionally defined (as above) in terms of the covariance, or outer-product, matrix, it is well established that the eigenvectors of XᵀX can be obtained from those of the inner-product matrix XXᵀ. If V is an orthogonal matrix of column eigenvectors of XXᵀ with corresponding eigenvalues in the diagonal matrix Λ, then by definition (XXᵀ)V = VΛ.
Pre-multiplying by Xᵀ gives:

(XᵀX)(XᵀV) = (XᵀV)Λ.    (1)

From inspection, it can be seen that the eigenvectors of XᵀX are XᵀV, with eigenvalues Λ. Note, however, that the column vectors XᵀV are not normalised, since for column i, uᵢᵀXXᵀuᵢ = λᵢuᵢᵀuᵢ = λᵢ, so the correctly normalised eigenvectors of XᵀX, and thus the principal axes of the data, are given by V_pca = XᵀVΛ^{−1/2}. This derivation is useful if d > N, when the dimensionality of x is greater than the number of examples, but it is also fundamental for implementing kernel PCA. In kernel PCA, the data vectors xₙ are implicitly mapped into a feature space by a set of functions {φ}: xₙ → φ(xₙ). Although the vectors φₙ = φ(xₙ) in the feature space are generally not known explicitly, their inner products are defined by the kernel: φₘᵀφₙ = k(xₘ, xₙ). Defining Φ as the (notional) design matrix in feature space, and exploiting the above inner-product PCA formulation, allows the eigenvectors of the covariance matrix in feature space¹, S_φ = N⁻¹ Σₙ φₙφₙᵀ, to be specified as:

V_kpca = ΦᵀVΛ^{−1/2},    (2)

where V, Λ are the eigenvectors/values of the kernel matrix K, with (K)ₘₙ = k(xₘ, xₙ). Although we can't compute V_kpca since we don't know Φ explicitly, we can compute projections of arbitrary test vectors x* → φ* onto V_kpca in feature space:

φ*ᵀV_kpca = φ*ᵀΦᵀVΛ^{−1/2} = k*ᵀVΛ^{−1/2},    (3)

where k* is the N-vector of inner products of x* with the data in kernel space: (k*)ₙ = k(x*, xₙ). We can thus compute, and plot, these projections; Figure 1 gives an example for some synthetic 3-cluster data in two dimensions.

¹Here, and in the rest of the paper, we do not 'centre' the data in feature space, although this may be achieved if desired (see [4]). In fact, we would argue that when using a Gaussian kernel, it does not necessarily make sense to do so.
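Equations (1)-(3) translate directly into code. As a sanity check (my own illustrative test, not an experiment from the paper), with a linear kernel k(x, x′) = xᵀx′ and no centring, matching the paper's convention, kernel PCA projections must coincide with ordinary PCA projections up to sign:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 3))

def kpca_projections(K, K_test, n_components):
    """Project points onto kernel principal axes via eq. (3):
    projections = k_* V Lambda^{-1/2}, rows of K_test holding the k_* vectors."""
    evals, V = np.linalg.eigh(K)
    idx = np.argsort(evals)[::-1][:n_components]
    return K_test @ V[:, idx] / np.sqrt(evals[idx])

K = X @ X.T                              # linear kernel, uncentred
proj_kpca = kpca_projections(K, K, 2)

evals, U = np.linalg.eigh(X.T @ X)       # ordinary (uncentred) PCA
idx = np.argsort(evals)[::-1][:2]
proj_pca = X @ U[:, idx]

agree = np.allclose(np.abs(proj_kpca), np.abs(proj_pca), atol=1e-6)
print(agree)                             # -> True
```

Substituting any Mercer kernel for the linear one (e.g. a Gaussian) gives the nonlinear projections of Figure 1 without ever forming the feature vectors.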
Figure 1: Contour plots of the first nine principal component projections evaluated over a region of input space for data from 3 Gaussian clusters (standard deviation 0.1; axis scales are shown in Figure 3), each comprising 30 vectors. A Gaussian kernel, exp(−‖x − x′‖²/r²), with width r = 0.25, was used. The corresponding eigenvalues are given above each projection. Note how the first three components 'pick out' the individual clusters [4].

3 Probabilistic Feature-Space PCA

Our approach to sparsifying kernel PCA is to a priori approximate the feature-space sample covariance matrix S_φ with a sum of weighted outer products of a reduced number of feature vectors. (The basis of this technique is thus general and its application not necessarily limited to kernel PCA.) This is achieved probabilistically, by maximising the likelihood of the feature vectors under a Gaussian density model φ ~ N(0, C), where we specify the covariance C by:

C = σ²I + Σ_{i=1}^N wᵢφᵢφᵢᵀ = σ²I + ΦᵀWΦ,    (4)

where w₁ ... w_N are the adjustable weights, W is a matrix with those weights on the diagonal, and σ² is an isotropic 'noise' component common to all dimensions of feature space. Of course, a naive maximum of the likelihood under this model is obtained with σ² = 0 and all wᵢ = 1/N. However, if we fix σ² and optimise only the weighting factors wᵢ, we will find that the maximum-likelihood estimates of many wᵢ are zero, thus realising a sparse representation of the covariance matrix. This probabilistic approach is motivated by the fact that if we relax the form of the model, by defining it in terms of outer products of N arbitrary vectors vᵢ (rather than the fixed training vectors), i.e. C = σ²I + Σ_{i=1}^N wᵢvᵢvᵢᵀ, then we realise a form of 'probabilistic PCA' [6]. That is, if {uᵢ, λᵢ} are the set of eigenvectors/values of S_φ, then the likelihood under this model is maximised by vᵢ = uᵢ and wᵢ = (λᵢ − σ²)^{1/2}, for those i for which λᵢ > σ².
For λᵢ ≤ σ², the most likely weights wᵢ are zero.

3.1 Computations in feature space

We wish to maximise the likelihood under a Gaussian model with covariance given by (4). Ignoring terms independent of the weighting parameters, its log is given by:

𝓛 = −½ [N log|C| + Σ_{n=1}^N φₙᵀC⁻¹φₙ].    (5)

Computing (5) requires the quantities |C| and φᵀC⁻¹φ, which for infinite-dimensional feature spaces might appear problematic. However, by judicious re-writing of the terms of interest, we are able both to compute the log-likelihood (to within a constant) and to optimise it with respect to the weights. First, we can write:

log|σ²I + ΦᵀWΦ| = D log σ² + log|W⁻¹ + σ⁻²ΦΦᵀ| + log|W|.    (6)

The potential problem of infinite dimensionality, D, of the feature space now enters only in the first term, which is constant if σ² is fixed and so does not affect maximisation. The term in |W| is straightforward, and the remaining term can be expressed in terms of the inner-product (kernel) matrix:

W⁻¹ + σ⁻²ΦΦᵀ = W⁻¹ + σ⁻²K,    (7)

where K is the kernel matrix such that (K)ₘₙ = k(xₘ, xₙ). For the data-dependent term in the likelihood, we can use the Woodbury matrix inversion identity to compute the quantities φₙᵀC⁻¹φₙ:

φₙᵀ(σ²I + ΦᵀWΦ)⁻¹φₙ = φₙᵀ[σ⁻²I − σ⁻⁴Φᵀ(W⁻¹ + σ⁻²ΦΦᵀ)⁻¹Φ]φₙ
= σ⁻²k(xₙ, xₙ) − σ⁻⁴kₙᵀ(W⁻¹ + σ⁻²K)⁻¹kₙ,    (8)

with kₙ = [k(xₙ, x₁), k(xₙ, x₂), ..., k(xₙ, x_N)]ᵀ.

3.2 Optimising the weights

To maximise the log-likelihood with respect to the wᵢ, differentiating (5) gives us:

∂𝓛/∂wᵢ = ½ (φᵢᵀC⁻¹ΦᵀΦC⁻¹φᵢ − NφᵢᵀC⁻¹φᵢ)    (9)
= 1/(2wᵢ²) (Σ_{n=1}^N μₙᵢ² + NΣᵢᵢ − Nwᵢ),    (10)

where Σ and μₙ are defined respectively by

Σ = (W⁻¹ + σ⁻²K)⁻¹,    (11)
μₙ = σ⁻²Σkₙ.    (12)
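Since identities (6) and (8) hold for any finite feature space, they can be checked directly with explicit feature vectors, where both sides are computable; the dimensions and random values below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
N, D, s2 = 6, 4, 0.5                  # N examples, explicit D-dim feature space
Phi = rng.normal(size=(N, D))         # rows are the feature vectors phi_n
w = rng.uniform(0.5, 2.0, size=N)
W = np.diag(w)
K = Phi @ Phi.T                       # kernel (inner-product) matrix

C = s2 * np.eye(D) + Phi.T @ W @ Phi  # covariance model, eq. (4)

# Determinant identity, eq. (6).
lhs = np.linalg.slogdet(C)[1]
rhs = (D * np.log(s2)
       + np.linalg.slogdet(np.linalg.inv(W) + K / s2)[1]
       + np.linalg.slogdet(W)[1])

# Woodbury form of phi_n^T C^{-1} phi_n, eq. (8), for one example n.
n = 0
kn = K[n]
direct = Phi[n] @ np.linalg.solve(C, Phi[n])
woodbury = K[n, n] / s2 - kn @ np.linalg.solve(np.linalg.inv(W) + K / s2, kn) / s2**2
print(np.isclose(lhs, rhs), np.isclose(direct, woodbury))   # -> True True
```

Everything on the right-hand sides involves only the N×N kernel matrix, which is the point of the rewriting: the D-dimensional (possibly infinite-dimensional) quantities never need to be formed.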
Setting (10) to zero gives re-estimation equations for the weights:

wᵢ^new = N⁻¹ Σ_{n=1}^N μₙᵢ² + Σᵢᵢ.    (13)

The re-estimates (13) are equivalent to expectation-maximisation updates, which would be obtained by adopting a factor-analytic perspective [3] and introducing a set of 'hidden' Gaussian explanatory variables whose conditional means and common covariance, given the feature vectors and the current values of the weights, are given by μₙ and Σ respectively (hence the notation). As such, (13) is guaranteed to increase 𝓛 unless it is already at a maximum. However, an alternative re-arrangement of (10), motivated by [5], leads to a re-estimation update which typically converges significantly more quickly:

wᵢ^new = (Σ_{n=1}^N μₙᵢ²) / (N(1 − Σᵢᵢ/wᵢ)).    (14)

Note that these wᵢ updates (14) are defined in terms of the computable (i.e. not dependent on explicit feature-space vectors) quantities Σ and μₙ.

3.3 Principal component analysis

The principal axes. Sparse kernel PCA proceeds by finding the principal axes of the covariance model C = σ²I + ΦᵀWΦ. These are identical to those of ΦᵀWΦ, but with eigenvalues all σ² larger. Letting Φ̃ = W^{1/2}Φ, then, we need the eigenvectors of Φ̃ᵀΦ̃. Using the technique of Section 2, if the eigenvectors of Φ̃Φ̃ᵀ = W^{1/2}ΦΦᵀW^{1/2} = W^{1/2}KW^{1/2} are Ṽ, with corresponding eigenvalues Λ̃, then the eigenvectors/values {U, Λ} of C that we desire are given by:

U = Φ̃ᵀṼΛ̃^{−1/2} = ΦᵀW^{1/2}ṼΛ̃^{−1/2},    (15)
Λ = Λ̃ + σ²I.    (16)

Computing projections. Again, we can't compute the eigenvectors U explicitly in (15), but we can compute the projections of a general feature vector φ* onto the principal axes:

φ*ᵀU = k̃*ᵀP,    (17)

where k̃* is the sparse vector containing the non-zero-weighted elements of k*, defined earlier. The corresponding rows of W^{1/2}ṼΛ̃^{−1/2} are combined into the single projecting matrix P, each column of which gives the coefficients of the kernel functions for the evaluation of each principal component.
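Iterating update (14) on toy clustered data drives most of the weights towards zero, leaving a small set of representing kernels. This is a rough sketch only: the cluster layout, kernel width, σ² and the numerical floors below are my own illustrative choices, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(6)

# Three tight clusters of 20 points each, Gaussian kernel of width 0.25.
X = np.vstack([rng.normal(m, 0.1, size=(20, 2))
               for m in ([0.0, 0.0], [2.0, 0.0], [1.0, 2.0])])
N = len(X)
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-sq / 0.25**2)

s2 = 0.1                         # sigma^2, fixed in advance
w = np.full(N, 1.0 / N)          # initial weights
for _ in range(500):
    Sigma = np.linalg.inv(np.diag(1.0 / w) + K / s2)         # eq. (11)
    Mu = Sigma @ K / s2                                      # column n is mu_n, eq. (12)
    denom = N * np.maximum(1.0 - np.diag(Sigma) / w, 1e-12)  # guard against 0/0
    w = np.maximum((Mu**2).sum(axis=1) / denom, 1e-12)       # eq. (14), floored

n_kept = int(np.sum(w > 1e-4))
print(n_kept, N)   # typically far fewer representing kernels than points
```

The floors on w and on the denominator are purely numerical safeguards for weights that are being pruned; the surviving non-zero weights identify the representing vectors used in (17).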
3.4 Computing Reconstruction Error

The squared reconstruction error in kernel space for a test vector φ* is given by:    (18)

with K̃ the kernel matrix evaluated only for the representing vectors.

4 Examples

To obtain sparse kernel PCA projections, we first specify the noise variance σ², which is the amount of variance per co-ordinate that we are prepared to allow to be explained by the (structure-free) isotropic noise rather than by the principal axes (this choice is a surrogate for deciding how many principal axes to retain in conventional kernel PCA). Unfortunately, the measure is in feature space, which makes it rather more difficult to interpret than if it were in data space (equally so, of course, for interpretation of the eigenvalue spectrum in the non-sparse case). We apply sparse kernel PCA to the Gaussian data of Figure 1 earlier, with the same kernel function and specifying σ = 0.25, deliberately chosen to give nine representing kernels so as to facilitate comparison. Figure 2 shows the nine principal component projections based on the approximated covariance matrix, and gives qualitatively equivalent results to Figure 1 while utilising only 10% of the kernels. Figure 3 shows the data and highlights those examples corresponding to the nine kernels with non-zero weights. Note, although we do not consider this aspect further here, that these representing vectors are themselves highly informative of the structure of the data (i.e. with a Gaussian kernel, for example, they tend to represent distinguishable clusters). Also in Figure 3, contours of reconstruction error, based only on those nine kernels, are plotted and indicate that the nonlinear model has more faithfully captured the structure of the data than would standard linear PCA.

Figure 2: The nine principal component projections obtained by sparse kernel PCA.
To further illustrate the fidelity of the sparse approximation, we analyse the 200 training examples of the 7-dimensional 'Pima Indians diabetes' database [1]. Figure 4 (left) shows a plot of reconstruction error against the number of principal components utilised by both conventional kernel PCA and its sparse counterpart, with σ² chosen so as to utilise 20% of the kernels (40). An expected small reduction in accuracy is evident in the sparse case. Figure 4 (right) shows the error on the associated test set when using a linear support vector machine to classify the data based on those numbers of principal components. Here the sparse projections actually perform marginally better on average, a consequence of both randomness and, we note with interest, presumably some inherent complexity control implied by the use of a sparse approximation.

Figure 3: The data with the nine representing kernels circled and contours of reconstruction error (computed in feature space although displayed as a function of x) overlaid.

Figure 4: RMS reconstruction error (left) and test set misclassifications (right) for numbers of retained principal components ranging from 1-25. For the standard case, this was based on all 200 training examples; for the sparse form, a subset of 40. A Gaussian kernel of width 10 was utilised, which gives near-optimal results if used in an SVM classification.

References

[1] B. D. Ripley. Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge, 1996.
[2] S. Romdhani, S. Gong, and A. Psarrou. A multi-view nonlinear active shape model using kernel PCA. In Proceedings of the 1999 British Machine Vision Conference, pages 483-492, 1999.
[3] D. B. Rubin and D. T. Thayer. EM algorithms for ML factor analysis. Psychometrika, 47(1):69-76, 1982.
[4] B. Schölkopf, A. Smola, and K.-R. Müller.
Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998. Technical Report No. 44, 1996, Max Planck Institut für biologische Kybernetik, Tübingen.
[5] M. E. Tipping. The Relevance Vector Machine. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 652-658. Cambridge, Mass: MIT Press, 2000.
[6] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 61(3):611-622, 1999.
Sparsity of data representation of optimal kernel machine and leave-one-out estimator

A. Kowalczyk
Chief Technology Office, Telstra
770 Blackburn Road, Clayton, Vic. 3168, Australia
(adam.kowalczyk@team.telstra.com)

Abstract

Vapnik's result that the expectation of the generalisation error of the optimal hyperplane is bounded by the expectation of the ratio of the number of support vectors to the number of training examples is extended to a broad class of kernel machines. The class includes Support Vector Machines for soft margin classification and regression, and Regularization Networks with a variety of kernels and cost functions. We show that key inequalities in Vapnik's result become equalities once "the classification error" is replaced by "the margin error", with the latter defined as an instance with positive cost. In particular we show that expectations of the true margin error and the empirical margin error are equal, and that sparse solutions for kernel machines are possible only if the cost function is "partially" insensitive.

1 Introduction

Minimization of regularized risk is a backbone of several recent advances in machine learning, including Support Vector Machines (SVM) [13], Regularization Networks (RN) [5] and Gaussian Processes [15]. Such a machine is typically implemented as a weighted sum of a kernel function evaluated for pairs composed of a data vector in question and a number of selected training vectors, so-called support vectors. For practical machines it is desirable to have as few support vectors as possible. It has been observed empirically that SVM solutions often have very few support vectors, i.e. they are sparse, while RN machines are not. The paper shows that this behaviour is determined by the properties of the cost function used (its partial insensitivity, to be precise).
Another motivation for interest in the sparsity of solutions comes from the celebrated result of Vapnik [13], which links the number of support vectors to the generalization error of the SVM via a bound on the leave-one-out estimator [9]. This result was originally shown for the special case of classification with the hard margin cost function (optimal hyperplane). The papers by Opper and Winther [10], Jaakkola and Haussler [6], and Joachims [7] extend Vapnik's result in the direction of bounds for the classification error of SVMs. The first of those papers deals with the hard margin case, while the other two derive tighter bounds on the classification error of soft margin SVMs with ε-insensitive linear cost.

In this paper we extend Vapnik's result in another direction. Firstly, we show that it holds for a wide range of kernel machines optimized for a variety of cost functions, for both classification and regression tasks. Secondly, we find that Vapnik's key inequalities become equalities once "the misclassification error" is replaced by "the margin error" (defined as the rate of data instances incurring positive costs). In particular, we find that for margin errors the following three expectations: (i) of the empirical risk, (ii) of the true risk and (iii) of the leave-one-out risk estimator are equal to each other. Moreover, we show that they are equal to the expectation of the ratio of the number of support vectors to the number of training examples. The main results are given in Section 2. A brief discussion of the results is given in Section 3.

2 Main results

Given an l-sample {(x_1, y_1), ..., (x_l, y_l)} of patterns x_i ∈ X ⊂ R^n and target values y_i ∈ Y ⊂ R, the learning algorithms used by SVMs [13], RNs [5] or Gaussian Processes [15] minimise the regularized risk functional of the form:

    min_{(f,b) ∈ H×R} R_reg[f, b] = Σ_{i=1}^l c(x_i, y_i, ξ_i[f, b]) + (λ/2) ||f||_H².    (1)

Here H denotes a reproducing kernel Hilbert space (RKHS) [1], ||·||_H
is the corresponding norm, λ > 0 is a regularization constant, c : X × Y × R → R_+ is a non-negative cost function penalising the deviation ξ_i[f, b] := y_i − ŷ_i of the estimator ŷ_i := f(x_i) + βb from the target y_i at location x_i, b ∈ R is a constant (bias) and β ∈ {0, 1} is another constant (β = 0 is used to switch the bias off). The important Representer Theorem [8, 4] states that the minimizer of (1) has the expansion:

    f(x) = Σ_{i=1}^l α_i k(x_i, x),    (2)

where k : X × X → R is the kernel corresponding to the RKHS H. In the following section we shall show that under general assumptions this expansion is unique. If α_i ≠ 0, then x_i is called a support vector of f(·).

2.1 Unique Representer Theorem

We recall that a function is called a real analytic function on a domain in R^q if for every point of this domain the Taylor series for the function converges to this function in some neighborhood of that point.¹ A proof of the following crucial Lemma is omitted due to lack of space.

Lemma 2.1. If φ : X → R is an analytic function on an open connected subset X ⊂ R^n, then the subset φ^{-1}(0) ⊂ X is either equal to X or has Lebesgue measure 0.

Analyticity is essential for the above result, and the result does not hold, in general, even for infinitely differentiable functions. Indeed, for every closed subset V ⊂ R^n there exists an infinitely differentiable (C^∞) function φ on R^n such that φ^{-1}(0) = V, and there exist closed subsets with positive Lebesgue measure and empty interior. Hence the Lemma, and consequently the subsequent results, do not hold for the broader class of all C^∞ functions.

¹ Examples of analytic functions are polynomials. The ordinary functions such as sin(x), cos(x) and exp(x) are examples of non-polynomial analytic functions. The function ψ(x) := exp(−1/x²) for x > 0, and 0 otherwise, is an example of an infinitely differentiable function on the real line that is not analytic (locally it is not equal to its Taylor series expansion at zero).
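As a concrete special case of the regularized risk (1) and its representer expansion (2), squared loss c(x, y, ξ) = ξ² with the bias switched off (β = 0) gives kernel ridge regression, where the expansion coefficients solve a linear system. The sketch below is illustrative only: the toy data, kernel and regularization constant are assumptions, and plain Gaussian elimination stands in for a proper solver.

```python
import math

# Toy 1-D regression data (assumed for illustration).
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [0.0, 0.5, 0.8, 0.9, 1.0]
n = len(xs)
lam = 0.1  # the regularization constant lambda in (1); arbitrary choice

def k(a, b):
    return math.exp(-(a - b) ** 2 / 0.2)  # Gaussian kernel

# With squared loss and beta = 0, setting grad R_reg = 0 reduces to the
# linear system (K + (lam/2) I) alpha = y, so f(x) = sum_i alpha_i k(x_i, x).
A = [[k(xs[i], xs[j]) + (lam / 2 if i == j else 0.0) for j in range(n)]
     for i in range(n)]
b = ys[:]

# Gaussian elimination with partial pivoting (toy solver).
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, n):
        m = A[r][col] / A[col][col]
        for c in range(col, n):
            A[r][c] -= m * A[col][c]
        b[r] -= m * b[col]
alpha = [0.0] * n
for r in range(n - 1, -1, -1):
    alpha[r] = (b[r] - sum(A[r][c] * alpha[c] for c in range(r + 1, n))) / A[r][r]

def f(x):
    # the representer-theorem expansion (2)
    return sum(alpha[i] * k(xs[i], x) for i in range(n))
```

Note that with this sensitive quadratic cost every α_i is generically nonzero, which foreshadows the sparsity discussion below.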
Standing assumptions. The following is assumed.

1. The set X ⊂ R^n is open and connected and either Y = {±1} (the case of classification) or Y ⊂ R is an open segment (the case of regression).
2. The kernel k : X × X → R is a real analytic function on its domain.
3. The cost function ξ ↦ c(x, y, ξ) is convex, differentiable on R and c(x, y, 0) = 0 for every (x, y) ∈ X × Y. It can be shown that

    c(x, y, ξ) > 0  ⟺  (∂c/∂ξ)(x, y, ξ) ≠ 0.    (3)

4. l is a fixed integer, 1 < l ≤ dim(H), and the training sample (x_1, y_1), ..., (x_l, y_l) is iid drawn from a continuous probability density p(x, y) on X × Y.
5. The phrase "with probability 1" will mean with probability 1 with respect to the selection of the training sample.

Note that the standard polynomial kernel k(x, x') = (1 + x·x')^d, x, x' ∈ R^n, satisfies the above assumptions with dim(H) = (n+d choose d). Similarly, the Gaussian kernel k(x, x') = exp(−||x − x'||²/σ) satisfies them with dim(H) = ∞. Typical cost functions such as the super-linear loss functions c_p(x, y, ξ) = (yξ)_+^p := (max(0, yξ))^p used for SVM classification, or c_{pε}(x, y, ξ) = ((|ξ| − ε)_+)^p used for SVM regression, or the super-linear loss c_p(x, y, ξ) = |ξ|^p for p > 1 for RN regression, satisfy the above assumptions². Similarly, variations of the Huber robust loss [11, 14] satisfy those assumptions.

The following result strengthens the Representer Theorem [8, 4].

Theorem 2.2. If l ≤ dim H, then both the minimizer of the regularized risk (1) and its expansion (2) are unique with probability 1.

Proof outline. Convexity of the functional (f, b) ↦ R_reg[f, b] and its strict convexity with respect to f ∈ H imply the uniqueness of the f ∈ H minimizing the regularized risk (1); cf. [3]. From the assumption that l ≤ dim H we derive the existence of x_1, ..., x_l ∈ X such that the functions k(x_i, ·), i = 1, ..., l, are linearly independent. Equivalently, the following Gram determinant is ≠ 0:

    φ(x_1, ..., x_l) := det[⟨k(x_i, ·), k(x_j, ·)⟩_H]_{1≤i,j≤l} = det[k(x_i, x_j)]_{1≤i,j≤l} ≠ 0.

Now Lemma 2.1 implies that φ(x_1, ...
, x_l) ≠ 0 with probability 1, since φ : X^l → R is an analytic function. Hence the functions k(x_i, ·) are linearly independent and the expansion (2) is unique with probability 1. Q.E.D.

2.2 Leave-one-out estimator

In this section the minimizer of (1) for the whole data sequence of l training instances, and some other objects related to it, will be additionally marked with the superscript '(l)'. The superscript '(l\i)' will be used analogously to mark objects corresponding to the minimizer of (1) for the reduced training sequence, with the ith instance removed.

Lemma 2.3. With probability 1, for every i ∈ {1, ..., l}:

    α^(l)_i ≠ 0  ⟺  c(x_i, y_i, ξ_i[f^(l), b^(l)]) > 0,    (4)
    α^(l)_i ≠ 0  ⟺  c(x_i, y_i, ξ_i[f^(l\i), b^(l\i)]) > 0.    (5)

² Note that in general, if a function φ : R → R is convex, differentiable and such that dφ/dξ(0) = 0, then the cost function c(x, y, ξ) := φ((ξ)_+) is convex and differentiable.

Proof outline. With probability 1, the functions k(x_j, ·), j = 1, ..., l, are linearly independent (cf. the proof of Theorem 2.2) and there exists a feature map Φ : X → R^l such that the vectors z_j := Φ(x_j), j = 1, ..., l, are linearly independent, k(x_j, x) = z_j · Φ(x) and f^(l)(x) = z^(l) · Φ(x) + βb^(l) for every x ∈ X, where z^(l) := Σ_{j=1}^l α^(l)_j z_j. The pair (z^(l), b^(l)) minimizes the function

    R̃_reg(z, b) := Σ_{j=1}^l c(x_j, y_j, ξ_j(z, b)) + (λ/2) ||z||²,    (6)

where ξ_j(z, b) := y_j − z·z_j − βb. This function is differentiable due to the standing assumptions on the cost c. Hence, necessarily grad R̃_reg = 0 at the minimum (z^(l), b^(l)), which, due to the linear independence of the vectors z_j, gives

    α^(l)_j = −(1/λ) (∂c/∂ξ)(x_j, y_j, ξ_j(z^(l), b^(l)))    (7)

for every j = 1, ..., l. This equality combined with equivalence (3) proves (4). Now we proceed to the proof of (5). Note that the pair (z^(l\i), b^(l\i)), where z^(l\i) := Σ_{j≠i} α^(l\i)_j z_j, corresponds in the feature space to the minimizer (f^(l\i), b^(l\i)) of the reduced regularized risk R̃^(l\i)_reg(z, b), defined as in (6) but with the ith term omitted.

Sufficiency in (5).
From (4) and the characterization (7) of the critical point it follows immediately that if α^(l)_i = 0, then the minimizers for the full and reduced data sets are identical.

Necessity in (5). A supposition of α^(l\i)_i ≠ 0 and c(x_i, y_i, ξ_i[f^(l\i), b^(l\i)]) = 0 leads to a contradiction. Indeed, from (4), c(x_i, y_i, ξ_i(z^(l), b^(l))) > 0, hence:

    R̃^(l)_reg(z^(l\i), b^(l\i)) = R̃^(l\i)_reg(z^(l\i), b^(l\i))
        ≤ R̃^(l\i)_reg(z^(l), b^(l)) = R̃^(l)_reg(z^(l), b^(l)) − c(x_i, y_i, ξ_i(z^(l), b^(l)))
        < R̃^(l)_reg(z^(l), b^(l)) = min_{(z,b) ∈ R^l × R} R̃^(l)_reg(z, b).

This contradiction completes the proof. Q.E.D.

We say that x_i is a sensitive support vector if α^(l)_i ≠ 0 and f^(l) ≠ f^(l\i), i.e., if its removal from the training set changes the solution.

Corollary 2.4. Every support vector is sensitive with probability 1.

Proof. If α_i ≠ 0, then the vector z^(l) ∉ Lin_R(z_1, ..., z_{i−1}, z_{i+1}, ..., z_l), since z^(l) has a non-trivial component α_i z_i in the direction of the ith feature vector z_i, while z^(l\i) ∈ Lin_R(z_1, ..., z_{i−1}, z_{i+1}, ..., z_l). Thus z^(l) and z^(l\i) have different directions in Lin_R(z_1, ..., z_l) ⊂ Z and there exists j' ∈ {1, ..., l} such that f^(l)(x_{j'}) ≠ f^(l\i)(x_{j'}). Q.E.D.

We define the empirical risk and the expected (true) risk of margin error as

    R_emp[f, b] := (1/l) Σ_{i=1}^l 1{c(x_i, y_i, ξ_i[f, b]) > 0} = #{i : c(x_i, y_i, ξ_i[f, b]) > 0} / l,
    R_exp[f, b] := Prob[c(x, y, y − f(x) − βb) > 0],

where (f, b) ∈ H × R, 1{·} denotes the indicator function and # denotes the cardinality (number of elements) of a set. From the above Lemma we obtain immediately the following result:

Corollary 2.5. With probability 1:

    #{i : c(x_i, y_i, y_i − f^(l\i)(x_i) − βb^(l\i)) > 0} / l = #{i : α^(l)_i ≠ 0} / l = R_emp[f^(l), b^(l)].

There exist counter-examples showing that the phrase "with probability 1" above cannot be omitted. The expression on the L.H.S. above is the leave-one-out estimator of the risk of margin error [14] for the minimizer of the regularized risk (1).
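Lemma 2.3 and Corollary 2.5 tie nonzero coefficients to margin errors, and the sensitivity of the cost decides whether any coefficient can vanish. Both effects can be checked numerically by iterating the stationarity condition from the proof, here written α_i = (1/λ)(∂c/∂ξ)(ξ_i), a sign convention that may differ from (7). The toy data, kernel, constants and the β = 0 setting below are assumptions of this sketch, not the paper's experiments.

```python
import math

# Toy regression sample; beta = 0 (no bias).
xs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
ys = [math.sin(2 * x) for x in xs]
n, lam, eps = len(xs), 20.0, 0.05

def k(a, b):
    return math.exp(-(a - b) ** 2 / 0.5)  # Gaussian kernel

K = [[k(a, b) for b in xs] for a in xs]

def solve(dc):
    """Iterate alpha_i <- dc(xi_i)/lam; a contraction for this lam and K."""
    alpha = [0.0] * n
    for _ in range(4000):
        xi = [ys[i] - sum(alpha[j] * K[i][j] for j in range(n)) for i in range(n)]
        alpha = [dc(xi[i]) / lam for i in range(n)]
    xi = [ys[i] - sum(alpha[j] * K[i][j] for j in range(n)) for i in range(n)]
    return alpha, xi

# Partially insensitive cost c(xi) = max(0, |xi| - eps)^2 (eps-insensitive).
def dc_insensitive(x):
    return 2.0 * math.copysign(max(0.0, abs(x) - eps), x)

# Sensitive cost c(xi) = xi^2 (typical for RNs).
def dc_quadratic(x):
    return 2.0 * x

a_sp, xi_sp = solve(dc_insensitive)
support = {i for i in range(n) if a_sp[i] != 0.0}
margin_err = {i for i in range(n) if abs(xi_sp[i]) > eps}  # positive cost

a_rn, _ = solve(dc_quadratic)
dense_count = sum(1 for a in a_rn if a != 0.0)  # no sparsity expected
```

At the fixed point, the set of nonzero coefficients coincides with the set of margin errors for the ε-insensitive cost, while the sensitive quadratic cost leaves every coefficient nonzero.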
The above corollary shows that this estimator is uniquely determined by the number of support vectors as well as by the number of training margin errors. Now from the Lunts-Brailovsky Theorem [14, Theorem 10.8] applied to the risk Q(x, y; f, b) := 1{c(x, y, y − f(x) − βb) > 0} the following result is obtained.

Theorem 2.6.

    E[R_exp(f^(l−1), b^(l−1))] = E[R_emp(f^(l), b^(l))] = E[#{i : α^(l)_i ≠ 0} / l],    (8)

where the first expectation is over the selection of a training (l−1)-sample and the remaining two are with respect to the selection of a training l-sample.

A cost function is called partially insensitive if there exist (x, y) ∈ X × Y and ξ_1 ≠ ξ_2 such that c(x, y, ξ_1) = c(x, y, ξ_2) = 0. Otherwise, the cost c is called sensitive. Typical SVM cost functions are partially insensitive while typical RN cost functions are sensitive. The following result can be derived from Theorem 2.6 and Lemma 2.3.

Corollary 2.7. If the number of support vectors is < l with a probability > 0, then the cost function has to be partially insensitive.

Typical cost functions penalize for an allocation of a wrong sign, i.e.

    ∀(x, y, ŷ) ∈ X × Y × R:  yŷ < 0  ⟹  c(x, y, y − ŷ) > 0.    (9)

Let us define the risk of misclassification of the kernel machine ŷ(x) = f(x) + βb for (f, b) ∈ H × R as R_clas[f, b] := Prob[y ŷ(x) < 0]. Assuming (9), we have R_clas[f, b] ≤ R_exp[f, b]. Combining this observation with (8) we obtain an extension of Vapnik's result [14, Theorem 10.5]:

Corollary 2.8. If condition (9) holds then

    E[R_clas(f^(l−1), b^(l−1))] ≤ E[#{i : α^(l)_i ≠ 0}] / l = E[R_emp(f^(l), b^(l))].    (10)

Note that the original Vapnik's result consists in an inequality analogous to the inequality in the above condition for the specific case of classification by optimal hyperplanes (hard margin support vector machines).

3 Brief Discussion of Results

Essentiality of assumptions. For every formal result in this paper and any of the standing assumptions there exists an example of a minimizer of (1) which violates the conclusions of the result.
In this sense all those assumptions are essential.

Linear combinations of admissible cost functions. Any weighted sum of cost functions satisfying our Standing Assumption 3 will satisfy this assumption as well, hence our formalism applies to it. An illustrative example is the following cost function for classification: c(x, y, ξ) = Σ_j C_j (max(0, y(ξ − ε_j)))^{p_j}, where C_j > 0, ε_j ≥ 0 and p_j > 1 are constants and y ∈ Y = {±1}.

Non-differentiable cost functions. Our formal results can be extended with minor modifications to the case of typical, non-differentiable linear cost functions such as c = (yξ)_+ = max(0, yξ) for SVM classification, c = (|ξ| − ε)_+ for SVM regression, and to classification with hard margin SVMs (optimal hyperplanes). Details are beyond the scope of this paper. Note that the above linear cost functions can be uniformly approximated by differentiable cost functions, e.g. by the Huber cost function [11, 14], to which our formalism applies. This implies that our formalism "applies approximately" to the linear loss case and some partial extension of it can be obtained directly using limit arguments. However, using a direct algebraic approach based on an evaluation of Kuhn-Tucker conditions one can come to stronger conclusions. Details will be presented elsewhere.

Theory of generalization. The equality of the expectations of the empirical and expected risks provided by Theorem 2.6 implies that minimizers of the regularized risk (1) are on average consistent. We should emphasize that this result holds for small training samples, of size l smaller than the VC dimension of the function class, which is dim(H) + 1 in our case. This should be contrasted with uniform convergence bounds [2, 13, 14], which are vacuous unless l ≫ VC dimension.

Significance of approximate solutions for RNs. Corollary 2.7 shows that sparsity of solutions is practically not achievable for optimal RN solutions since they use sensitive cost functions.
This emphasizes the significance of research into approximately optimal solution algorithms in such a case, cf. [12].

Application to selection of the regularization constant. The bound provided by Corollary 2.8 and the equivalence given by Theorem 2.6 can be used as a justification of a heuristic that the optimal value of the regularization constant λ is the one which minimizes the number of margin errors (cf. [14]). This is especially appealing in the case of regression with ε-insensitive cost, where the margin error has the straightforward interpretation of a sample lying outside of the ε-tube.

Application to modelling of additive noise. Let us suppose that data is iid drawn from a distribution of the form y = f(x) + ε_noise, where ε_noise is random noise independent of x, with 0 mean. Theorem 2.6 implies the following heuristic for approximating the noise distribution in the regression model y = f(x) + ε_noise:

    Prob[|ε_noise| > ε] ≈ #{i : α^(l)_i ≠ 0} / l.

Here (f^(l), b^(l)) is a minimizer of the regularized risk (1) with an ε-insensitive cost function, i.e. such that c(x, y, ξ) > 0 iff |ξ| > ε.

Acknowledgement. The permission of the Chief Technology Officer, Telstra, to publish this paper is gratefully acknowledged.

References

[1] N. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68:337-404, 1950.
[2] P. Bartlett and J. Shawe-Taylor. Generalization performance of support vector machines and other pattern classifiers. In B. Schölkopf et al., eds., Advances in Kernel Methods, pages 43-54, MIT Press, 1998.
[3] C. Burges and D. J. Crisp. Uniqueness of the SVM solution. In S. Solla et al., eds., Adv. in Neural Info. Proc. Sys. 12, pages 144-152, MIT Press, 2000.
[4] D. Cox and F. O'Sullivan. Asymptotic analysis of penalized likelihood and related estimators. Ann. Statist., 18:1676-1695, 1990.
[5] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures.
Neural Computation, 7(2):219-269, 1995.
[6] T. Jaakkola and D. Haussler. Probabilistic kernel regression models. In Proc. Seventh Workshop on AI and Statistics, San Francisco, 1999. Morgan Kaufmann.
[7] T. Joachims. Estimating the generalization performance of an SVM efficiently. In Proc. of the International Conference on Machine Learning, 2000. Morgan Kaufmann.
[8] G. Kimeldorf and G. Wahba. A correspondence between Bayesian estimation of stochastic processes and smoothing by splines. Ann. Math. Statist., 41:495-502, 1970.
[9] A. Lunts and V. Brailovsky. Evaluation of attributes obtained in statistical decision rules. Engineering Cybernetics, 3:98-109, 1967.
[10] M. Opper and O. Winther. Gaussian process classification and SVM: Mean field results and leave-one-out estimator. In P. Bartlett et al., eds., Advances in Large Margin Classifiers, pages 301-316, MIT Press, 2000.
[11] A. Smola and B. Schölkopf. A tutorial on support vector regression. Statistics and Computing, 1998. In press.
[12] A. J. Smola and B. Schölkopf. Sparse greedy matrix approximation for machine learning. Typescript, March 2000.
[13] V. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, New York, 1995.
[14] V. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
[15] C. K. I. Williams. Prediction with Gaussian processes: From linear regression to linear prediction and beyond. In M. I. Jordan, editor, Learning and Inference in Graphical Models. Kluwer, 1998.
Rate-coded Restricted Boltzmann Machines for Face Recognition

Yee Whye Teh
Department of Computer Science
University of Toronto
Toronto M5S 2Z9 Canada
ywteh@cs.toronto.edu

Geoffrey E. Hinton
Gatsby Computational Neuroscience Unit
University College London
London WC1N 3AR, U.K.
hinton@gatsby.ucl.ac.uk

Abstract

We describe a neurally-inspired, unsupervised learning algorithm that builds a non-linear generative model for pairs of face images from the same individual. Individuals are then recognized by finding the highest relative probability pair among all pairs that consist of a test image and an image whose identity is known. Our method compares favorably with other methods in the literature. The generative model consists of a single layer of rate-coded, non-linear feature detectors and it has the property that, given a data vector, the true posterior probability distribution over the feature detector activities can be inferred rapidly without iteration or approximation. The weights of the feature detectors are learned by comparing the correlations of pixel intensities and feature activations in two phases: when the network is observing real data and when it is observing reconstructions of real data generated from the feature activations.

1 Introduction

Face recognition is difficult when the number of individuals is large and the test and training images of an individual differ in expression, pose, lighting or the date on which they were taken. In addition to being an important application, face recognition allows us to evaluate different kinds of algorithms for learning to recognize or compare objects, since it requires accurate representation of fine discriminative features in the presence of relatively large within-individual variations. This is made even more difficult when there are very few exemplars of each individual. We start by describing a new unsupervised learning algorithm for a restricted form of Boltzmann machine [1].
We then show how to generalize the generative model and the learning algorithm to deal with real-valued pixel intensities and rate-coded feature detectors. We then apply the model to face recognition and compare it to other methods.

2 Inference and learning in Restricted Boltzmann Machines

A Restricted Boltzmann machine (RBM) [2] is a Boltzmann machine with a layer of visible units and a single layer of hidden units, with no hidden-to-hidden or visible-to-visible connections.

Figure 1: Alternating Gibbs sampling and the terms in the learning rules of an RBM (data at time 0, reconstruction at time 1, fantasy at time ∞).

Because there is no explaining away [3], inference in an RBM is much easier than in a general Boltzmann machine or in a causal belief network with one hidden layer. There is no need to perform any iteration to determine the activities of the hidden units, as the hidden states, s_j, are conditionally independent given the visible states, s_i. The distribution of s_j is given by the standard logistic function:

    p(s_j = 1 | s) = 1 / (1 + exp(−Σ_i w_ij s_i))    (1)

Conversely, the hidden states of an RBM are marginally dependent, so it is easy for an RBM to learn population codes in which units may be highly correlated. It is hard to do this in causal belief networks with one hidden layer because the generative model of a causal belief net assumes marginal independence.

An RBM can be trained using the standard Boltzmann machine learning algorithm, which follows a noisy but unbiased estimate of the gradient of the log likelihood of the data. One way to implement this algorithm is to start the network with a data vector on the visible units and then to alternate between updating all of the hidden units in parallel and updating all of the visible units in parallel with Gibbs sampling. Figure 1 illustrates this process.
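The inference step (1) and the alternating Gibbs process can be sketched in a few lines for a toy binary RBM; the layer sizes, random weights and data vector are assumptions of this sketch, not the paper's model (which also has biases).

```python
import math
import random

random.seed(0)
NV, NH = 6, 4   # toy numbers of visible and hidden units (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# weights w[i][j] between visible unit i and hidden unit j (no biases here)
w = [[random.gauss(0.0, 0.5) for _ in range(NH)] for _ in range(NV)]

def sample_hidden(v):
    # p(s_j = 1 | s) = 1 / (1 + exp(-sum_i w_ij s_i))   -- Eq. (1);
    # all hidden units are updated in parallel, no within-layer iteration
    p = [sigmoid(sum(w[i][j] * v[i] for i in range(NV))) for j in range(NH)]
    return [1 if random.random() < pj else 0 for pj in p], p

def sample_visible(h):
    # symmetric weights give the same form in the other direction
    p = [sigmoid(sum(w[i][j] * h[j] for j in range(NH))) for i in range(NV)]
    return [1 if random.random() < pi else 0 for pi in p], p

# Alternating Gibbs sampling from a clamped data vector (time 0) toward
# a "fantasy": alternate full parallel updates of the two layers.
v = [1, 0, 1, 1, 0, 0]
h, _ = sample_hidden(v)
for _ in range(5):
    v, _ = sample_visible(h)
    h, _ = sample_hidden(v)
```

Collecting the pairwise statistics <s_i s_j> at time 0 and after one (or many) such alternations supplies the two terms of the learning rules described next.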
If this alternating Gibbs sampling is run to equilibrium, there is a very simple way to update the weights so as to minimize the Kullback-Leibler divergence, Q⁰||Q^∞, between the data distribution, Q⁰, and the equilibrium distribution of fantasies over the visible units, Q^∞, produced by the RBM [4]:

    Δw_ij ∝ <s_i s_j>_{Q⁰} − <s_i s_j>_{Q^∞},    (2)

where <s_i s_j>_{Q⁰} is the expected value of s_i s_j when data is clamped on the visible units and the hidden states are sampled from their conditional distribution given the data, and <s_i s_j>_{Q^∞} is the expected value of s_i s_j after prolonged Gibbs sampling. This learning rule does not work well because it can take a long time to approach equilibrium and the sampling noise in the estimate of <s_i s_j>_{Q^∞} can swamp the gradient.

Hinton [1] shows that it is far more effective to minimize the difference between Q⁰||Q^∞ and Q¹||Q^∞, where Q¹ is the distribution of the one-step reconstructions of the data that are produced by first picking binary hidden states from their conditional distribution given the data and then picking binary visible states from their conditional distribution given the hidden states. The exact gradient of this "contrastive divergence" is complicated because the distribution Q¹ depends on the weights, but this dependence can safely be ignored to yield a simple and effective learning rule for following the approximate gradient of the contrastive divergence:

    Δw_ij ∝ <s_i s_j>_{Q⁰} − <s_i s_j>_{Q¹}.    (3)

3 Applying RBMs to face recognition

For images of faces, binary pixels are far from ideal. A simple way to increase the representational power without changing the inference and learning procedures is to imagine that each visible unit, i, has 10 replicas which all have identical weights to the hidden units. So far as the hidden units are concerned, it makes no difference which particular replicas are turned on: it is only the number of active replicas that counts. So a pixel can now have 11 different intensities.
During reconstruction of the image from the hidden activities, all the replicas can share the computation of the probability, p_i, of turning on, and then we can select n replicas to be on with probability (10 choose n) p_i^n (1 − p_i)^{10−n}. We actually approximated this binomial distribution by just adding a little Gaussian noise to 10 p_i and rounding. The same trick can be used for the hidden units. Eq. 3 is unaffected except that s_i and s_j are now the numbers of active replicas.

The replica trick can be seen as a way of simulating a single neuron over a time interval in which it may produce multiple spikes that constitute a rate-code. For this reason we call the model "RBMrate". We assumed that the visible units can produce up to 10 spikes and the hidden units can produce up to 100 spikes. We also made two further approximations: we replaced s_i and s_j in Eq. 3 by their expected values, and we used the expected value of s_i when computing the probability of activation of the hidden units. However, we continued to use the stochastically chosen integer firing rates of the hidden units when computing the one-step reconstructions of the data, so the hidden activities cannot transmit an unbounded amount of information from the data to the reconstruction.

A simple way to use RBMrate for face recognition is to train a single model on the training set, and to identify a face by finding the gallery image that produces a hidden activity vector that is most similar to the one produced by the test face. This is how eigenfaces are used for recognition, but it does not work well because it does not take into account the fact that some variations across faces are important for recognition, while others are not. To correct this, we instead trained an RBMrate model on pairs of different images of the same individual, and then we used this model of pairs to decide which gallery image is best paired with the test image.
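The Gaussian-noise approximation to the binomial replica count described above can be sketched as follows; the noise standard deviation is our assumption, since the paper only says "a little Gaussian noise".

```python
import random

random.seed(1)

def rate_sample(p, n_replicas=10, noise_sd=0.5):
    # Approximate a Binomial(n_replicas, p) draw by adding a little
    # Gaussian noise to n_replicas * p and rounding, then clamping to
    # the valid range 0..n_replicas. noise_sd is an assumed value.
    x = round(n_replicas * p + random.gauss(0.0, noise_sd))
    return max(0, min(n_replicas, int(x)))

# e.g. a pixel with reconstruction probability 0.7 fires about 7 of 10 replicas
counts = [rate_sample(0.7) for _ in range(1000)]
```

The same routine, with n_replicas = 100, would serve the rate-coded hidden units.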
To account for the fact that the model likes some individual face images more than others, we define the fit between two faces f_1 and f_2 as G(f_1, f_2) + G(f_2, f_1) − G(f_1, f_1) − G(f_2, f_2), where the goodness score G(v_1, v_2) is the negative free energy of the image pair v_1, v_2 under the model. Weight-sharing is not used, hence G(v_1, v_2) ≠ G(v_2, v_1). However, to preserve symmetry, each pair of images of the same individual v_1, v_2 in the training set has a reversed pair v_2, v_1 in the set. We trained the model with 100 hidden units on 1000 image pairs (500 distinct pairs) for 2000 iterations in batches of 100, with a learning rate of 2.5 × 10⁻⁶ for the weights, a learning rate of 5 × 10⁻⁶ for the biases, and a momentum of 0.95.

One advantage of eigenfaces over correlation is that once the test image has been converted into a vector of eigenface activations, comparisons of test and gallery images can be made in the low-dimensional space of eigenface activations rather than the high-dimensional space of pixel intensities. The same applies to our face-pair network, as the goodness score of an image pair is a simple function of the total input received by each hidden unit from each image. The total inputs from each gallery image can be precomputed and stored, while the total inputs from a test image only need to be computed once for comparisons with all gallery images.

4 The FERET database

Our version of the FERET database contained 1002 frontal face images of 429 individuals taken over a period of a few years under varying lighting conditions. Of these images, 818 are used as both the gallery and the training set and the remaining 184 are divided into four disjoint test sets:

The Δexpression test set contains 110 images of different individuals.
These individuals all have another image in the training set that was taken with the same lighting conditions at the same time but with a different expression. The training set also includes a further 244 pairs of images that differ only in expression.

The Δdays test set contains 40 images that come from 20 individuals. Each of these individuals has two images from the same session in the training set and two images taken in a session 4 days later or earlier in the test set. A further 28 individuals were photographed 4 days apart and all 112 of these images are in the training set.

The Δmonths test set is just like the Δdays test set except that the time between sessions was at least three months and different lighting conditions were present in the two sessions. This set contains 20 images of 10 individuals. A further 36 images of 9 more individuals were included in the training set.

The Δglasses test set contains 14 images of 7 different individuals. Each of these individuals has two images in the training set that were taken in another session on the same day. The training and test pairs for an individual differ in that one pair has glasses and the other does not. The training set includes a further 24 images, half with glasses and half without, from 6 more individuals.

Figure 2: Images are normalized in five stages: a) original image; b) locate centers of eyes by hand; c) rotate image; d) crop image and subsample at 56 × 56 pixels; e) mask out all of the background and some of the face, leaving 1768 pixels in an oval shape; f) equalize the intensity histogram; g) some examples of processed images.

The images include the whole head, parts of the shoulder, and background. Instead of working with whole images, which contain much irrelevant information, we worked with face images that were normalized as shown in Figure 2. Masking out all of the background inevitably loses the contour of the face, which contains much discriminative information.
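Stage f) of the normalization pipeline in Figure 2, histogram equalization, can be sketched for 8-bit grayscale pixels with the classic CDF remapping; the toy pixel list below stands in for the 1768 oval-masked face pixels.

```python
# Histogram equalization for 8-bit grayscale pixel values (stage f).
def equalize(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function over intensity levels
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    if n == cdf_min:            # flat image: nothing to equalize
        return list(pixels)
    # classic remapping that spreads intensities over the full range
    lut = [round((cdf[v] - cdf_min) * (levels - 1) / (n - cdf_min))
           for v in range(levels)]
    return [lut[p] for p in pixels]

img = [10, 10, 12, 50, 51, 52, 200, 220, 230, 240]   # toy pixel values
eq = equalize(img)
```

As the paper notes, this remapping removes most lighting variation at the cost of discarding absolute intensity information such as skin tone.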
The histogram equalization step removes most lighting effects, but it also removes some relevant information, like the skin tone. For the best performance, the contour shape and skin tone would have to be used as additional sources of discriminative information.

5 Comparative results

We compared RBMrate with four popular face recognition methods. The first and simplest is correlation, which returns the similarity score as the angle between two images represented as vectors of pixel intensities. This performed better than using the Euclidean distance as a score. The second method is eigenfaces [5], which first projects the images onto the principal component subspaces, then returns the similarity score as the angle between the projected images. The third method is fisherfaces [6]. Instead of projecting the images onto the subspace of the principal components, which maximizes the variance among the projected images, fisherfaces projects the images onto a subspace which, at the same time, maximizes the between-individual variances and minimizes the within-individual variances in the training set. The final method, which we shall call δppca, is proposed by Moghaddam et al. [7]. This method models differences between images of the same individual with one PPCA [8, 9], and differences between images of different individuals with another PPCA.

Figure 3: Error rates of all methods on all test sets. The bars in each group correspond, from left to right, to the rank-1, rank-2, rank-4, rank-8 and rank-16 error rates. The rank-n error rate is the percentage of test images where the n most similar gallery images are all incorrect.
Then, given a difference of two images, it returns as the similarity score the likelihood ratio of the difference image under the two PPCA models. It was the best performing algorithm in the September 1996 FERET test [10]. For eigenfaces, we used 199 principal components, omitting the first principal component, as we determined manually that it encodes simply for lighting conditions. This improved the recognition performance on all the test sets except for Δexpression. We used a subspace of dimension 200 for fisherfaces, while we used 10 and 30 dimensional PPCAs for the within-class and between-class models of δppca respectively. These are the same numbers used by Moghaddam et al. and give the best results in our simulations. The number of dimensions or hidden units used by each method was optimized for that particular method for best performance. Figure 3 shows the error rates of all five methods on the test sets. The results were averaged over 10 random partitions of the dataset to improve statistical significance. Correlation and eigenfaces perform poorly on Δexpression, probably because they do not attempt to ignore the within-individual variations, whereas the other methods do. All the models did very poorly on the Δmonths test set, which is unfortunate as this is the test set that is most like real applications. RBMrate performed best on Δexpression, fisherfaces is best on Δdays and Δglasses, while eigenfaces is best on Δmonths. These results show that RBMrate is competitive with, but does not perform better than, the other methods. Figure 4 shows that after our preprocessing, human observers also have great difficulty with the Δmonths test set, probably because the task is intrinsically difficult and is made even harder by the loss of contour and skin tone information combined with the misleading oval contour produced by masking out all of the background.

Figure 4: On the left is a test image from Δmonths and on the right are the 8 most similar images returned by RBMrate. Most human observers cannot find the correct match within these 8.

Figure 5: Example features learned by RBMrate. Each pair of RFs constitutes a feature. Top half: with unconstrained weights; bottom half: with non-negative weight constraints.

6 Receptive fields learned by RBMrate

The top half of figure 5 shows the weights of a few of the hidden units after training. All the units encode global features, probably because the image normalization ensures that there are strong long-range correlations in pixel intensities. The maximum size of the weights is 0.01765, with most weights having magnitudes smaller than 0.005. Note, however, that the hidden unit activations range from 0 to 100. On the left are 4 units exhibiting interesting features and on the right are 4 units chosen at random. The top unit of the first column seems to be encoding the presence of a mustache in both faces. The bottom unit seems to be coding for prominent right eyebrows in both faces. Note that these are facial features which often remain constant across images of the same individual. In the second column are two features which seem to encode for different facial expressions in the two faces. The right side of the top unit encodes a smile while the left side is expressionless. This is reversed in the bottom unit. So the network has discovered some features which are fairly constant across images in the same class, and some features which can differ substantially within a class. Inspired by [11], we tried to enforce local features by restricting the weights to be non-negative. This is achieved by resetting negative weights to zero after each weight update. The bottom half of figure 5 shows some of the hidden receptive fields learned. Except for the 4 features on the left, all the other features are local and code for features like mouth shape changes (third column) and eyes and cheeks (fourth column).
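The non-negativity constraint just described (resetting negative weights to zero after each weight update) is a projection onto the non-negative orthant. A minimal sketch of one constrained step; the function name and the generic gradient update are ours (RBMrate itself uses contrastive-divergence updates):

```python
import numpy as np

def nonneg_update(W, dW, lr=0.01):
    """Apply one weight update, then reset any negative weights to
    zero, as described in the text."""
    W = W + lr * dW
    return np.maximum(W, 0.0)  # projection: negative entries become 0
```

After training this way, each hidden unit's receptive field is non-negative, which tends to favor local features.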
The 4 features on the left are much more global and clearly capture the fact that the direction of the lighting can differ for two images of the same person. Unfortunately, constraining the weights to be non-negative strongly limits the representational power of RBMrate and makes it worse than all the other methods on all the test sets.

7 Conclusions

We have introduced a new method for face recognition based on a non-linear generative model. The generative model can be very complex, yet retains the efficiency required for applications. Performance on the FERET database is comparable to popular methods. However, unlike other methods based on linear models, there is plenty of room for further development, using prior knowledge to constrain the weights or additional layers of hidden units to model the correlations of feature detector activities. These improvements should translate into improvements in the rate of recognition.

Acknowledgements

We thank Jonathon Phillips for graciously providing us with the FERET database, the referees for useful comments and the Gatsby Charitable Foundation for funding.

References

[1] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Technical Report GCNU TR 2000-004, Gatsby Computational Neuroscience Unit, University College London, 2000.
[2] P. Smolensky. Information processing in dynamical systems: Foundations of harmony theory. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. MIT Press, 1986.
[3] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, San Mateo, CA, 1988.
[4] G. E. Hinton and T. J. Sejnowski. Learning and relearning in Boltzmann machines. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. MIT Press, 1986.
[5] M.
Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71-86, 1991.
[6] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman. Eigenfaces versus fisherfaces: recognition using class specific linear projection. In European Conference on Computer Vision, 1996.
[7] B. Moghaddam, W. Wahid, and A. Pentland. Beyond eigenfaces: probabilistic matching for face recognition. In IEEE International Conference on Automatic Face and Gesture Recognition, 1998.
[8] B. Moghaddam and A. Pentland. Probabilistic visual learning for object representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):696-710, 1997.
[9] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Technical Report NCRG/97/010, Neural Computing Research Group, Aston University, 1997.
[10] P. J. Phillips, H. Moon, P. Rauss, and S. A. Rizvi. The FERET September 1996 database and evaluation procedure. In International Conference on Audio and Video-based Biometric Person Authentication, 1997.
[11] D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401, October 1999.
| 2000 | 102 | 1,757 |
Shape Context: A new descriptor for shape matching and object recognition

Serge Belongie, Jitendra Malik and Jan Puzicha
Department of Electrical Engineering and Computer Sciences
University of California at Berkeley
Berkeley, CA 94720, USA
{sjb,malik,puzicha}@cs.berkeley.edu

Abstract

We develop an approach to object recognition based on matching shapes and using a resulting measure of similarity in a nearest neighbor classifier. The key algorithmic problem here is that of finding pointwise correspondences between an image shape and a stored prototype shape. We introduce a new shape descriptor, the shape context, which makes this possible, using a simple and robust algorithm. The shape context at a point captures the distribution over relative positions of other shape points and thus summarizes global shape in a rich, local descriptor. We demonstrate that shape contexts greatly simplify recovery of correspondences between points of two given shapes. Once shapes are aligned, shape contexts are used to define a robust score for measuring shape similarity. We have used this score in a nearest-neighbor classifier for recognition of handwritten digits as well as 3D objects, using exactly the same distance function. On the benchmark MNIST dataset of handwritten digits, this yields an error rate of 0.63%, outperforming other published techniques.

1 Introduction

The last decade has seen increased application of statistical pattern recognition techniques to the problem of object recognition from images. Typically, an image block with n pixels is regarded as an n dimensional feature vector formed by concatenating the brightness values of the pixels. Given this representation, a number of different strategies have been tried, e.g. nearest-neighbor techniques after extracting principal components [15, 13], convolutional neural networks [12], and support vector machines [14, 5]. Impressive performance has been demonstrated on datasets such as digits and faces.
A vector of pixel brightness values is a somewhat unsatisfactory representation of an object. Basic invariances, e.g. to translation, scale and small amounts of rotation, must be obtained by suitable pre-processing or by the use of enormous amounts of training data [12]. Instead, we will try to extract "shape", which by definition is required to be invariant under a group of transformations. The problem then becomes that of operationalizing a definition of shape. The literature in computer vision and pattern recognition is full of definitions of shape descriptors and distance measures, ranging from moments and Fourier descriptors to the Hausdorff distance and the medial axis transform. (For a recent overview, see [16].) Most of these approaches suffer from one of two difficulties: (1) Mapping the shape to a small number of numbers, e.g. moments, loses information. Inevitably, this means sacrificing discriminative power. (2) Descriptors restricted to silhouettes and closed curves are of limited applicability. Shape is a much more general concept. Fundamentally, shape is about relative positional information. This has motivated approaches such as [1], which find key points or landmarks and recognize objects using the spatial arrangements of point sets. However, not all objects have distinguished key points (think of a circle for instance), and using key points alone sacrifices the shape information available in smooth portions of object contours. Our approach therefore uses a general representation of shape: a set of points sampled from the contours on the object. Each point is associated with a novel descriptor, the shape context, which describes the coarse arrangement of the rest of the shape with respect to the point. This descriptor will be different for different points on a single shape S; however, corresponding (homologous) points on similar shapes S and S' will tend to have similar shape contexts.
Correspondences between the point sets of S and S' can be found by solving a bipartite weighted graph matching problem with edge weights Cij defined by the similarity of the shape contexts of points i and j. Given correspondences, we can effectively calculate the similarity between the shapes S and S'. This similarity measure is then employed in a nearest-neighbor classifier for object recognition. The core of our work is the concept of shape contexts and its use for solving the correspondence problem between two shapes. It can be compared to an alternative framework for matching point sets due to Gold, Rangarajan and collaborators (e.g. [7, 6]). They propose an iterative optimization algorithm to jointly determine point correspondences and underlying image transformations. The cost measure is the Euclidean distance between the first point set and a transformed version of the second point set. This formulation leads to a difficult non-convex optimization problem which is solved using deterministic annealing. Another related approach is elastic graph matching [11], which also leads to a difficult stochastic optimization problem.

2 Matching with Shape Contexts

In our approach, a shape is represented by a discrete set of points sampled from the internal or external contours on the shape. These can be obtained as locations of edge pixels as found by an edge detector, giving us a set P = {p1, ..., pn}, pi ∈ ℝ², of n points. They need not, and typically will not, correspond to key points such as maxima of curvature or inflection points. We prefer to sample the shape with roughly uniform spacing, though this is also not critical. Fig. 1(a,b) shows sample points for two shapes. For each point pi on the first shape, we want to find the "best" matching point qj on the second shape. This is a correspondence problem similar to that in stereopsis.
Experience there suggests that matching is easier if one uses a rich local descriptor instead of just the brightness at a single pixel or edge location. Rich descriptors reduce the ambiguity in matching. In this paper, we propose a descriptor, the shape context, that could play such a role in shape matching. Consider the set of vectors originating from a point to all other sample points on a shape. These vectors express the configuration of the entire shape relative to the reference point. Obviously, this set of n − 1 vectors is a rich description, since as n gets large, the representation of the shape becomes exact.

Figure 1: Shape context computation and matching. (a, b) Sampled edge points of two shapes. (c) Diagram of log-polar histogram bins used in computing the shape contexts. We use 5 bins for log r and 12 bins for θ. (d-f) Example shape contexts for reference samples marked by ○, ◇, ◁ in (a, b). Each shape context is a log-polar histogram of the coordinates of the rest of the point set measured using the reference point as the origin. (Dark = large value.) Note the visual similarity of the shape contexts for ○ and ◇, which were computed for relatively similar points on the two shapes. By contrast, the shape context for ◁ is quite different. (g) Correspondences found using bipartite matching, with costs defined by the χ² distance between histograms.

The full set of vectors as a shape descriptor is much too detailed, since shapes and their sampled representation may vary from one instance to another in a category. We identify the distribution over relative positions as a more robust and compact, yet highly discriminative descriptor. For a point pi on the shape, we compute a coarse histogram hi of the relative coordinates of the remaining n − 1 points,

    hi(k) = #{ q ≠ pi : (q − pi) ∈ bin(k) }.

This histogram is defined to be the shape context of pi.
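The shape context histogram can be computed directly from the sampled points. A NumPy sketch using the 5 radial × 12 angular bins of Fig. 1(c); the radial-range constants and function name are our assumptions, not from the paper:

```python
import numpy as np

def shape_contexts(points, n_r=5, n_theta=12, r_min=0.125, r_max=2.0):
    """Log-polar shape context histogram for every sample point.
    points: (n, 2) array of sampled contour points.
    Returns an (n, n_r * n_theta) array, one histogram per point."""
    n = len(points)
    diff = points[None, :, :] - points[:, None, :]   # diff[i, j] = p_j - p_i
    r = np.hypot(diff[..., 0], diff[..., 1])         # pairwise distances
    r = r / np.median(r[r > 0])                      # normalize by median distance
    theta = np.arctan2(diff[..., 1], diff[..., 0])   # angles in [-pi, pi]

    # log-spaced radial edges, uniform angular bins
    r_edges = np.logspace(np.log10(r_min), np.log10(r_max), n_r + 1)
    r_bin = np.digitize(r, r_edges) - 1              # -1 = too close, n_r = too far
    t_bin = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta

    H = np.zeros((n, n_r * n_theta))
    for i in range(n):
        for j in range(n):
            if i != j and 0 <= r_bin[i, j] < n_r:
                H[i, r_bin[i, j] * n_theta + t_bin[i, j]] += 1
    return H
```

Because distances are normalized by the median pairwise distance, translating or uniformly scaling the point set leaves the histograms unchanged, as the text claims.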
The descriptor should be more sensitive to differences in nearby pixels. We therefore use a log-polar coordinate system (see Fig. 1(c)). All distances are measured in units of α, where α is the median distance between the n² point pairs in the shape. Note that the construction ensures that global translation or scaling of a shape will not affect the shape contexts. Since shape contexts are extremely rich descriptors, they are inherently tolerant to small perturbations of parts of the shape. While we have no theoretical guarantees here, robustness to small affine transformations, occlusions and the presence of outliers is evaluated experimentally in [2]. Modifications to the shape context definition that provide for complete rotation invariance can also be provided [2]. Consider a point pi on the first shape and a point qj on the second shape. Let Cij = C(pi, qj) denote the cost of matching these two points. As shape contexts are distributions represented as histograms, it is natural¹ to use the χ² test statistic:

    Cij = (1/2) Σk [hi(k) − hj(k)]² / [hi(k) + hj(k)],

where hi(k) and hj(k) denote the K-bin normalized histograms at pi and qj. The cost Cij for matching points can include an additional term based on the local appearance similarity at points pi and qj. This is particularly useful when we are comparing shapes derived from gray-level images instead of line drawings. For example, one can add a cost based on color or texture similarity, SSD between small gray-scale patches, distance between vectors of filter outputs, similarity of tangent angles, and so on. The choice of this appearance similarity term is application dependent, and is driven by the necessary invariance and robustness requirements, e.g. varying lighting conditions make reliance on gray-scale brightness values risky. Given the set of costs Cij between all pairs of points i on the first shape and j on the second shape, we want to minimize the total cost of matching subject to the constraint that the matching be one-to-one.
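The χ² matching cost above can be computed for all point pairs at once. A small sketch (the ε guard against empty bins is our addition):

```python
import numpy as np

def chi2_costs(H1, H2, eps=1e-10):
    """Pairwise chi-squared test statistic between two sets of
    shape-context histograms; each row is one point's histogram."""
    # normalize each histogram to sum to one, as in the text
    H1 = H1 / H1.sum(axis=1, keepdims=True)
    H2 = H2 / H2.sum(axis=1, keepdims=True)
    diff = H1[:, None, :] - H2[None, :, :]
    tot = H1[:, None, :] + H2[None, :, :]
    return 0.5 * (diff ** 2 / (tot + eps)).sum(axis=2)
```

With normalized histograms, each cost lies in [0, 1], with 0 for identical histograms.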
This is an instance of the square assignment (or weighted bipartite matching) problem, which can be solved in O(N³) time using the Hungarian method. In our experiments, we use the more efficient algorithm of [10]. The input is a square cost matrix with entries Cij. The result is a permutation π(i) such that the sum Σi Ci,π(i) is minimized. When the number of samples on the two shapes is not equal, the cost matrix can be made square by adding "dummy" nodes to each point set with a constant matching cost of εd. The same technique may also be used even when the sample numbers are equal to allow for robust handling of outliers. In this case, a point will be matched to a "dummy" whenever there is no real match available at smaller cost than εd. Thus, εd can be regarded as a threshold parameter for outlier detection. Given a set of sample point correspondences between two shapes, one can proceed to estimate a transformation that maps one shape into the other. For this purpose there are several options; perhaps the most common is the affine model. In this work, we use the thin plate spline (TPS) model, which is commonly used for representing flexible coordinate transformations [17, 6]. Bookstein [4], for example, found it to be highly effective for modeling changes in biological forms. The thin plate spline is the 2D generalization of the cubic spline, and in its regularized form includes affine transformations as a limiting case. Our complete matching algorithm is obtained by alternating between the steps of recovering correspondences and estimating transformations. We usually employ a fixed number of iterations, typically three in large scale experiments, but more refined schemes are possible. However, experimental experience shows that the algorithmic performance is independent of the details. More details may be found in [2]. As far as we are aware, the shape context descriptor and its use for matching 2D shapes is novel.
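The dummy-padding trick can be paired with any square-assignment solver; SciPy's `linear_sum_assignment` (a Jonker-Volgenant-style solver in the spirit of [10]) works directly. A sketch, with the helper name our own:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_with_dummies(C, eps_d):
    """Pad the cost matrix to square with constant-cost "dummy"
    entries eps_d, solve the assignment, and return only the
    real-to-real correspondences."""
    n, m = C.shape
    size = max(n, m)
    padded = np.full((size, size), float(eps_d))
    padded[:n, :m] = C
    rows, cols = linear_sum_assignment(padded)
    return [(i, j) for i, j in zip(rows, cols) if i < n and j < m]
```

To also reject outliers when n = m, one can instead pad both sides (to an (n + m) × (n + m) matrix), so that every real point has a dummy alternative at cost eps_d.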
A related idea in past work is that due to Johnson and Hebert [9] in their work on range images. They introduced a representation for matching dense clouds of oriented 3D points called the "spin image". A spin image is a 2D histogram formed by spinning a plane around a normal vector on the surface of the object and counting the points that fall inside bins in the plane.

¹Alternatives include Bickel's generalization of the Kolmogorov-Smirnov test for 2D distributions [3], which does not require binning.

Figure 2: Handwritten digit recognition on the MNIST dataset. Left: test set errors of a 1-NN classifier using SSD and Shape Distance (SD) measures. Right: detail of the performance curve for Shape Distance, including results with training set sizes of 15,000 and 20,000. Results are shown on a semilog-x scale for K = 1, 3, 5 nearest neighbors.

3 Classification using Shape Context matching

Matching shapes enables us to define distances between shapes; given such a distance measure, a straightforward strategy for recognition is to use a K-NN classifier. In the following two case studies we used 100 point samples selected from the Canny edges of each image. We employed a regularized TPS transformation model and used 3 iterations of shape context matching and TPS re-estimation. After matching, we estimated shape distances as the weighted sum of three terms: shape context distance, image appearance distance and bending energy. We measure the shape context distance between shapes P and Q as the symmetric sum of shape context matching costs over best matching points, i.e.

    Dsc(P, Q) = (1/n) Σ_{p∈P} argmin_{q∈Q} C(p, T(q)) + (1/m) Σ_{q∈Q} argmin_{p∈P} C(p, T(q)),   (1)

where T(·) denotes the estimated TPS shape transformation. We use a term Dac(P, Q) for appearance cost, defined as the sum of squared brightness differences in Gaussian windows around corresponding image points. This score is computed after the thin plate spline transformation T has been applied to best warp the images into alignment. The third term Dbe(P, Q) corresponds to the 'amount' of transformation necessary to align the shapes. In the TPS case the bending energy is a natural measure (see [4, 2]).

Case study 1: Digit recognition. Here we present results on the MNIST dataset of handwritten digits, which consists of 60,000 training and 10,000 test digits [12]. Nearest neighbor classifiers have the property that as the number of examples n in the training set goes to infinity, the 1-NN error converges to a value ≤ 2E*, where E* is the Bayes risk (for K-NN, with K → ∞ and K/n → 0, the error → E*). However, what matters in practice is the performance for small n, and this gives us a way to compare different similarity/distance measures. In Fig. 2, our shape distance is compared to SSD (sum of squared differences between pixel brightness values). On the MNIST dataset nearly 30 algorithms have been compared (http://www.research.att.com/~yann/exdb/mnist/index.html). The lowest test set error rate published at this time is 0.7% for a boosted LeNet-4 with a training set of size 60,000 × 10 synthetic distortions per training digit. Our error rate using 20,000 training examples and 3-NN is 0.63%.

Figure 3: 3D object recognition. (a) Comparison of test set error for SSD, Shape Distance (SD), and Shape Distance with K-medoid prototypes (SD-proto) vs. number of prototype views. For SSD and SD, we varied the number of prototypes uniformly for all objects. For SD-proto, the number of prototypes per object depended on the within-object variation as well as the between-object similarity. (b) K-medoid prototype views for two different examples, using an average of 4 prototypes per object.

Case study 2: 3D object recognition. Our next experiment involves the 20 common household objects from the COIL-20 database [13]. We prepared our training sets by selecting a number of equally spaced views for each object and using the remaining views for testing. The matching algorithm is exactly the same as for digits. Fig. 3(a) shows the performance using 1-NN on the weighted shape distance compared to a straightforward sum of squared differences (SSD). SSD performs very well on this easy database due to the lack of variation in lighting [8]. Since the objects in the COIL-20 database have differing variability with respect to viewing angle, it is natural to ask whether prototypes can be allocated more efficiently. We have developed a novel editing algorithm based on shape distance and K-medoid clustering. K-medoids can be seen as a variant of K-means that restricts prototype positions to data points. First, a matrix of pairwise similarities between all possible prototypes is computed. For a given number K of prototypes, the K-medoid algorithm then iterates two steps: (i) for a given assignment of points to (abstract) clusters, a prototype is selected by minimizing the average distance of the prototype to all elements in the cluster, and (ii) given the set of prototypes, points are then reassigned to clusters according to the nearest prototype. The number of prototypes is selected by a greedy splitting strategy starting from one prototype per category. We choose the cluster to split based on the associated overall misclassification error.
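The two alternating K-medoid steps can be sketched on a precomputed pairwise-distance matrix (the function and its random initialization are our own minimal version; the greedy prototype-splitting wrapper described in the text is omitted):

```python
import numpy as np

def k_medoids(D, K, n_iter=20, seed=0):
    """Alternate (i) picking each cluster's medoid as the member with
    minimum average distance to the cluster, and (ii) reassigning
    points to the nearest medoid, on a distance matrix D."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=K, replace=False)
    for _ in range(n_iter):
        assign = np.argmin(D[:, medoids], axis=1)          # step (ii)
        new_medoids = medoids.copy()
        for k in range(K):
            members = np.where(assign == k)[0]
            if len(members) > 0:                            # step (i)
                within = D[np.ix_(members, members)]
                new_medoids[k] = members[np.argmin(within.mean(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(D[:, medoids], axis=1)
```

Restricting prototypes to data points is what makes the method usable here: only pairwise shape distances are needed, never cluster means in shape space.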
This continues until the overall misclassification error has dropped below a criterion level. The editing algorithm is illustrated in Fig. 3(b). As seen, more prototypes are allocated to categories with high within-class variability. The curve marked SD-proto in Fig. 3 shows the improved classification performance using this prototype selection strategy instead of equally-spaced views. Note that we obtain a 2.4% error rate with an average of only 4 two-dimensional views for each three-dimensional object, thanks to the flexibility provided by the matching algorithm.

4 Conclusion

We have presented a new approach to computing shape similarity and correspondences based on the shape context descriptor. Appealing features of our approach are its simplicity and robustness. The standard invariances are built in for free, and as a consequence we developed a classifier that is highly effective even when only a small number of training examples are available.

Acknowledgments

This research is supported by (ARO) DAAH04-96-1-0341, the Digital Library Grant IRI-9411334, an NSF graduate fellowship for S.B. and the German Research Foundation (DFG) by Emmy Noether grant PU-165/1.

References

[1] Y. Amit, D. Geman, and K. Wilder. Joint induction of shape features and tree classifiers. IEEE Trans. PAMI, 19(11):1300-1305, November 1997.
[2] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. Technical report, UC Berkeley, January 2001.
[3] P. J. Bickel. A distribution free version of the Smirnov two-sample test in the multivariate case. Annals of Mathematical Statistics, 40:1-23, 1969.
[4] F. L. Bookstein. Principal warps: thin-plate splines and decomposition of deformations. IEEE Trans. PAMI, 11(6):567-585, June 1989.
[5] C. Burges and B. Schölkopf. Improving the accuracy and speed of support vector machines. In NIPS, pages 375-381, 1997.
[6] H. Chui and A. Rangarajan. A new algorithm for non-rigid point matching.
In CVPR, volume 2, pages 44-51, June 2000.
[7] S. Gold, A. Rangarajan, C.-P. Lu, S. Pappu, and E. Mjolsness. New algorithms for 2D and 3D point matching: pose estimation and correspondence. Pattern Recognition, 31(8), 1998.
[8] D. P. Huttenlocher, R. Lilien, and C. Olson. View-based recognition using an eigenspace approximation to the Hausdorff measure. IEEE Trans. PAMI, 21(9):951-955, September 1999.
[9] A. E. Johnson and M. Hebert. Recognizing objects by matching oriented points. In CVPR, pages 684-689, 1997.
[10] R. Jonker and A. Volgenant. A shortest augmenting path algorithm for dense and sparse linear assignment problems. Computing, 38:325-340, 1987.
[11] M. Lades, C. C. Vorbrüggen, J. Buhmann, J. Lange, C. von der Malsburg, R. P. Würtz, and W. Konen. Distortion invariant object recognition in the dynamic link architecture. IEEE Trans. Computers, 42(3):300-311, March 1993.
[12] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, November 1998.
[13] H. Murase and S. K. Nayar. Visual learning and recognition of 3-D objects from appearance. Int. Journal of Computer Vision, 14(1):5-24, January 1995.
[14] M. Oren, C. Papageorgiou, P. Sinha, E. Osuna, and T. Poggio. Pedestrian detection using wavelet templates. In CVPR, pages 193-199, Puerto Rico, June 1997.
[15] M. Turk and A. P. Pentland. Eigenfaces for recognition. J. Cognitive Neuroscience, 3(1):71-86, 1991.
[16] R. C. Veltkamp and M. Hagedoorn. State of the art in shape matching. Technical Report UU-CS-1999-27, Utrecht University, 1999.
[17] G. Wahba. Spline Models for Observational Data. SIAM, 1990.
| 2000 | 103 | 1,758 |
Position Variance, Recurrence and Perceptual Learning

Zhaoping Li  Peter Dayan
Gatsby Computational Neuroscience Unit
17 Queen Square, London, England, WC1N 3AR
zhaoping@gatsby.ucl.ac.uk  dayan@gatsby.ucl.ac.uk

Abstract

Stimulus arrays are inevitably presented at different positions on the retina in visual tasks, even those that nominally require fixation. In particular, this applies to many perceptual learning tasks. We show that perceptual inference or discrimination in the face of positional variance has a structurally different quality from inference about fixed position stimuli, involving a particular, quadratic, non-linearity rather than a purely linear discrimination. We show the advantage taking this non-linearity into account has for discrimination, and suggest it as a role for recurrent connections in area V1, by demonstrating the superior discrimination performance of a recurrent network. We propose that learning the feedforward and recurrent neural connections for these tasks corresponds to the fast and slow components of learning observed in perceptual learning tasks.

1 Introduction

The field of perceptual learning in simple, but high precision, visual tasks (such as vernier acuity tasks) has produced many surprising results whose import for models has yet to be fully felt. A core of results is that there are two stages of learning, one fast, which happens over the first few trials, and another slow, which happens over multiple sessions, may involve REM sleep, and can last for months or even years (Fahle, 1994; Karni & Sagi, 1993; Fahle, Edelman, & Poggio 1995). Learning is surprisingly specific, in some cases being tied to the eye of origin of the input and rarely admitting generalisation across wide areas of space or between tasks that appear extremely similar, even involving the same early-stage detectors (eg Fahle, Edelman, & Poggio 1995; Fahle, 1994).
For instance, improvement through learning on an orientation discrimination task does not lead to improvement on a vernier acuity task (Fahle 1997), even though both tasks presumably use the same orientation selective striate cortical cells to process inputs. Of course, learning in human psychophysics is likely to involve plasticity in a large number of different parts of the brain over various timescales. Previous studies (Poggio, Fahle, & Edelman 1992; Weiss, Edelman, & Fahle 1993) proposed phenomenological models of learning in a feedforward network architecture. In these models, the first stage units in the network receive the sensory inputs through the medium of basis functions relevant for the perceptual task. Over learning, a set of feedforward weights is acquired such that the weighted sum of the activities from the input units can be used to make an appropriate binary decision, eg using a threshold. These models can account for some, but not all, observations on perceptual learning (Fahle et al 1995). Since the activity of V1 units seems not to relate directly to behavioral decisions on these visual tasks, the feedforward connections must model processing beyond V1.

Figure 1: Mid-point discrimination. A) Three bars are presented at x−, x0 and x+. The task is to report which of the outer bars is closer to the central bar. y represents the variable placement of the stimulus array. B) Population activities in cortical cells evoked by the stimulus bars: the activities ai are plotted against the preferred location xi of the cells. This comes from Gaussian tuning curves (k = 20; T = 0.1) and Poisson noise. There are 81 units whose preferred values are placed at regular intervals of ∆x = 0.05 between x = −2 and x = 2.
The lack of generalisation between tasks that involve the same visual feature samplers suggests that the basis functions, eg the orientation selective primary cortical cells that sample the inputs, do not change their sensitivity and shapes, eg their orientation selectivity or tuning widths. However, evidence such as the specificity of learning to the eye of origin and spatial location strongly suggests that lower visual areas such as V1 are directly involved in learning. Indeed, V1 is a visual processor of quite some computational power (performing tasks such as segmentation, contour-integration, pop-out, and noise removal) rather than being just a feedforward, linear, processing stage (eg Li, 1999; Pouget et al 1998). Here, we study a paradigmatic perceptual task from a statistical perspective. Rather than suggest particular learning rules, we seek to understand what it is about the structure of the task that might lead to two phases of learning (fast and slow), and thus what computational job might be ascribed to V1 processing, in particular, the role of lateral recurrent connections. We agree with the general consensus that fast learning involves the feedforward connections. However, by considering positional invariance for discrimination, we show that there is an inherently non-linear component to the overall task, which defeats feedforward algorithms.

2 The bisection task

Figure 1A shows the bisection task. Three bars are presented at horizontal positions x0 = y + ε, x− = −1 + y and x+ = 1 + y, where −1 ≪ ε ≪ 1. Here y is a nuisance random number with zero mean, reflecting the variability in the position of the stimulus array due to eye movements or other uncontrolled factors. The task for the subject is to report which of the outer bars is closer to the central bar, ie to report whether ε is greater than or less than 0. The bars create a population-coded representation in V1 cells preferring vertical orientation.
In figure 1B, we show the activity of cells a_i as a function of the preferred topographic location x_i of the cell; for simplicity, we ignore activities from other V1 cells which prefer orientations other than vertical. We assume that the cortical response to the bars is additive, with mean

  ā_i(ε, y) = f(x_i − x₀) + f(x_i − x₋) + f(x_i − x₊)   (1)

(we often drop the dependence on ε, y and write ā_i, or, for all the components, ā), where f is, say, a Gaussian tuning curve with height k and tuning width τ, f(x) = k e^{−x²/2τ²}, usually with τ ≪ 1. The net activity is a_i = ā_i + n_i, where n_i is a noise term. We assume that n_i comes from a Poisson distribution and is independent across the units, and that ε and y have mean zero and are uniformly distributed in their respective ranges. The subject must report whether ε is greater or less than 0 on the basis of the activities a. A normative way to do this is to calculate the probability P[ε|a] of ε given a, and report by maximum likelihood (ML) that ε > 0 if ∫_{ε>0} dε P[ε|a] > 0.5. Without prior information about ε, y, and with Poisson noise n_i = a_i − ā_i, we have

  P[a|ε, y] = ∏_i e^{−ā_i(ε,y)} ā_i(ε,y)^{a_i} / a_i!   (2)

3 Fixed position stimulus array

When the stimulus array is in a fixed position y = 0, analysis is easy, and is very similar to that carried out by Seung & Sompolinsky (1993). Dropping y, we calculate log P[ε|a] and approximate it by Taylor expansion about ε = 0 to second order in ε:

  log P[a|ε] ≈ constant + ε (∂/∂ε) log P[a|ε]|_{ε=0} + (ε²/2) (∂²/∂ε²) log P[a|ε]|_{ε=0}   (3)

ignoring higher order terms. Provided that the last term is negative (which it indeed is, almost surely), we derive an approximately Gaussian distribution

  P[ε|a] ∝ exp(−(ε − ε̂)²/2σ_ε²)   (4)

with variance σ_ε² = [−(∂²/∂ε²) log P[a|ε]|_{ε=0}]^{−1} and mean ε̂ = σ_ε² (∂/∂ε) log P[a|ε]|_{ε=0}. Thus the subject should report that ε > 0 or ε < 0 if the test t(a) = (∂/∂ε) log P[a|ε]|_{ε=0} is greater or less than zero respectively. For the Poisson noise case we consider, log P[a|ε] = constant + Σ_i a_i log ā_i(ε), since Σ_i ā_i(ε) is a constant, independent of ε.
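As a concrete sketch of this setup (grid, k and τ as in Figure 1; the finite-difference step h and the random seed are our own choices), the mean activities of equation (1), Poisson spiking, and the linear ML test can be written as:

```python
import numpy as np

rng = np.random.default_rng(0)

k, tau = 20.0, 0.1                 # tuning-curve height and width (Figure 1)
xs = np.linspace(-2.0, 2.0, 81)    # 81 preferred locations, spacing 0.05

def f(x):
    """Gaussian tuning curve f(x) = k exp(-x^2 / (2 tau^2))."""
    return k * np.exp(-x**2 / (2 * tau**2))

def mean_activity(eps, y):
    """Equation (1): abar_i = f(x_i - x0) + f(x_i - x_minus) + f(x_i - x_plus)."""
    return f(xs - (y + eps)) + f(xs - (-1 + y)) + f(xs - (1 + y))

# ML test for y = 0: t(a) = sum_i a_i * d/deps log abar_i |_{eps=0},
# with the derivative approximated by a central finite difference.
h = 1e-4
w = (np.log(mean_activity(h, 0.0)) - np.log(mean_activity(-h, 0.0))) / (2 * h)

a = rng.poisson(mean_activity(0.05, 0.0))   # one noisy trial with eps = 0.05
t = w @ a
print("report eps > 0" if t > 0 else "report eps < 0")
```

On noiseless activities the test is exact: w @ mean_activity(ε, 0) has the sign of ε. On single noisy trials, errors occur when ε is much smaller than τ, as in figure 2C.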
Thus,

  t(a) = Σ_i a_i (∂/∂ε) log ā_i|_{ε=0}   (5)

Therefore, maximum likelihood discrimination can be implemented by a linear feedforward network mapping inputs a_i through feedforward weights w_i = (∂/∂ε) log ā_i|_{ε=0} to calculate as the output t(a) = Σ_i w_i a_i. A threshold of 0 on t(a) provides the discrimination: ε > 0 if t(a) > 0 and ε < 0 for t(a) < 0. The task therefore has an essentially linear character. Note that if the noise corrupting the activities is Gaussian, the weights should instead be w_i = ∂ā_i/∂ε. Figure 2A shows the optimal discrimination weights for the case of independent Poisson noise. The lower solid line in figure 2C shows optimal performance as a function of ε. The error rate drops precipitately from 50% for very small (and thus difficult) ε to almost 0, long before ε approaches the tuning width τ. It is also possible to learn weights in a variety of ways (e.g. Poggio, Fahle & Edelman 1992; Weiss, Edelman & Fahle 1993; Fahle, Edelman & Poggio 1995). Figure 2B shows discrimination weights learned using a simple error-correcting learning procedure, which are almost the same as the optimal weights and lead to performance that is essentially optimal (the lower dashed line in figure 2C). We use error-correcting learning as a comparison technique below.

4 Moveable stimulus array

If the stimulus array can move around, i.e. if y is not necessarily 0, then the discrimination task gets considerably harder. The upper dotted line in figure 2C shows the (rather unfair) test of using the learned weights in figure 2B when y ∈ [−.2, .2] varies uniformly. Clearly this has a highly detrimental effect on the quality of discrimination. Looking at the weight structure in figure 2A;B suggests an obvious reason for this: the weights associated with the outer bars are zero, since they provide no information about ε when y = 0, and the ML weights are finely balanced about 0, the mid-point of the outer bars, giving an unbiased or balanced discrimination on ε. If the whole array can move, this balance will be destroyed, and all the above conclusions change.

Figure 2: A) The ML optimal discrimination weights w = (∂/∂ε) log ā (plotted as w_i vs. x_i) for deciding if ε > 0 when y = 0. B) The learned discrimination weights w for the same decision. During online learning, random examples were selected with ε ∈ [−r, r] uniformly, r = 0.1, and the weights were adjusted online to maximise the log probability of generating the correct discrimination under a model in which the probability of declaring that ε > 0 is σ(Σ_i w_i a_i) = 1/(1 + exp(−Σ_i w_i a_i)). C) Performance of the networks with ML (lower solid line) and learned (lower dashed line) weights as a function of ε. Performance is measured by drawing a random ε and y, and assessing the percentage of trials on which the answer is incorrect. The upper dotted line shows the effect of drawing y ∈ [−0.2, 0.2] uniformly, yet using the weights in (B) that assume y = 0.

The equivalent of equation (3) when y ≠ 0 is

  log P[a|ε, y] ≈ constant + ε ∂_ε log P + y ∂_y log P + (ε²/2) ∂²_ε log P + εy ∂_ε∂_y log P + (y²/2) ∂²_y log P

with all derivatives of log P[a|ε, y] evaluated at ε = y = 0. Thus, to second order, a Gaussian distribution can approximate P[ε, y|a]. Figure 3A shows the high quality of this approximation. Here, ε and y are anti-correlated given activities a, because the information from the center stimulus bar only constrains their sum ε + y. Of interest is the probability P[ε|a] = ∫ dy P[ε, y|a], which is approximately Gaussian with mean βρ_ε² and variance ρ_ε², where, under Poisson noise n_i = a_i − ā_i,

  β = [a·∂_ε log ā − (a·∂_y∂_ε log ā)(a·∂_y log ā)/(a·∂²_y log ā)]|_{ε,y=0}
  ρ_ε⁻² = [(a·∂_y∂_ε log ā)²/(a·∂²_y log ā) − a·∂²_ε log ā]|_{ε,y=0}

Since −a·∂²_y log ā (which is the inverse variance of the Gaussian distribution of y that we integrated out) is positive, the appropriate test for the sign of ε is

  t(a) = [(a·∂_y∂_ε log ā)(a·∂_y log ā) − (a·∂_ε log ā)(a·∂²_y log ā)]|_{ε,y=0}   (6)

If t(a) > 0 then we should report ε > 0, and conversely.
Interestingly, t(a) is a very simple quadratic form

  t(a) = a·Q·a ≡ Σ_{ij} a_i a_j [(∂_y∂_ε log ā_i)(∂_y log ā_j) − (∂_ε log ā_i)(∂²_y log ā_j)]|_{ε,y=0}   (7)

Therefore, the discrimination problem in the face of positional variance has a precisely quantifiable non-linear character. The quadratic test t(a) cannot be implemented by a linear feedforward architecture only, since the optimal boundary t(a) = 0 separating the state space of a for a decision is now curved. Writing t(a) = a·Q̄·a, where the symmetric form Q̄_ij = (Q_ij + Q_ji)/2, we find Q̄ only has four non-zero eigenvalues, for the 4-dimensional sub-space spanned by the 4 vectors ∂_y∂_ε log ā|_{ε,y=0}, ∂_y log ā|_{ε,y=0}, ∂_ε log ā|_{ε,y=0}, and ∂²_y log ā|_{ε,y=0}. Q̄ and its eigenvectors and eigenvalues are shown in Figure 3B;C.

Figure 3: Varying y. A) Posterior distribution P[ε, y|a]. Exact (left) P[ε, y|a] for a particular a with true values ε = 0.27, y = 1.57 (with τ = 0.1), and its bivariate Gaussian approximation (right). Only the relevant region of (ε, y) space is shown; outside this, the probability mass is essentially 0 (and the contour values are the same). B) The quadratic form Q, Q_ij vs. x_i and x_j. C) The four eigenvectors of Q with non-zero eigenvalues (shown above). The eigenvalues come in ± pairs; the associated eigenvectors come in antisymmetric pairs. The absolute scale of Q and its eigenvalues is arbitrary.

Figure 4: y ≠ 0. A) Performance of the approximate inference based on the quadratic form of figure 3B, in terms of percentage error as a function of |y| and |ε| (τ = 0.1). B) Feedforward weights, w_i vs. x_i, learned using the same procedure as in figure 2B, but with y ∈ [−.2, .2] chosen uniformly at random. C) Ratio of error rates for the linear (weights from B) to the quadratic discrimination. Values that would be infinite are pegged at 20.
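The quadratic form (7) can be sketched numerically, with the analytic derivatives of log ā replaced by finite differences (tuning parameters as in Figure 1; the step size h is our own choice):

```python
import numpy as np

k, tau = 20.0, 0.1
xs = np.linspace(-2.0, 2.0, 81)

def log_abar(eps, y):
    """log of the mean activities abar_i(eps, y) of equation (1)."""
    a = sum(k * np.exp(-(xs - b)**2 / (2 * tau**2)) for b in (y + eps, -1 + y, 1 + y))
    return np.log(a)

# Partial derivatives of log abar at eps = y = 0 by central differences.
h = 1e-4
d_e  = (log_abar(h, 0) - log_abar(-h, 0)) / (2 * h)
d_y  = (log_abar(0, h) - log_abar(0, -h)) / (2 * h)
d_yy = (log_abar(0, h) - 2 * log_abar(0, 0) + log_abar(0, -h)) / h**2
d_ye = (log_abar(h, h) - log_abar(h, -h) - log_abar(-h, h) + log_abar(-h, -h)) / (4 * h**2)

# Equation (7): Q_ij = (d_ye)_i (d_y)_j - (d_e)_i (d_yy)_j, then symmetrised.
Q = np.outer(d_ye, d_y) - np.outer(d_e, d_yy)
Qbar = (Q + Q.T) / 2

# Qbar has rank 4: its range is spanned by the four derivative vectors.
s = np.linalg.svd(Qbar, compute_uv=False)
rank = int((s > s[0] * 1e-8).sum())
print("rank of symmetrised Q:", rank)
```

On noiseless activities, a·Q̄·a recovers the sign of ε for small ε and y, consistent with figure 4A.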
Note that if Gaussian rather than Poisson noise is used for n_i = a_i − ā_i, the test t(a) is still quadratic. Using t(a) to infer ε is sound for y up to two standard deviations (τ) of the tuning curve f(x) away from 0, as shown in Figure 4A. By comparison, a feedforward network, with weights shown in figure 4B and learned using the same error-correcting learning procedure as above, gives substantially worse performance, even though it is better than the feedforward net of Figure 2A;B. Figure 4C shows the ratio of the error rates for the linear to the quadratic decisions. The linear network is often dramatically worse, because it fails to take proper account of y. We originally suggested that recurrent interactions in the form of horizontal intra-cortical connections within V1 might be the site of the longer term improvement in behavior. Figure 5 demonstrates the plausibility of this idea. Input activity (as in figure 1B) is used to initialise the state u at time t = 0 of a recurrent network. The recurrent weights are symmetric and shown in figure 5B. The network activities evolve according to

  du_i/dt = −u_i + Σ_j J_ij g(u_j) + a_i   (8)

where J_ij is the recurrent weight from unit j to unit i, and g(u) = u if u > 0 and g(u) = 0 for u ≤ 0. The network activities finally settle to an equilibrium u(t → ∞) (note that u_i(t → ∞) = a_i when J = 0). The activity values u(t → ∞) of this equilibrium are fed through feedforward weights w, trained for this recurrent network just as for the pure feedforward case, to reach a decision Σ_i w_i u_i(t → ∞).

Figure 5: Threshold-linear recurrent network, its weights (B: recurrent weights; C: recurrent-network error; D: linear/recurrent error ratio), and performance. See text.
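Equation (8) is easy to integrate directly. The sketch below uses forward Euler; the J matrices are toy examples of ours (the trained recurrent weights of figure 5 are not reproduced here), used only to check the J = 0 identity noted in the text and a simple self-excitation case:

```python
import numpy as np

def settle(J, a, dt=0.01, steps=20000):
    """Integrate du_i/dt = -u_i + sum_j J_ij g(u_j) + a_i (equation 8),
    with g(u) = max(u, 0), starting from u(0) = a, until (near) equilibrium."""
    u = a.copy()
    for _ in range(steps):
        u = u + dt * (-u + J @ np.maximum(u, 0.0) + a)
    return u

a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
print(settle(np.zeros((5, 5)), a))   # J = 0: the equilibrium is the input a itself
print(settle(0.5 * np.eye(5), a))    # uniform self-excitation 0.5: amplifies a to 2a
```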
Figure 5C shows that using this network gives results that are almost invariant to y (as for the quadratic discriminator); and figure 5D shows that it generally outperforms the optimal linear discriminator by a large margin, albeit performing slightly worse than the quadratic form. The recurrent weight matrix is subject to three influences: (1) a short range interaction J_ij for |x_i − x_j| ≲ τ to stabilize activities ā_i induced by a single bar in the input; (2) a longer range interaction J_ij for |x_i − x_j| ~ 1 to mediate interaction between neighboring stimulus bars, amplifying the effects of the displacement signal ε; and (3) a slight local interaction J_ij for |x_i|, |x_j| ≲ τ. The first two interaction components are translation invariant in the spatial range x_i, x_j ∈ [−2, 2] where the stimulus array appears, in order to accommodate the positional variance in y. The last component is not translation invariant and counters variations in y.

5 Discussion

The problem of position invariant discrimination is common to many perceptual learning tasks, including hyper-acuity tasks such as the standard line vernier, three-dot vernier, curvature vernier, and orientation vernier tasks (Fahle et al 1995; Fahle 1997). Hence, the issues we address and analyze here are of general relevance. In particular, our mathematical formulation, derivations, and thus conclusions, are general and do not depend on any particular aspect of the bisection task. One essential problem in many of these tasks is to discriminate a stimulus variable ε that depends only on the relative positions between the stimulus features, while the absolute position y of the whole stimulus array can vary between trials by an amount that is much larger than the discrimination threshold (or acuity) on ε. The positional variable y may not have to correspond to the absolute position of the stimulus array, but merely to the error in the estimation of the absolute position of the stimulus by other neural areas.
Our study suggests that when y = 0 is fixed, the discrimination is easy and soluble by a linear feedforward network whose weights can be learnt in a straightforward manner; when y is not fixed, however, optimal discrimination of ε is based on an approximately quadratic function of the input activities, which cannot be implemented using a linear feedforward net. We also showed that a non-linear recurrent network, which is a close relative of a line attractor network, can perform much better than a pure feedforward network on the bisection task in the face of position variance. There is experimental evidence that lateral connections within V1 change after learning the bisection task (Gilbert 2000), although we have yet to construct an appropriate learning rule. We suggest that learning the recurrent weights for the nonlinear transform corresponds to the slow component in perceptual learning, while learning the feedforward weights corresponds to the fast component. The desired recurrent weights are expected to be much more difficult to learn, in the face of nonlinear transforms and (the easily unstable) recurrent dynamics. Further, the feedforward weights need to be adjusted further as the recurrent weights change the activities on which they work. The precise recurrent interactions in our network are very specific to the task and its parameters. In particular, the range of the interactions is completely determined by the scale of spacing between stimulus bars; and the distance-dependent excitation and inhibition in the recurrent weights is determined by the nature of the bisection task. This may be why there is little transfer of learning between tasks, when the nature and the spatial scale of the task change, even if the same input units are involved. However, our recurrent interaction model does predict that transfer is likely when the spacing between the two outer bars (here Δx = 2) changes by only a small fraction.
Further, since the signs of the recurrent synapses change drastically with the distance between the interacting cells, negative transfer is likely between two bisection tasks of slightly different spatial scales. We are planning to test this prediction. Achieving selectivity at the same time as translation invariance is a very basic requirement for position-invariant object recognition (see Riesenhuber & Poggio 1999 for a recent discussion), and arises in a pure form in this bisection task. Note, for instance, that trying to cope with different values of y by averaging spatially shifted versions of the optimal weights for y = 0 (figure 2A) would be hopeless, since this would erase (or at the very least blur) the precise spatial positioning of the peaks and troughs which underlies the discrimination power. It would be possible to scan the input for the value of y that fits best and then apply the discriminator centered about that value; indeed, this is conceptually what the neocognitron (Fukushima 1980) and the MAX-model (Riesenhuber & Poggio 1999) do, using layers of linear and non-linear combination. In our case, we have shown, at least for fairly small y, that the optimal non-linearity for the task is a simple quadratic.

Acknowledgements

Funding is from the Gatsby Charitable Foundation. We are very grateful to Shimon Edelman, Manfred Fahle and Maneesh Sahani for discussions.

References

[1] Karni A. and Sagi D. Nature 365: 250-252, 1993.
[2] Fahle M., Edelman S., and Poggio T. Vision Res. 35: 3003-3013, 1995.
[3] Fahle M. Perception 23: 411-427, 1994. And also Fahle M. Vis. Res. 37(14): 1885-1895, 1997.
[4] Poggio T., Fahle M., and Edelman S. Science 256: 1018-1021, 1992.
[5] Weiss Y., Edelman S., and Fahle M. Neural Computation 5: 695-718, 1993.
[6] Li, Zhaoping. Network: Computation in Neural Systems 10(2): 187-212, 1999.
[7] Pouget A., Zhang K., Deneve S., and Latham P.E. Neural Comput. 10(2): 373-401, 1998.
[8] Seung H.S. and Sompolinsky H. Proc Natl Acad Sci USA 90(22): 10749-10753, 1993.
[9] Koch C. Biophysics of Computation. Oxford University Press, 1999.
[10] Gilbert C. Presentation at the Neural Dynamics Workshop, Gatsby Unit, 2/2000.
[11] Riesenhuber M. and Poggio T. Nat Neurosci. 2(11): 1019-1025, 1999.
[12] Fukushima K. Biol. Cybern. 36: 193-202, 1980.
Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks Richard H.R. Hahnloser and H. Sebastian Seung Dept. of Brain & Cog. Sci., MIT, Cambridge, MA 02139 USA rh@ai.mit.edu, seung@mit.edu

Abstract

Ascribing computational principles to neural feedback circuits is an important problem in theoretical neuroscience. We study symmetric threshold-linear networks and derive stability results that go beyond the insights that can be gained from Lyapunov theory or energy functions. By applying linear analysis to subnetworks composed of coactive neurons, we determine the stability of potential steady states. We find that stability depends on two types of eigenmodes. One type determines global stability and the other type determines whether or not multistability is possible. We can prove the equivalence of our stability criteria with criteria taken from quadratic programming. Also, we show that there are permitted sets of neurons that can be coactive at a steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we can provide a formulation of long-term memory that is more general than the traditional perspective of fixed point attractor networks.

A Lyapunov function can be used to prove that a given set of differential equations is convergent. For example, if a neural network possesses a Lyapunov function, then for almost any initial condition, the outputs of the neurons converge to a stable steady state. In the past, this stability property was used to construct attractor networks that associatively recall memorized patterns. Lyapunov theory applies mainly to symmetric networks in which neurons have monotonic activation functions [1, 2].
Here we show that the restriction of activation functions to threshold-linear ones is not a mere limitation, but can yield new insights into the computational behavior of recurrent networks (for completeness, see also [3]). We present three main theorems about the neural responses to constant inputs. The first theorem provides necessary and sufficient conditions on the synaptic weight matrix for the existence of a globally asymptotically stable set of fixed points. These conditions can be expressed in terms of copositivity, a concept from quadratic programming and linear complementarity theory. Alternatively, they can be expressed in terms of certain eigenvalues and eigenvectors of submatrices of the synaptic weight matrix, making a connection to linear systems theory. The theorem guarantees that the network will produce a steady state response to any constant input. We regard this response as the computational output of the network, and its characterization is the topic of the second and third theorems. In the second theorem, we introduce the idea of permitted and forbidden sets. Under certain conditions on the synaptic weight matrix, we show that there exist sets of neurons that are "forbidden" by the recurrent synaptic connections from being coactivated at a stable steady state, no matter what input is applied. Other sets are "permitted," in the sense that they can be coactivated for some input. The same conditions on the synaptic weight matrix also lead to conditional multistability, meaning that there exists an input for which there is more than one stable steady state. In other words, forbidden sets and conditional multistability are inseparable concepts. The existence of permitted and forbidden sets suggests a new way of thinking about memory in neural networks. When an input is applied, the network must select a set of active neurons, and this selection is constrained to be one of the permitted sets. 
Therefore the permitted sets can be regarded as memories stored in the synaptic connections. Our third theorem states that there are constraints on the groups of permitted and forbidden sets that can be stored by a network. No matter which learning algorithm is used to store memories, active neurons cannot arbitrarily be divided into permitted and forbidden sets, because subsets of permitted sets have to be permitted and supersets of forbidden sets have to be forbidden.

1 Basic definitions

Our theory is applicable to the network dynamics

  dx_i/dt + x_i = [b_i + Σ_j W_ij x_j]⁺   (1)

where [u]⁺ = max{u, 0} is a rectification nonlinearity and the synaptic weight matrix is symmetric, W_ij = W_ji. The dynamics can also be written in a more compact matrix-vector form as ẋ + x = [b + Wx]⁺. The state of the network is x. An input to the network is an arbitrary vector b. An output of the network is a steady state x̄ in response to b. The existence of outputs and their relationship to the input are determined by the synaptic weight matrix W. A vector v is said to be nonnegative, v ≥ 0, if all of its components are nonnegative. The nonnegative orthant {v : v ≥ 0} is the set of all nonnegative vectors. It can be shown that any trajectory starting in the nonnegative orthant remains in the nonnegative orthant. Therefore, for simplicity we will consider initial conditions that are confined to the nonnegative orthant, x ≥ 0.

2 Global asymptotic stability

Definition 1 A steady state x̄ is stable if for all initial conditions sufficiently close to x̄, the state trajectory remains close to x̄ for all later times. A steady state is asymptotically stable if for all initial conditions sufficiently close to x̄, the state trajectory converges to x̄. A set of steady states is globally asymptotically stable if from almost all initial conditions, state trajectories converge to one of the steady states. Exceptions are of measure zero.
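A minimal simulation of the dynamics (1) by forward Euler; the two-neuron W and b below are a toy example of ours (mutual inhibition of 0.5, so I − W is positive definite and the steady state is unique), not a network from the paper:

```python
import numpy as np

def run_network(W, b, x0, dt=0.01, steps=50000):
    """Forward-Euler integration of dx_i/dt + x_i = [b_i + sum_j W_ij x_j]^+.
    Trajectories that start in the nonnegative orthant remain in it."""
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * (-x + np.maximum(b + W @ x, 0.0))
    return x

W = np.array([[ 0.0, -0.5],
              [-0.5,  0.0]])           # symmetric mutual inhibition
b = np.array([1.0, 0.9])
x_ss = run_network(W, b, np.array([0.2, 0.2]))
print(x_ss)   # both neurons stay active: x_ss = (I - W)^{-1} b = (11/15, 8/15)
```

At the equilibrium the fixed-point condition x = [b + Wx]⁺ holds by construction.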
Definition 2 A principal submatrix A of a square matrix B is a square matrix that is constructed by deleting a certain set of rows and the corresponding columns of B.

The following theorem establishes necessary and sufficient conditions on W for global asymptotic stability.

Theorem 1 If W is symmetric, then the following conditions are equivalent:
1. All nonnegative eigenvectors of all principal submatrices of I − W have positive eigenvalues.
2. The matrix I − W is copositive. That is, xᵀ(I − W)x > 0 for all nonnegative x, except x = 0.
3. For all b, the network has a nonempty set of steady states that are globally asymptotically stable.

Proof sketch:
• (1) ⇒ (2). Let v* be the minimum of vᵀ(I − W)v over nonnegative v on the unit sphere. If (2) is false, the minimum value is less than or equal to zero. It follows from Lagrange multiplier methods that the nonzero elements of v* comprise a nonnegative eigenvector of the corresponding principal submatrix of W with eigenvalue greater than or equal to unity.
• (2) ⇒ (3). By the copositivity of I − W, the function L = ½xᵀ(I − W)x − bᵀx is lower bounded and radially unbounded. It is also nonincreasing under the network dynamics in the nonnegative orthant, and constant only at steady states. By the Lyapunov stability theorem, the stable steady states are globally asymptotically stable. In the language of optimization theory, the network dynamics converges to a local minimum of L subject to the nonnegativity constraint x ≥ 0.
• (3) ⇒ (1). Suppose that (1) is false. Then there exists a nonnegative eigenvector of a principal submatrix of W with eigenvalue greater than or equal to unity. This can be used to construct an unbounded trajectory of the dynamics. ∎

The meaning of these stability conditions is best appreciated by comparing with the analogous conditions for the purely linear network obtained by dropping the rectification from (1).
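Condition 1 of Theorem 1 can be checked by brute force for small networks. This is a sketch of ours (exponential in the number of neurons, and assuming generic W without degenerate eigenvalues so that the numerical eigenbasis is unambiguous); the two example matrices are our own toys:

```python
import itertools
import numpy as np

def condition_one_holds(W, tol=1e-12):
    """True iff every nonnegative eigenvector of every principal submatrix
    of I - W has a positive eigenvalue (condition 1 of Theorem 1)."""
    n = W.shape[0]
    I_W = np.eye(n) - W
    for size in range(1, n + 1):
        for idx in itertools.combinations(range(n), size):
            vals, vecs = np.linalg.eigh(I_W[np.ix_(idx, idx)])
            for lam, v in zip(vals, vecs.T):
                v = v if v.sum() >= 0 else -v          # fix the arbitrary sign
                if np.all(v >= -tol) and lam <= 0:     # nonnegative eigenvector, bad eigenvalue
                    return False
    return True

W_inhib = np.array([[0.0, -0.5], [-0.5, 0.0]])  # mutual inhibition: passes
W_excit = np.array([[0.0,  1.5], [ 1.5, 0.0]])  # strong mutual excitation: the
print(condition_one_holds(W_inhib))             # nonnegative eigenvector (1,1) of W
print(condition_one_holds(W_excit))             # has eigenvalue 1.5 >= 1, so this fails
```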
In a linear network, all eigenvalues of W would have to be smaller than unity to ensure asymptotic stability. Here only nonnegative eigenvectors are able to grow without bound, due to the rectification, so that only their eigenvalues must be less than unity. All principal submatrices of W must be considered, because different sets of feedback connections are active, depending on the set of neurons that are above threshold. In a linear network, I − W would have to be positive definite to ensure asymptotic stability, but because of the rectification, here this condition is replaced by the weaker condition of copositivity. The conditions of Theorem 1 for global asymptotic stability depend only on W, but not on b. On the other hand, steady states do depend on b. The next lemma says that the mapping from input to output is surjective.

Lemma 1 For any nonnegative vector v ≥ 0 there exists an input b such that v is a steady state of equation 1 with input b.

Proof: Define c = v − ΣWΣv, where Σ = diag(σ₁, ..., σ_N) and σ_i = 1 if v_i > 0 and σ_i = 0 if v_i = 0. Choose b_i = c_i for v_i > 0 and b_i = −1 − (ΣWΣv)_i for v_i = 0. ∎

This Lemma states that any nonnegative vector can be realized as a fixed point. Sometimes this fixed point is stable, such as in networks subject to Theorem 1 in which only a single neuron is active. Indeed, the principal submatrix of I − W corresponding to a single active neuron is a diagonal element, which according to condition (1) of Theorem 1 must be positive. Hence it is always possible to activate only a single neuron at an asymptotically stable fixed point. However, as will become clear from the following Theorem, not all nonnegative vectors can be realized as asymptotically stable fixed points.

3 Forbidden and permitted sets

The following characterizations of stable steady states are based on the interlacing Theorem [4].
This Theorem says that if A is an n−1 by n−1 principal submatrix of an n by n symmetric matrix B, then the eigenvalues of A fall in between the eigenvalues of B. In particular, the largest eigenvalue of A is always smaller than the largest eigenvalue of B.

Definition 3 A set of neurons is permitted if the neurons can be coactivated at an asymptotically stable steady state for some input b. On the other hand, a set of neurons is forbidden if they cannot be coactivated at an asymptotically stable steady state, no matter what the input b.

Alternatively, we might have defined a permitted set as a set for which the corresponding square submatrix of I − W has only positive eigenvalues. And, similarly, a forbidden set could be defined as a set for which there is at least one non-positive eigenvalue. It follows from Theorem 1 that if the matrix I − W is copositive, then the eigenvectors corresponding to non-positive eigenvalues of forbidden sets have to have both positive and non-positive components.

Theorem 2 If the matrix I − W is copositive, then the following statements are equivalent:
1. The matrix I − W is not positive definite.
2. There exists a forbidden set.
3. The network is conditionally multistable. That is, there exists an input b such that there is more than one stable steady state.

Proof sketch:
• (1) ⇒ (2). I − W is not positive definite, and so there can be no asymptotically stable steady state in which all neurons are active, i.e. the set of all neurons is forbidden.
• (2) ⇒ (3). Denote the forbidden set with k active neurons by Σ. Without loss of generality, assume that the principal submatrix of I − W corresponding to Σ has k − 1 positive eigenvalues and only one non-positive eigenvalue (by virtue of the interlacing theorem and the fact that the diagonal elements of I − W must be positive, there is always a subset of Σ for which this is true).
By choosing b_i > 0 for neurons i belonging to Σ and b_j ≪ 0 for neurons j not belonging to Σ, the quadratic Lyapunov function L defined in Theorem 1 forms a saddle in the nonnegative quadrant defined by Σ. The saddle point is the point where L, restricted to the hyperplane defined by the k − 1 positive eigenvalues, reaches its minimum. But because neurons can be initialized to lower values of L on either side of the hyperplane, and because L is non-increasing along trajectories, there is no way trajectories can cross the hyperplane. In conclusion, we have constructed an input b for which the network is multistable.
• (3) ⇒ (1). Suppose that (1) is false. Then for all b the Lyapunov function L is convex and so has only a single local minimum in the convex domain x ≥ 0. This local minimum is also the global minimum. The dynamics must converge to this minimum. ∎

If I − W is positive definite, then a symmetric threshold-linear network has a unique steady state. This has been shown previously [5]. The next Theorem is an expansion of this result, stating an equivalent condition using the concept of permitted sets.

Theorem 3 If W is symmetric, then the following conditions are equivalent:
1. The matrix I − W is positive definite.
2. All sets are permitted.
3. For all b there is a unique steady state, and it is stable.

Proof:
• (1) ⇒ (2). If I − W is positive definite, then it is copositive. Hence (1) in Theorem 2 is false, and so (2) in Theorem 2 is false, i.e. all sets are permitted.
• (2) ⇒ (1). Suppose (1) is false. Then the set of all neurons must be forbidden, so not all sets are permitted.
• (1) ⇔ (3). See [5]. ∎

The following Theorem characterizes the forbidden and the permitted sets.

Theorem 4 Any subset of a permitted set is permitted. Any superset of a forbidden set is forbidden.

Proof: According to the interlacing Theorem, if the smallest eigenvalue of a symmetric matrix is positive, then so are the smallest eigenvalues of all its principal submatrices.
And, if the smallest eigenvalue of a principal submatrix is negative, then so is the smallest eigenvalue of the original matrix. ∎

4 An example - the ring network

A symmetric threshold-linear network with local excitation and larger range inhibition has been studied in the past as a model for how simple cells in primary visual cortex obtain their orientation tuning to visual stimulation [6, 7]. Inspired by these results, we have recently built an electronic circuit containing a ring network, using analog VLSI technology [3]. We have argued that the fixed tuning width of the neurons in the network arises because active sets consisting of more than a fixed number of contiguous neurons are forbidden. Here we give a more detailed account of this fact and provide a surprising result about the existence of some spurious permitted sets. Let the synaptic matrix of a 10-neuron ring network be translationally invariant. The connection between neurons i and j is given by

  W_ij = −β + α₀δ_ij + α₁(δ_{i,j+1} + δ_{i+1,j}) + α₂(δ_{i,j+2} + δ_{i+2,j})

where β quantifies global inhibition, α₀ self-excitation, α₁ first-neighbor lateral excitation and α₂ second-neighbor lateral excitation. In Figure 1 we have numerically computed the permitted sets of this network, with the parameters taken from [3], i.e. α₀ = 0, α₁ = 1.1, α₂ = 1, β = 0.55. The permitted sets were determined by diagonalising the 2¹⁰ square submatrices of I − W and by classifying the eigenvalues corresponding to nonnegative eigenvectors. Figure 1 shows the resulting parent permitted sets (those that have no permitted supersets). Consistent with the finding that such ring networks can explain contrast-invariant tuning of V1 cells and multiplicative response modulation of parietal cells, we found that there are no permitted sets that consist of more than 5 contiguous active neurons.
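The enumeration just described can be sketched directly. We assume the δ-indices wrap around the ring (i.e. are taken modulo 10), consistent with the stated translational invariance; the loop below only scans contiguous runs, which suffices for the contiguity claim:

```python
import numpy as np

# Ring weights: W_ij = -beta + a0*[d=0] + a1*[d=1] + a2*[d=2], d = ring distance.
N, a0, a1, a2, beta = 10, 0.0, 1.1, 1.0, 0.55
W = np.empty((N, N))
for i in range(N):
    for j in range(N):
        d = min((i - j) % N, (j - i) % N)       # circular distance on the ring
        W[i, j] = -beta + (a0 if d == 0 else a1 if d == 1 else a2 if d == 2 else 0.0)

def is_permitted(subset):
    """Permitted iff the principal submatrix of I - W on the subset
    has only positive eigenvalues."""
    idx = list(subset)
    return np.linalg.eigvalsh((np.eye(N) - W)[np.ix_(idx, idx)])[0] > 0

# By Theorem 4, permittedness of contiguous runs is monotone in their length.
longest = 0
for m in range(1, N + 1):
    if is_permitted(tuple(range(m))):
        longest = m
print("largest permitted contiguous run:", longest)
```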
However, as can be seen, there are many non-contiguous permitted sets that could in principle be activated by exciting neurons in white and strongly inhibiting neurons in black. Because the activation of the spurious permitted sets requires highly specific input (inhibition of high spatial frequency), it can be argued that the presence of the spurious permitted sets is not relevant for the normal operation of the ring network, where inputs are typically tuned and excitatory (such as inputs from the LGN to primary visual cortex).

Figure 1: Left: Output of a ring network of 10 neurons to uniform input (random initial condition). Right: The 9 parent permitted sets (x-axis: neuron number, y-axis: set number). White means that a neuron belongs to a set and black means that it does not. Left-right and translation symmetric parent permitted sets of the ones shown have been excluded. The first parent permitted set (first row from the bottom) corresponds to the output on the left.

5 Discussion

We have shown that pattern memorization in threshold-linear networks can be viewed in terms of permitted sets of neurons, i.e. sets of neurons that can be coactive at a steady state. According to this definition, the memories are stored by the synaptic weights, independently of the inputs. Hence, this concept of memory does not suffer from input-dependence, as would be the case for a definition of memory based on the fixed points of the dynamics. Pattern retrieval is strongly constrained by the input. A typical input will not allow for the retrieval of arbitrary stored permitted sets. This comes from the fact that multistability is not just dependent on the existence of forbidden sets, but also on the input (Theorem 2). For example, in the ring network, positive input will always retrieve permitted sets consisting of a group of contiguous neurons, but not any of the spurious permitted sets (Figure 1).
Generally, multistability in the ring network is only possible when more than a single neuron is excited. Notice that threshold-linear networks can behave as traditional attractor networks when the inputs are represented as initial conditions of the dynamics. For example, by fixing b = 1 and initializing a copositive network with some input, the permitted sets unequivocally determine the stable fixed points. Thus, in this case, the notion of permitted sets is no different from fixed-point attractors. However, the hierarchical grouping of permitted sets (Theorem 4) becomes irrelevant, since there can be only one attractive fixed point per hierarchical group defined by a parent permitted set.

The fact that no permitted set can have a forbidden subset represents a constraint on the possible computations of symmetric networks. However, this constraint does not have to be viewed as an undesired limitation. On the contrary, being aware of this constraint may lead to a deeper understanding of learning algorithms and representations for constraint satisfaction problems. We are reminded of the history of perceptrons, where the insight that they can only solve linearly separable classification problems led to the invention of multilayer perceptrons and backpropagation. In a similar way, grouping problems that do not obey the natural hierarchy inherent in symmetric networks might necessitate the introduction of hidden neurons to realize the right geometry. For the interested reader, see also [8] for a simple procedure for storing a given family of possibly overlapping patterns as permitted sets.

References

[1] J. J. Hopfield. Neurons with graded response have collective properties like those of two-state neurons. Proc. Natl. Acad. Sci. USA, 81:3088-3092, 1984.
[2] M.A. Cohen and S. Grossberg. Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Transactions on Systems, Man and Cybernetics, 13:288-307, 1983.
[3] Richard H.R. Hahnloser, Rahul Sarpeshkar, Misha Mahowald, Rodney J. Douglas, and Sebastian Seung. Digital selection and analog amplification coexist in a cortex-inspired silicon circuit. Nature, 405:947-951, 2000.
[4] R.A. Horn and C.R. Johnson. Matrix Analysis. Cambridge University Press, 1985.
[5] J. Feng and K.P. Hadeler. Qualitative behaviour of some simple networks. J. Phys. A, 29:5019-5033, 1996.
[6] R. Ben-Yishai, R. Lev Bar-Or, and H. Sompolinsky. Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. USA, 92:3844-3848, 1995.
[7] R.J. Douglas, C. Koch, M.A. Mahowald, K.A.C. Martin, and H. Suarez. Recurrent excitation in neocortical circuits. Science, 269:981-985, 1995.
[8] Xiaohui Xie, Richard H.R. Hahnloser, and Sebastian Seung. Learning winner-take-all competition between groups of neurons in lateral inhibitory networks. In Proceedings of NIPS 2001 - Neural Information Processing Systems: Natural and Synthetic, 2001.
|
2000
|
105
|
1,760
|
Competition and Arbors in Ocular Dominance

Peter Dayan
Gatsby Computational Neuroscience Unit, UCL
17 Queen Square, London, England, WC1N 3AR.
dayan@gatsby.ucl.ac.uk

Abstract

Hebbian and competitive Hebbian algorithms are almost ubiquitous in modeling pattern formation in cortical development. We analyse in theoretical detail a particular model (adapted from Piepenbrock & Obermayer, 1999) for the development of 1d stripe-like patterns, which places competitive and interactive cortical influences, and free and restricted initial arborisation, onto a common footing.

1 Introduction

Cats, many species of monkeys, and humans exhibit ocular dominance stripes, which are alternating areas of primary visual cortex devoted to input from (the thalamic relay associated with) just one or the other eye (see Erwin et al, 1995; Miller, 1996; Swindale, 1996 for reviews of theory and data). These well-known fingerprint patterns have been a seductive target for models of cortical pattern formation because of the mix of competition and cooperation they suggest. A wealth of synaptic adaptation algorithms has been suggested to account for them (and also the concomitant refinement of the topography of the map between the eyes and the cortex), many of which are based on forms of Hebbian learning. Critical issues for the models are the degree of correlation between inputs from the eyes, the nature of the initial arborisation of the axonal inputs, the degree and form of cortical competition, and the nature of synaptic saturation (preventing weights from changing sign or getting too large) and normalisation (allowing cortical and/or thalamic cells to support only a certain total synaptic weight). Different models show different effects of these parameters as to whether ocular dominance should form at all, and, if it does, then what determines the widths of the stripes, which is the main experimental observable.
Although particular classes of models excite fervid criticism from the experimental community, it is to be hoped that the general principles of competitive and cooperative pattern formation that underlie them will remain relevant. To this end we seek models in which we can understand the interactions amongst the various issues above. Piepenbrock & Obermayer (1999) suggested an interesting model in which varying a single parameter spans a spectrum from cortical competition to cooperation. However, the nature of competition in their model makes it hard to predict the outcome of adaptation completely, except in some special cases. In this paper, we suggest a slightly different model of competition which makes the analysis tractable, and simultaneously generalise the model to consider an additional spectrum between flat and peaked arborisation.

2 The Model

Figure 1 depicts our model. It is based on the competitive model of Piepenbrock & Obermayer (1999), who developed it in order to explore a continuum between competitive and linear cortical interactions. We use a slightly different competition mechanism and also extend the model with an arbor function (as in Miller et al, 1989).

Figure 1: Competitive ocular dominance model. A) Left (L) and right (R) input units (with activities u^L(b) and u^R(b) at the same location b in input space) project through weights W^L(a,b) and W^R(a,b) and a restricted-topography arbor function A(a,b) (B) to an output layer, which is subject to lateral competitive interactions. C) Stable weight patterns W(a,b) showing ocular dominance. D) (left) difference W⁻ = W^R − W^L in the connections from the right and left eyes; (right) sum of the difference across b, showing the net ocularity for each a. Here, σ_A = 0.2, σ_I = 0.08, σ_u = 0.075, β = 10, γ = 0.95, n = 3. There are N = 100 units in each input layer and the output layer. Circular (toroidal) boundary conditions are used with b ∈ [0, 1).

The model has two input layers (representing input from the thalamus from left 'L' and right 'R' eyes), each containing N units, laid out in a single spatial dimension. These connect to an output layer (layer IV of area V1), also with N units and also laid out in a single spatial dimension. We use a continuum approximation, labeling the weights W^L(a,b) and W^R(a,b). An arbor function, A(a,b), represents the multiplicity of each such connection (an example is given in figure 1B). The total strengths of the connections from b to a are the products W^L(a,b)A(a,b) and W^R(a,b)A(a,b).

Four characteristics define the model: the arbor function; the statistics of the input; the mapping from input to output; and the rule by which the weights change. The arbor function A(a,b) specifies the basic topography of the map at the time that the pattern of synaptic growth is being established. We consider A(a,b) ∝ e^{−(a−b)²/2σ_A²}, where the parameter σ_A specifies its width (figure 1B). The two ends of the spectrum for the arbor are flat, when A(a,b) = α is constant (σ_A = ∞), and rigid or punctate, when A(a,b) ∝ δ(a−b) (σ_A = 0), so that input cells are mapped only to their topographically matched cells in the cortex.

The second component of the model is the input. Since the model is non-linear, pattern formation is a function of aspects of the input in addition to the two-point correlations between input units that drive development of standard, non-competitive, Hebbian models.
We follow Piepenbrock & Obermayer (1999) and consider highly spatially simplified input activities at location b in the left (u^L(b)) and right (u^R(b)) projections, reflecting just a single Gaussian bump (of width σ_u) which is stronger, to the tune of γ, in (a randomly chosen) one of the input projections than the other:

u^L(b) = 0.5 (1 + zγ) e^{−(b−ξ)²/2σ_u²},   u^R(b) = 0.5 (1 − zγ) e^{−(b−ξ)²/2σ_u²},   (1)

where ξ ∈ [0,1) is the randomly chosen input location, and z is −1 or 1 (with probability 0.5 each) and determines whether the input is stronger in the right or the left projection. 0 ≤ γ ≤ 1 governs the weakness of the correlations between the projections.

The third component of the model is the way that the input activities and the weights conspire to form output activities. This happens in linear (l), competitive (c) and interactive (i) steps:

l:  v(a) = ∫db A(a,b) (W^L(a,b) u^L(b) + W^R(a,b) u^R(b)),   (2)

c:  v^c(a) = (v(a))^β / ∫da' (v(a'))^β,    i:  v^i(a) = ∫da' I(a,a') v^c(a').   (3)

Weights, arbor, and input and output activities are all positive. In equation 3c, β ≥ 1 is a parameter governing the strength of competition between the cortical cells. As β → ∞, the activation process becomes more strongly competitive, ultimately having a winner-takes-all effect as in the standard self-organising map. This form of competition makes it possible to perform analyses of pattern formation that are hard for the model of Piepenbrock & Obermayer (1999). A natural form for the cortical interactions of equation 3i is the purely positive Gaussian I(a,a') = e^{−(a−a')²/2σ_I²}.

The fourth component of the model is the weight adaptation rule, which involves the Hebbian correlation between input and output activities, averaged over input patterns ξ, z. The weights are constrained to W(a,b) ∈ [0,1], and also multiplicatively normalised so that ∫db A(a,b)(W^L(a,b) + W^R(a,b)) = n, for all a:

W^L(a,b) → W^L(a,b) + ε (⟨v^i(a) u^L(b)⟩_{ξz} − Λ(a) W^L(a,b))   (4)

(and similarly for W^R), where Λ(a) = Λ(a)(W^L, W^R) is chosen to enforce the normalisation.
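The three activation steps of equations 2 and 3 are easy to state numerically. The following sketch discretizes a and b on a periodic grid and uses the Figure 1 parameter values; the initial weight width and the particular input draw are our own choices:

```python
import numpy as np

# Parameters from Figure 1 of the text.
N = 100
sigma_A, sigma_I, sigma_u, beta, gamma = 0.2, 0.08, 0.075, 10.0, 0.95
a = np.linspace(0, 1, N, endpoint=False)
b = np.linspace(0, 1, N, endpoint=False)
da = 1.0 / N

def ring_dist(u, v):
    """Distance on [0, 1) with circular (toroidal) boundary conditions."""
    d = np.abs(u[:, None] - v[None, :])
    return np.minimum(d, 1 - d)

A = np.exp(-ring_dist(a, b)**2 / (2 * sigma_A**2))        # arbor function
W_L = W_R = np.exp(-ring_dist(a, b)**2 / (2 * 0.2**2))    # Gaussian weights (width arbitrary here)

# One input pattern (equation 1): a bump at xi, stronger in one eye.
rng = np.random.default_rng(0)
xi, z = rng.uniform(), rng.choice([-1, 1])
bump = np.exp(-ring_dist(b, np.array([xi]))[:, 0]**2 / (2 * sigma_u**2))
u_L = 0.5 * (1 + z * gamma) * bump
u_R = 0.5 * (1 - z * gamma) * bump

# l, c, i steps (equations 2 and 3).
v = (A * (W_L * u_L[None, :] + W_R * u_R[None, :])).sum(axis=1) * da  # linear
vc = v**beta / ((v**beta).sum() * da)                                 # competitive; integrates to 1
I_mat = np.exp(-ring_dist(a, a)**2 / (2 * sigma_I**2))
vi = (I_mat * vc[None, :]).sum(axis=1) * da                           # interactive
```

By construction the competitive step normalises the output, so the discrete integral of v^c is exactly 1.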
The initial values for the weights are W^{L,R} = w̄ e^{−(a−b)²/2σ_w²} + η δW^{L,R}, where w̄ is chosen to satisfy the normalisation constraints, η is small, and δW^L(a,b) and δW^R(a,b) are random perturbations constrained so that normalisation is still satisfied. Values of σ_w < ∞ can emerge as equilibrium values of the weights if there is sufficient competition (sufficiently large β) or a restricted arbor (σ_A < ∞).

3 Pattern Formation

We analyse pattern formation in the standard manner, finding the equilibrium points (which requires solving a non-linear equation), linearising about them, and finding which linear mode grows the fastest. By symmetry, the system separates into two modes: one involving the sum of the weight perturbations, δW⁺ = δW^R + δW^L, which governs the precision of the topography of the final mapping, and one involving the difference, δW⁻ = δW^R − δW^L, which governs ocular dominance. The development of ocular dominance requires that a mode with δW⁻(a,b) ≠ 0 grows, for which each output cell has weights of only one sign (either positive or negative). The stripe width is determined by changes in this sign across the output layer. Figure 1C,D show the sort of patterns for which we would like to account.

Equilibrium solution. The equilibrium values of the weights can be found by solving (5) for the Λ⁺ determined such that the normalisation constraint ∫db W^L(a,b) + W^R(a,b) = n is satisfied for all a. v(a) is a non-linear function of the weights; however, the simple form of the inputs means that at least one set of equilibrium values of W^L(a,b) and W^R(a,b) are the same, W^L(a,b) = w̄ e^{−(a−b)²/2σ_w²}, for a particular width σ_w that depends on I = 1/σ_I², A = 1/σ_A², U = 1/σ_u² and β according to a simple quadratic equation. We assume that w̄ < 1, so the weights do not reach their upper saturating limit, and this implies that w̄ = (n/2)√((A+W)/π), where W = 1/σ_w².
The quadratic equation governing the equilibrium width can be derived by postulating Gaussian weights, finding successively the values of v(a), v^c(a) and v^i(a) of equations 2 and 3, calculating ⟨v^i(a) u^L(b)⟩_{ξz}, and finding a consistency condition that W must satisfy in order for W^L(a,b) → W^L(a,b) in equation 4. The result is

((β+1)I + βU) W² + (A((β+1)I + βU) − (β−1)UI) W − βAIU = 0.   (6)

Figure 2 shows how the resulting physically realisable (W > 0) equilibrium value of σ_w depends on β, σ_A and σ_I, varying each in turn about the single set of values in figure 1. Figure 2A shows that the width rapidly asymptotes as β grows, and that it only gets large as the arbor function gets large for β near 1. Figure 2B shows this in another way. For β = 1 (the dotted line), which quite closely parallels the non-competitive case of Miller et al (1989), σ_w grows roughly like the square root of σ_A as the arborisation gets flatter. For any β > 1, one equilibrium value of σ_w has a finite asymptote with σ_A. For absolutely flat topography (σ_A = ∞) and β > 1, there are actually two equilibrium values for σ_w: one with σ_w = ∞, i.e. flat weights; the other with σ_w taking values such as the asymptotic values for the dotted and solid lines in figure 2B.

Figure 2: Log-log plots of the equilibrium values of σ_w in the case of multiplicative normalisation. Solid lines are based on parameters as in figure 1 (σ_A = 0.2, σ_I = 0.08, σ_u = 0.075, β = 10). A) σ_w as a function of β for σ_A = 0.2 (solid), σ_A = 2.0 (dotted) and σ_A = 0.0001 (dashed). B) σ_w as a function of σ_A for β = 10 (solid), β = 1.25 (dashed) and β = 1.0 (dotted). C) σ_w as a function of σ_I. Other parameters as for the solid lines.
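Equation 6 can be solved directly for the physically realisable equilibrium width. A small sketch, taking W = 1/σ_w² consistently with the definitions of I, A and U in the text:

```python
import math

# Parameters from Figure 1 of the text.
beta = 10.0
sigma_A, sigma_I, sigma_u = 0.2, 0.08, 0.075
A = 1 / sigma_A**2
I = 1 / sigma_I**2
U = 1 / sigma_u**2

# Quadratic (6) in W = 1/sigma_w^2:
#   ((beta+1)I + beta*U) W^2
# + (A((beta+1)I + beta*U) - (beta-1)*U*I) W
# - beta*A*I*U = 0
a_coef = (beta + 1) * I + beta * U
b_coef = A * ((beta + 1) * I + beta * U) - (beta - 1) * U * I
c_coef = -beta * A * I * U

disc = b_coef**2 - 4 * a_coef * c_coef
W_pos = (-b_coef + math.sqrt(disc)) / (2 * a_coef)  # physically realisable root, W > 0
W_neg = (-b_coef - math.sqrt(disc)) / (2 * a_coef)  # discarded negative root
sigma_w = 1 / math.sqrt(W_pos)
```

Because the constant term −βAIU is negative while the leading coefficient is positive, the product of the two roots is negative, so there is always exactly one positive root W.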
The sum mode. The update equation for (normalised) perturbations to the sum mode is

δW⁺(a,b) → (1 − εΛ⁺) δW⁺(a,b) + ε(β/2) ∬ da₁db₁ O(a,b,a₁,b₁) δW⁺(a₁,b₁) − εΛ'(a) W⁺(a,b),   (7)

where the operator O = O¹ − O² is defined by averaging over ξ with z = 1, γ = 1:

O¹(a,b,a₁,b₁) = ⟨∫da₂ I(a,a₂) v^c(a₂) (δ(a₂−a₁)/v(a₁)) A(a₁,b₁) u^R(b₁) u^R(b)⟩,   (8)

O²(a,b,a₁,b₁) = ⟨∫da₂ I(a,a₂) v^c(a₂) (v^c(a₁)/v(a₁)) A(a₁,b₁) u^R(b₁) u^R(b)⟩,   (9)

where, for convenience, we have hidden the dependence of v(a) and v^c(a) on ξ and z. Here, the values of Λ⁺ and

Λ'(a) = β ∭ db da₁ db₁ A(a,b) O(a,b,a₁,b₁) δW⁺(a₁,b₁) / (2n)   (10)

come from the normalisation condition. The value of Λ⁺ is determined by W⁺(a,b) and not by δW⁺(a₁,b₁). Except in the special case that σ_A = ∞, the term εΛ'(a)W⁺(a,b) generally keeps the equilibrium solution stable. We consider the full eigenfunctions of O(a,b,a₁,b₁) below. However, the case that Piepenbrock & Obermayer (1999) studied, of a flat arbor function (σ_A = ∞), turns out to be special, admitting two equilibrium solutions, one flat and one with topography, whose stability depends on β. For σ_A < ∞, the only Gaussian equilibrium solution for the weights has a refined topography (as one might expect), and this is stable. This width depends on the parameters in the way shown in equation 6 and figure 2; in particular, it reaches a non-zero asymptote even as β gets very large.

The difference mode. The sum mode controls the refinement of topography, whereas the difference mode controls the development and nature of ocular dominance. The equilibrium value of W⁻(a,b) is always 0, by symmetry, and the linearised difference equation for the mode is

δW⁻(a,b) → (1 − εΛ⁺) δW⁻(a,b) + ε(βγ²/2) ∬ da₁db₁ O(a,b,a₁,b₁) δW⁻(a₁,b₁),   (11)

Figure 3: Eigenfunctions and eigenvalues of O¹ (left block), O² (centre block), and the theoretical and empirical approximations to O (right columns).
Here, as in equation 12, k is the frequency of alternation of ocularity across the output (which is integral for a finite system); n is the order of the Hermite polynomial. The number on top of each eigenfunction is the associated eigenvalue. Parameters are as in figure 1 with γ = 1.

which is almost the same as equation 7 (with the same operator O), except that the multiplier for the integral is βγ²/2 rather than β/2. Since γ < 1, the eigenvalues for the difference mode are therefore all less than those for the sum mode, and by the same fraction. The multiplicative decay term εΛ⁺δW⁻(a,b) uses the same Λ⁺ as equation 7, whose value is determined exclusively by properties of W⁺(a,b); but the non-multiplicative term εΛ'(a)W⁺(a,b) is absent. Note that the equilibrium values of the weights (controlled by σ_w) affect the operator O, and hence its eigenfunctions and eigenvalues.

Provided that the arbor and the initial values of the weights are not both flat (σ_A ≠ ∞ or σ_w ≠ ∞), the principal eigenfunctions of O¹ and O² have the general form (12), where P_n(r,k) is a polynomial (related to a Hermite polynomial) of degree n in r whose coefficients depend on k. Here k controls the periodicity in the projective field of each input cell b to the output cells, and ultimately the periodicity of any ocular dominance stripes that might form. The remaining terms control the receptive fields of the output cells. Operator O² has zero eigenvalues for the polynomials of degree n > 0. The expressions for the coefficients of the polynomials and the non-zero eigenvalues of O¹ and O² are rather complicated.

Figure 3 shows an example of this analysis. The left 4 × 3 block shows eigenfunctions and eigenvalues of O¹ for k = 0...5 and n = 0, 1, 2; the middle 4 × 3 block, the equivalent eigenfunctions and eigenvalues of O². The eigenvalues come essentially from a Gaussian, whose standard deviation is smaller for O².
To a crude first approximation, therefore, the eigenvalues of O resemble the difference of two Gaussians in k, and so have a peak at a non-zero value of k, i.e. a finite ocular dominance periodicity. However, this approximation is too crude. Although the eigenfunctions of O¹ and O² shown in figure 3 look almost identical, they are, in fact, subtly different, since O¹ and O² do not commute (except for flat or rigid topography). The similarity between the eigenfunctions makes it possible to approximate the eigenfunctions of O very closely by expanding those of O² in terms of O¹ (or vice versa). This only requires knowing the overlap between the eigenfunctions, which can be calculated analytically from their form in equation 12. Expanding for n ≤ 2 leads to the approximate eigenfunctions and eigenvalues for O shown in the penultimate column on the right of figure 3. The difference, for instance, between the eigenfunction of O for k = 3 and those for O¹ and O² is striking, considering the similarity between the latter two. For comparison, the farthest right column shows empirically calculated eigenfunctions and eigenvalues of O (using a 50 × 50 grid).

Figure 4: A) The constraint term Λ⁺ (dotted line) and the ocular dominance eigenvalues e(k) of βγ²O/2 (solid line: γ = 1; dotted line: γ = 0.5) as a function of σ_I, where k is the stripe frequency associated with the maximum eigenvalue. For σ_I too large, the ocular dominance eigenfunction no longer dominates. The star and hexagon show the maximum values of σ_I such that ocular dominance can form in each case. The scale in (A) is essentially arbitrary. B) Stripe frequency k associated with the largest eigenvalue as a function of σ_I. The star and hexagon are the same as in (A), showing that the critical preferred stripe frequency is greater for higher correlations between the inputs (lower γ). Only integer values are considered, hence the apparent aliasing.
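The crude difference-of-Gaussians picture of the spectrum of O can be illustrated directly. The widths below are made-up numbers (the narrower Gaussian standing in for the spectrum of O², per the text), chosen only to show that the peak falls at a non-zero stripe frequency k:

```python
import numpy as np

# Illustration: eigenvalues of O approximated as a difference of two
# Gaussians in k, the one from O^2 being narrower. Widths are invented.
k = np.arange(0, 11)
s1, s2 = 4.0, 2.0   # std of the O^1 spectrum is wider than that of O^2
eig = np.exp(-k**2 / (2 * s1**2)) - np.exp(-k**2 / (2 * s2**2))
k_star = int(k[np.argmax(eig)])   # preferred stripe frequency
```

At k = 0 the two Gaussians cancel, and the difference is positive for k > 0, so the maximum always sits at a non-zero integer frequency, mirroring the finite ocular dominance periodicity described above.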
Putting δW⁻ back in terms of ocular dominance, we require that eigenmodes of O resembling the modes with n = 0 should grow more strongly than the normalisation makes them shrink; then the value of k associated with the largest eigenvalue will be the stripe frequency that should be expected to dominate. For the parameters of figure 3, the case with k = 3 has the largest eigenvalue, and exactly this leads to the outcome of figure 1C,D.

4 Results

We can now predict the outcome of development for any set of parameters. First, the analysis of the behavior of the sum mode (including, if necessary, the point about multiple equilibria for flat initial topography) allows a prediction of the equilibrium value of σ_w, which indicates the degree of topographic refinement. Second, this value of σ_w can be used to calculate the value of the normalisation parameter Λ⁺ that affects the growth of δW⁺ and δW⁻. There is then a barrier of 2Λ⁺/βγ² that the eigenvalues of O must surmount for a solution that is not completely binocular to develop. Third, if the peak eigenvalue of O is indeed sufficiently large that ocular dominance develops, then the favoured periodicity is set by the value of k associated with this eigenvalue. Of course, if many eigenfunctions have similarly large eigenvalues, then slightly different stripe periodicities may be observed depending on the initial conditions. The solid line in figure 4A shows the largest eigenvalue of βγ²O/2 as a function of the width of the cortical interactions σ_I, for γ = 1, the value of σ_w specified through the equilibrium analysis, and values of the other parameters as in figure 1. The dashed line shows Λ⁺, which comes from the normalisation. The largest value of σ_I for which ocular dominance still forms is indicated by the star. For γ = 0.5, the eigenvalues are reduced by a factor of γ² = 0.25, and so the critical value of σ_I (shown by the hexagram) is reduced.
Figure 4B shows the frequency of the stripes associated with the largest eigenvalue. The smaller σ_I, the greater the frequency of the stripes. This line is jagged because only integers are acceptable as stripe frequencies. Figure 5 shows the consequences of such relationships slightly differently. Some models consider the possibility that σ_I might change during development from a large to a small value. If the frequency of the stripes is most strongly determined by the frequency that grows fastest when σ_I first becomes sufficiently small that stripes grow, we can analyse plots such as those in figure 4 to determine the outcome of development. The figures in the top row show the largest values of σ_I for which ocular dominance can develop; the bottom plots show the stripe frequencies associated with these critical values of σ_I (like the stars and hexagons in figure 4), in both cases as a function of γ. The columns are for successively larger values of β (β = 1.5, β = 10, β = 100); within each plot there are three lines, for σ_A = 0.0001 (dotted), σ_A = 0.2 (solid), and σ_A = 2.0 (dashed). Where no value of σ_I permits ocular dominance to form, no line is shown. From the plots, we can see that the more similar the inputs (the smaller γ), or the weaker the competition (the smaller β), the harder it is for ocular dominance to form.

Figure 5: First three figures: maximal values of σ_I for which ocular dominance will develop, as a function of γ. All other parameters as in figure 1, except that σ_A = 0.2 (solid), σ_A = 2.0 (dashed), σ_A = 0.0001 (dotted). Last three figures: value of the stripe frequency k associated with the maximal eigenvalue for parameters as in the left three plots at the critical value of σ_I.
However, if ocular dominance does form, then the width of the stripes depends only weakly on the degree of competition, and slightly more strongly on the width of the arbors. The narrower the arbor, the larger the frequency of the stripes. For rigid topography, as σ_A → 0, the critical value of σ_I depends roughly linearly on γ. We analyse this case in more detail below. Note that the stripe width predicted by the linear analysis does not depend on the correlation between the input projections unless other parameters (such as σ_I) change, although ocular dominance might not develop for some values of the parameters.

5 Discussion

The analytical tractability of the model makes it possible to understand in depth the interaction between cooperation, competition, correlation and arborisation. Further exploration of this complex space of interactions is obviously required. Simulations across a range of parameters have shown that the analysis makes correct predictions, although we have only analysed linear pattern formation. Non-linear stability turns out to play a highly significant role in higher dimensions (such as the 2d ocular dominance stripe pattern), where a continuum of eigenmodes share the same eigenvalues (Bressloff & Cowan, personal communication), and also in 1d models involving very strong competition (β → ∞) like the self-organising map (Kohonen, 1995).

Acknowledgements

Funded by the Gatsby Charitable Foundation. I am very grateful to Larry Abbott, Ed Erwin, Geoff Goodhill, John Hertz, Ken Miller, Klaus Obermayer, Read Montague, Nick Swindale, Peter Wiesing and David Willshaw for discussions, and to Zhaoping Li for making this paper possible.

References

Erwin, E, Obermayer, K & Schulten, K (1995) Neural Computation 7:425-468.
Kohonen, T (1995) Self-Organizing Maps. Berlin, New York: Springer-Verlag.
Miller, KD (1996) In E Domany, JL van Hemmen & K Schulten, eds, Models of Neural Networks, III. New York: Springer-Verlag, 55-78.
Miller, KD, Keller, JB & Stryker, MP (1989) Science 245:605-615.
Piepenbrock, C & Obermayer, K (1999) In MS Kearns, SA Solla & DA Cohn, eds, Advances in Neural Information Processing Systems, 11. Cambridge, MA: MIT Press.
Swindale, NV (1996) Network: Computation in Neural Systems 7:161-247.
|
2000
|
106
|
1,761
|
Learning Switching Linear Models of Human Motion

Vladimir Pavlovic and James M. Rehg
Compaq - Cambridge Research Lab
Cambridge, MA 02139
{vladimir.pavlovic,jim.rehg}@compaq.com

John MacCormick
Compaq - Systems Research Center
Palo Alto, CA 94301
john.maccormick@compaq.com

Abstract

The human figure exhibits complex and rich dynamic behavior that is both nonlinear and time-varying. Effective models of human dynamics can be learned from motion capture data using switching linear dynamic system (SLDS) models. We present results for human motion synthesis, classification, and visual tracking using learned SLDS models. Since exact inference in SLDS is intractable, we present three approximate inference algorithms and compare their performance. In particular, a new variational inference algorithm is obtained by casting the SLDS model as a Dynamic Bayesian Network. Classification experiments show the superiority of SLDS over conventional HMMs for our problem domain.

1 Introduction

The human figure exhibits complex and rich dynamic behavior. Dynamics are essential to the classification of human motion (e.g. gesture recognition) as well as to the synthesis of realistic figure motion for computer graphics. In visual tracking applications, dynamics can provide a powerful cue in the presence of occlusions and measurement noise. Although the use of kinematic models in figure motion analysis is now commonplace, dynamic models have received relatively little attention. The kinematics of the figure specify its degrees of freedom (e.g. joint angles and torso pose) and define a state space. A stochastic dynamic model imposes additional structure on the state space by specifying a probability distribution over state trajectories. We are interested in learning dynamic models from motion capture data, which provides a training corpus of observed state space trajectories. Previous work by a number of authors has applied Hidden Markov Models (HMMs) to this problem.
More recently, switching linear dynamic system (SLDS) models have been studied in [5, 12]. In SLDS models, the Markov process controls an underlying linear dynamic system, rather than a fixed Gaussian measurement model.¹ By mapping discrete hidden states to piecewise linear measurement models, the SLDS framework has potentially greater descriptive power than an HMM. Offsetting this advantage is the fact that exact inference in SLDS is intractable. Approximate inference algorithms are required, which in turn complicates SLDS learning. In this paper we present a framework for SLDS learning and apply it to figure motion modeling. We derive three different approximate inference schemes: Viterbi [13], variational, and GPB2 [1]. We apply learned motion models to three tasks: classification, motion synthesis, and visual tracking. Our results include an empirical comparison between SLDS and HMM models on classification and one-step-ahead prediction tasks. The SLDS model class consistently outperforms standard HMMs even on fairly simple motion sequences. Our results suggest that SLDS models are a promising tool for figure motion analysis, and could play a key role in applications such as gesture recognition, visual surveillance, and computer animation. In addition, this paper provides a summary of approximate inference techniques which is lacking in the previous literature on SLDS. Furthermore, our variational inference algorithm is novel, and it provides another example of the benefit of interpreting classical statistical models as (mixed-state) graphical models.

¹ SLDS models are sometimes referred to as jump-linear or conditional Gaussian models, and have been studied in the controls and econometrics literatures.

Figure 1: (a) SLDS model as a Dynamic Bayesian Network: s is the discrete switch state, x is the continuous state, and y is its observation. (b) Factorization of SLDS into a decoupled HMM and LDS.
2 Switching Linear Dynamic System Model

A switching linear dynamic system (SLDS) model describes the dynamics of a complex, nonlinear physical process by switching among a set of linear dynamic models over time. The system can be described using the following set of state-space equations:

x_{t+1} = A(s_{t+1}) x_t + v_{t+1}(s_{t+1}),   y_t = C x_t + w_t,   Pr(s_{t+1} = i | s_t = j) = Π(i,j),

for the plant and the switching model. The meaning of the variables is as follows: x_t ∈ R^N denotes the hidden state of the LDS, and v_t is the state noise process. Similarly, y_t ∈ R^M is the observed measurement and w_t is the measurement noise. Parameters A and C are the typical LDS parameters: the state transition matrix and the observation matrix, respectively. We assume that the LDS models a Gauss-Markov process with i.i.d. Gaussian noise processes v_t(s_t) ~ N(0, Q(s_t)). The switching model is a discrete first-order Markov process with state variables s_t from a set of S states. The switching model is defined by the state transition matrix Π and an initial state distribution π_0. The LDS and switching process are coupled through the dependence of the LDS parameters A and Q on the switching state s_t: A(s_t = i) = A_i, Q(s_t = i) = Q_i.

The complex state-space representation is equivalently depicted by the DBN dependency graph in Figure 1(a). The dependency graph implies that the joint distribution P(Y_T, X_T, S_T) over the variables of the SLDS can be written as

Pr(s_0) ∏_{t=1}^{T-1} Pr(s_t | s_{t-1}) Pr(x_0 | s_0) ∏_{t=1}^{T-1} Pr(x_t | x_{t-1}, s_t) ∏_{t=0}^{T-1} Pr(y_t | x_t),   (1)

where Y_T, X_T, and S_T denote the sequences (of length T) of observations and hidden state variables. From the Gauss-Markov assumption on the LDS and the Markov switching assumption, we can expand Equation 1 into the parameterized joint pdf of the SLDS of duration T. Learning in complex DBNs can be cast as ML learning in general Bayesian networks. The generalized EM algorithm can then be used to find optimal values of the DBN parameters {A, C, Q, R, Π, π_0}.
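Generating data from an SLDS is a direct transcription of the state-space equations above. A minimal sampler with two invented regimes (the matrices are illustrative, not the learned human-motion models from the paper):

```python
import numpy as np

# Minimal SLDS sampler following the state-space equations above.
# The regimes (A_i, Q_i) are invented for illustration only.
rng = np.random.default_rng(1)

S, N, M, T = 2, 2, 1, 100
A = [np.array([[0.99, 0.10], [0.00, 0.99]]),   # regime 0: slow drift
     np.array([[0.90, -0.30], [0.30, 0.90]])]  # regime 1: oscillatory
Q = [0.01 * np.eye(N), 0.05 * np.eye(N)]       # state noise Q(s_t)
C = np.array([[1.0, 0.0]])                     # observation matrix
R = 0.1 * np.eye(M)                            # measurement noise cov
Pi = np.array([[0.95, 0.05],                   # Pi[i, j] = Pr(s_{t+1}=i | s_t=j)
               [0.05, 0.95]])
pi0 = np.array([0.5, 0.5])

s = rng.choice(S, p=pi0)
x = np.zeros(N)
states, ys = [], []
for t in range(T):
    if t > 0:
        s = rng.choice(S, p=Pi[:, s])          # sample the next switch state
        x = A[s] @ x + rng.multivariate_normal(np.zeros(N), Q[s])
    y = C @ x + rng.multivariate_normal(np.zeros(M), R)
    states.append(int(s))
    ys.append(y)
ys = np.array(ys)
```

Note that, per the convention Π(i,j) = Pr(s_{t+1} = i | s_t = j), the columns of Pi are the next-state distributions.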
Inference, which is addressed in the next section, is the most complex step in SLDS learning. Given the sufficient statistics from the inference phase, the parameter update equations in the maximization (M) step are easily obtained by maximizing the expected log of Equation 1 with respect to the LDS and MC parameters (see [13]).

3 Inference in SLDS

The goal of inference in complex DBNs is to estimate the posterior P(X_T, S_T | Y_T). If there were no switching dynamics, inference would be straightforward: we could infer X_T from Y_T using LDS inference. However, the presence of switching dynamics makes exact inference exponentially hard, as the distribution of the system state at time t is a mixture of S^t Gaussians. Tractable, approximate inference algorithms are therefore required. We describe three methods: Viterbi, variational, and generalized Pseudo Bayesian.

3.1 Approximate Viterbi Inference

The Viterbi approximation approach finds the most likely sequence of switching states S*_T for a given observation sequence Y_T. Namely, the desired posterior P(X_T, S_T | Y_T) is approximated by its mode Pr(X_T | S*_T, Y_T). It is well known how to apply Viterbi inference to discrete-state hidden Markov models and continuous-state Gauss-Markov models. Here we review an algorithm for approximate Viterbi inference in SLDSs presented in [13]. We have shown in [13] that one can use a recursive procedure to find the best switching sequence S*_T = argmax_{S_T} Pr(S_T | Y_T). At the heart of this recursion lies the approximation of the partial probability of the switching sequence and observations up to time t,

J_{t,i} = max_{S_{t-1}} Pr(S_{t-1}, s_t = i, Y_t)
        ≈ max_j { Pr(y_t | s_t = i, s_{t-1} = j, S*_{t-2}(j), Y_{t-1}) Pr(s_t = i | s_{t-1} = j) J_{t-1,j} }.   (2)

The two scaling components are the likelihood associated with the transition j → i from t-1 to t, and the probability of the discrete SLDS switching from j to i.
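Once the per-transition likelihoods are available (in the full algorithm they come from Kalman updates), the recursion of Equation 2 is a standard Viterbi pass in the log domain. The following is an illustrative sketch with the likelihood arrays assumed precomputed; all array and function names are my own.

```python
import numpy as np

def viterbi_switching(log_lik0, log_lik, log_Pi):
    """Approximate Viterbi recursion over switching states (Equation 2,
    log domain).

    log_lik0[i]      : log [ pi_0(i) * Pr(y_0 | s_0 = i) ]
    log_lik[t, i, j] : log Pr(y_{t+1} | s_{t+1}=i, s_t=j, best history),
                       assumed supplied by per-transition Kalman filtering
    log_Pi[i, j]     : log Pr(s_{t+1}=i | s_t=j)

    Returns the most likely switching sequence as a list of states.
    """
    J = np.asarray(log_lik0, float)              # J_{0,i}
    back = []
    for t in range(log_lik.shape[0]):
        cand = log_lik[t] + log_Pi + J[None, :]  # argument of max_j in Eq. (2)
        back.append(cand.argmax(axis=1))         # best predecessor psi_{t,i}
        J = cand.max(axis=1)                     # updated J_{t+1,i}
    seq = [int(np.argmax(J))]                    # "best" final switching state
    for psi in reversed(back):                   # backtrace
        seq.append(int(psi[seq[-1]]))
    return seq[::-1]
```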
Together, these two factors play the role of a "transition probability", which we denote J_{t|t-1,i,j}. The likelihood term can easily be found using Kalman updates, concurrent with the recursion of Equation 2. See [13] for details. The Viterbi inference algorithm can now be written:

Initialize LDS state estimates x_{0|-1,i} and Σ_{0|-1,i};
Initialize J_{0,i};
for t = 1:T-1
  for i = 1:S
    for j = 1:S
      Predict and filter LDS state estimates x_{t|t,i,j} and Σ_{t|t,i,j};
      Find j → i "transition probability" J_{t|t-1,i,j};
    end
    Find best transition ψ_{t-1,i} into state i;
    Update sequence probabilities J_{t,i} and LDS state estimates x_{t|t,i} and Σ_{t|t,i};
  end
end
Find "best" final switching state i*_{T-1} and backtrace the best switching sequence S*_T;
Do RTS smoothing for S = S*_T;

3.2 Approximate Variational Inference

A general structured variational inference technique for Bayesian networks is described in [8]. Namely, an η-parameterized distribution Q(η) is constructed which is "close" to the desired conditional distribution P but is computationally feasible. In our case we define Q by decoupling the switching and LDS portions of the SLDS as shown in Figure 1(b). The original distribution is factorized into two independent distributions: a hidden Markov model (HMM) Q_S with variational parameters {q_0, ..., q_{T-1}}, and a time-varying LDS Q_X with variational parameters {x̂_0, Â_0, ..., Â_{T-1}, Q̂_0, ..., Q̂_{T-1}}. The optimal values of the variational parameters η are obtained by minimizing the KL divergence w.r.t. η, which yields fixed-point expressions for the optimal variational parameters Q̂_t^{-1}, Â_t, and log q_t(i).   (3)

To obtain the terms Pr(s_t) = Pr(s_t | q_0, ..., q_{T-1}) we use inference in the HMM with output "probabilities" q_t. Similarly, to obtain ⟨x_t⟩ = E[x_t | Y_T] we perform LDS inference in the decoupled time-varying LDS via RTS smoothing. Equation 3, together with the inference solutions in the decoupled models, forms a set of fixed-point equations.
Solution of this fixed-point set is a tractable approximation to the intractable inference of the fully coupled SLDS. The variational inference algorithm for fully coupled SLDSs can now be summarized as:

error = ∞;
Initialize Pr(s_t);
while (KL divergence > maxError)
  Find Q̂_t, Â_t, x̂_0 from Pr(s_t) (Eq. 3);
  Estimate ⟨x_t⟩, ⟨x_t x_t'⟩ and ⟨x_t x_{t-1}'⟩ from Y_T using time-varying LDS inference;
  Find q_t from ⟨x_t⟩, ⟨x_t x_t'⟩ and ⟨x_t x_{t-1}'⟩ (Eq. 3);
  Estimate Pr(s_t) from q_t using HMM inference.
end

The variational parameters in Equation 3 have an intuitive interpretation. The LDS parameters Â_t and Q̂_t^{-1} define the best unimodal representation of the corresponding switching system and are, roughly, averages of the original parameters weighted by the best estimates of the switching states P(s_t). The HMM variational parameters log q_t, on the other hand, measure the agreement of each individual LDS with the data.

3.3 Approximate Generalized Pseudo Bayesian Inference

The Generalized Pseudo Bayesian [1, 9] (GPB) approximation scheme is based on the general idea of "collapsing" a mixture of M^t Gaussians onto a mixture of M^r Gaussians, where r < t (see [12] for a detailed review). While there are several variations on this idea, our focus is the GPB2 algorithm, which maintains a mixture of M² Gaussians over time and can be reformulated to include smoothing as well as filtering. GPB2 is closely related to the Viterbi approximation of Section 3.1. Instead of picking the most likely previous switching state j, we collapse the S Gaussians (one for each possible value of j) down into a single Gaussian. Namely, the state at time t is obtained as

x̂_{t|t,i} = Σ_j x̂_{t|t,i,j} Pr(s_{t-1} = j | s_t = i, Y_t).

Smoothing in GPB2 is unfortunately a more involved process that includes several additional approximations. Details can be found in [12]. Effectively, an RTS smoother can be constructed when an assumption is made that decouples the MC model from the LDS when smoothing the MC states.
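The collapse step at the core of GPB2 is a moment-matching operation: a weighted mixture of Gaussians is replaced by the single Gaussian having the same mean and covariance. A small illustrative sketch (not the authors' code) of merging the per-predecessor estimates x̂_{t|t,i,j}, Σ_{t|t,i,j} into x̂_{t|t,i}, Σ_{t|t,i}:

```python
import numpy as np

def collapse(means, covs, weights):
    """Moment-match a mixture of Gaussians to a single Gaussian.

    means, covs : per-component means and covariances (e.g. the S
                  estimates x_{t|t,i,j}, Sigma_{t|t,i,j} over j);
    weights     : mixture weights, e.g. Pr(s_{t-1}=j | s_t=i, Y_t).
    """
    w = np.asarray(weights, float)
    w = w / w.sum()
    # collapsed mean: weighted average of component means
    mu = sum(wj * m for wj, m in zip(w, means))
    # collapsed covariance: within-component plus between-component spread
    cov = sum(wj * (P + np.outer(m - mu, m - mu))
              for wj, m, P in zip(w, means, covs))
    return mu, cov
```

The between-component term np.outer(m - mu, m - mu) is what distinguishes collapsing from simply averaging the covariances: it accounts for how far apart the component means sit.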
Together with filtering, this results in the following GPB2 algorithm pseudo-code:

Initialize LDS state estimates x̂_{0|-1,i} and Σ_{0|-1,i};
Initialize Pr(s_0 = i) = π_0(i);
for t = 1:T-1
  for i = 1:S
    for j = 1:S
      Predict and filter LDS state estimates x̂_{t|t,i,j}, Σ_{t|t,i,j};
      Find switching state distributions Pr(s_t = i | Y_t), Pr(s_{t-1} = j | s_t = i, Y_t);
    end
    Collapse x̂_{t|t,i,j}, Σ_{t|t,i,j} to x̂_{t|t,i}, Σ_{t|t,i};
  end
  Collapse x̂_{t|t,i} and Σ_{t|t,i} to x̂_{t|t} and Σ_{t|t};
end
Do GPB2 smoothing;

The inference process of GPB2 is more involved than that of the Viterbi or variational approximations. Unlike Viterbi, GPB2 provides soft estimates of the switching states at each time t. Like Viterbi, GPB2 is a local approximation scheme and as such does not guarantee the global optimality inherent in the variational approximation. Some recent work (see [3]) on this type of local approximation in general DBNs has emerged that provides conditions for it to be globally optimal.

4 Previous Work

SLDS models and their equivalents have been studied in statistics, time-series modeling, and target tracking since the early 1970s. See [13, 12] for a review. Ghahramani [6] introduced a DBN framework for learning and approximate inference in one class of SLDS models. His underlying model differs from ours in assuming the presence of S independent, white-noise-driven LDSs whose measurements are selected by the Markov switching process. A switching framework for particle filters applied to dynamics learning is described in [2]. Manifold learning [7] is another approach to constraining the set of allowable trajectories within a high-dimensional state space. An HMM-based approach is described in [4].

5 Experimental Results

The data set for our experiments is a corpus of 18 sequences of six individuals performing walking and jogging. Each sequence was approximately 50 frames in duration. All of the motion was fronto-parallel (i.e., it occurred in a plane parallel to the camera plane, as in Figure 2(c)).
This simplifies data acquisition and kinematic modeling, while self-occlusions and cluttered backgrounds make the tracking problem non-trivial. Our kinematic model had eight DOFs, corresponding to rotations at the knees, hip, and neck (and ignoring the arms). The link lengths were adjusted manually for each person.

The first task we addressed was learning HMM and SLDS models for walking and running. Each of the two motion types was modeled as a one-, two-, or four-state HMM and SLDS model and then combined into a single complex jog-walk model. In addition, each SLDS motion model was assumed to be of either first or second order². Hence, a total of three models (HMM, first-order SLDS, and second-order SLDS) were considered for each cardinality (one, two, or four) of the switching state. HMM models were initially assumed to be fully connected. Their parameters were then learned using standard EM, initialized by k-means clustering. Learned HMM models were used to initialize the switching state segmentations for the SLDS models. The SLDS model parameters (A, Q, R, x_0, Π, π_0) were then re-estimated using EM. The inference in SLDS learning was accomplished using the three approximate methods outlined in Section 3: Viterbi, GPB2, and variational inference.

²Second-order SLDS models imply x_t = A_1(s_t) x_{t-1} + A_2(s_t) x_{t-2}.

(a) One switching state, second-order SLDS. (b) Four switching states, second-order SLDS.

Results of SLDS learning using any of the three approximate inference methods did not produce significantly different models. This can be explained by the fact that the initial segmentations using the HMM and the initial SLDS parameters were all very close to a
locally optimal solution, and all three inference schemes indeed converged to the same or similar posteriors.

(c) KF, frame 7; (d) SLDS, frame 7; (e) SLDS, frame 20; (f) Synthesized walking motion.
Figure 2: (a)-(d) show an example of classification results on mixed walk-jog sequences using models of different order. (e)-(g) compare constant velocity and SLDS trackers, and (h) shows motion synthesis.

We next addressed the classification of unknown motion sequences in order to test the relative performance of inference in HMMs and SLDS. Test sequences of walking and jogging motion were selected randomly and spliced together using B-spline smoothing. Segmentation of the resulting sequences into "walk" and "jog" regimes was accomplished using Viterbi inference in the HMM model, and approximate Viterbi, GPB2, and variational inference under the SLDS model. Estimates of the "best" switching states Pr(s_t) indicated which of the two models was considered to be the source of the corresponding motion segment. Figure 2(a)-(b) shows results for two representative combinations of switching state and linear model orders. In Figure 2(a), the top graph depicts the true sequence of jog-walk motions, followed by the Viterbi, GPB2, variational, and HMM classifications. Each motion type (jog and walk) is modeled using one switching state and a second-order LDS. Figure 2(b) shows the result when the number of switching states is increased to four. The accuracy of classification increases with the number of switching states and the LDS model order. More interesting, however, is that the HMM model consistently yields lower segmentation accuracy than all of the SLDS inference schemes. This is not surprising, since the HMM model does not impose continuity across time in the plant state space (x), which does indeed exist in natural figure motion (joint angles evolve continuously in time). Quantitatively, the three SLDS inference schemes produce very similar results.
Qualitatively, GPB2 produces "soft" state estimates, while the Viterbi scheme does not; variational inference is somewhere in between. In terms of computational complexity, Viterbi seems to be the clear winner.

Our next experiment addressed the use of learned dynamic models in visual tracking. The primary difficulty in visual tracking is that joint angle measurements are not readily available from a sequence of image intensities. We use image templates for each link in the figure model, initialized from the first video frame, to track the figure through template registration [11]. A conventional extended Kalman filter using a constant velocity dynamic model performs poorly on simple walking motion, due to pixel noise and self-occlusions, and fails by frame 7 as shown in Figure 2(c). We employ approximate Viterbi inference in SLDS as a multi-hypothesis predictor that initializes multiple local template searches in the image space. From the S² multiple hypotheses x̂_{t|t-1,i,j} at each time step, we pick the best S hypotheses with the smallest switching cost, as determined by Equation 2. Figure 2(d)-(e) shows the superior performance of the SLDS tracker on the same image sequence. The tracker is well aligned at frame 7 and only starts to drift off by frame 20. This is not terribly surprising, since the SLDS tracker effectively has S (extended) Kalman filters, but it is an encouraging result.

The final experiment simulated walking motion by sampling from a learned SLDS walking model. A stick figure animation obtained by superimposing 50 frames of walking is shown in Figure 2(f). The discrete states used to generate the motion are plotted at the bottom of the figure. The synthesized walk becomes less realistic as the simulation time progresses, due to the lack of global constraints on the trajectories.

6 Conclusions

Dynamic models for human motion can be learned within a Switching Linear Dynamic System (SLDS) framework.
We have derived three approximate inference algorithms for SLDS: Viterbi, GPB2, and variational. Our variational algorithm is novel in the SLDS domain. We show that SLDS classification performance is superior to that of HMMs. We demonstrate that a tracker based on SLDS is more effective than a conventional extended Kalman filter. We show synthesis of natural walking motion by sampling. In future work we will build more complex motion models using a much larger motion capture dataset, which we are currently building. We will also extend the SLDS tracker to more complex measurement models and complex discrete state processes (see [10] for a recent approach).

References
[1] Bar-Shalom and Li, Estimation and tracking: principles, techniques, and software. 1998.
[2] A. Blake, B. North, and M. Isard, "Learning multi-class dynamics," in NIPS '98, 1998.
[3] X. Boyen, N. Friedman, and D. Koller, "Discovering the hidden structure of complex dynamic systems," in Proc. Uncertainty in Artificial Intelligence, 1999.
[4] M. Brand, "An entropic estimator for structure discovery," in NIPS '98, 1998.
[5] C. Bregler, "Learning and recognizing human dynamics in video sequences," in Proc. Int'l Conf. Computer Vision and Pattern Recognition (CVPR), 1997.
[6] Z. Ghahramani and G. E. Hinton, "Switching state-space models." 1998.
[7] N. Howe, M. Leventon, and W. Freeman, "Bayesian reconstruction of 3d human motion from single-camera video," in NIPS '99, 1999.
[8] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul, "An introduction to variational methods for graphical models," in Learning in graphical models, 1998.
[9] C.-J. Kim, "Dynamic linear models with Markov-switching," J. Econometrics, vol. 60, 1994.
[10] U. Lerner, R. Parr, D. Koller, and G. Biswas, "Bayesian fault detection and diagnosis in dynamic systems," in Proc. AAAI, (Austin, TX), 2000.
[11] D. Morris and J. Rehg, "Singularity analysis for articulated object tracking," in CVPR, 1998.
[12] K. P.
Murphy, "Learning switching Kalman-filter models," TR 98-10, Compaq CRL, 1998.
[13] V. Pavlovic, J. M. Rehg, T.-J. Cham, and K. P. Murphy, "A dynamic Bayesian network approach to figure tracking using learned dynamic models," in Proc. Intl. Conf. Computer Vision, 1999.
New Approaches Towards Robust and Adaptive Speech Recognition

Herve Bourlard, Samy Bengio and Katrin Weber
IDIAP, P.O. Box 592, rue du Simplon 4, 1920 Martigny, Switzerland
{bourlard, bengio, weber}@idiap.ch

Abstract

In this paper, we discuss some new research directions in automatic speech recognition (ASR) which somewhat deviate from the usual approaches. More specifically, we will motivate and briefly describe new approaches based on multi-stream and multi-band ASR. These approaches extend the standard hidden Markov model (HMM) based approach by assuming that the different (frequency) channels representing the speech signal are processed by different (independent) "experts", each expert focusing on a different characteristic of the signal, and that the different stream likelihoods (or posteriors) are combined at some (temporal) stage to yield a global recognition output. As a further extension to multi-stream ASR, we will finally introduce a new approach, referred to as HMM2, where the HMM emission probabilities are estimated via state-specific feature-based HMMs responsible for merging the stream information and modeling their possible correlation.

1 Multi-Channel Processing in ASR

Current automatic speech recognition systems are based on (context-dependent or context-independent) phone models described in terms of a sequence of hidden Markov model (HMM) states, where each HMM state is assumed to be characterized by a stationary probability density function. Furthermore, time correlation, and consequently the dynamics of the signal, inside each HMM state is also usually disregarded (although the use of temporal delta and delta-delta features can capture some of this correlation). Consequently, only medium-term dependencies are captured via the topology of the HMM model, while short-term and long-term dependencies are usually very poorly modeled.
Ideally, we want to design a particular HMM able to accommodate multiple time-scale characteristics so that we can capture phonetic properties, as well as syllable structures and (long-term) invariants that are more robust to noise. It is, however, clear that those different time-scale features will also exhibit different levels of stationarity and will require different HMM topologies to capture their dynamics. There are many potential advantages to such a multi-stream approach, including:

1. The definition of a principled way to merge different temporal knowledge sources such as acoustic and visual inputs, even if the temporal sequences are not synchronous and do not have the same data rate - see [13] for further discussion about this.

2. The possibility to incorporate multiple time resolutions (as part of a structure with multiple unit lengths, such as phoneme and syllable).

3. As a particular case of multi-stream processing, multi-band ASR [2, 5], involving the independent processing and combination of partial frequency bands, has many potential advantages, briefly discussed below.

In the following, we will not discuss the underlying algorithms (more or less "complex" variants of Viterbi decoding), nor detailed experimental results (see, e.g., [4] for recent results). Instead, we will mainly focus on the combination strategy and discuss different variants around the same formalism.

2 Multiband-based ASR

2.1 General Formalism

As a particular case of the multi-stream paradigm, we have been investigating an ASR approach based on independent processing and combination of frequency subbands. The general idea, as illustrated in Fig. 1, is to split the whole frequency band (represented in terms of critical bands) into a few subbands on which different recognizers are independently applied. The resulting probabilities are then combined for recognition later in the process at some segmental level.
Starting from critical bands, acoustic processing is now performed independently for each frequency band, yielding K input streams, each being associated with a particular frequency band.

Figure 1: Typical multiband-based ASR architecture. In multi-band speech recognition, the frequency range is split into several bands, and information in the bands is used for phonetic probability estimation by independent modules. These probabilities are then combined for recognition later in the process at some segmental level.

In this case, each of the K sub-recognizers (channels) is now using the information contained in a specific frequency band X^k = {x_1^k, x_2^k, ..., x_n^k, ..., x_N^k}, where each x_n^k represents the acoustic (spectral) vector at time n in the k-th stream. In the case of hybrid HMM/ANN systems, HMM local emission (posterior) probabilities are estimated by an artificial neural network (ANN), estimating P(q_j | x_n), where q_j is an HMM state and x_n = (x_n^1, ..., x_n^k, ..., x_n^K)^t is the feature vector at time n. In the case of multi-stream (or subband-based) HMM/ANN systems, different ANNs will compute state-specific stream posteriors P(q_j | x_n^k). Combination of these local posteriors can then be performed at different temporal levels, and in many ways, including [2]: untrained linear or trained linear (e.g., as a function of automatically estimated local SNR) functions, as well as trained nonlinear functions (e.g., by using a neural network). In the simplest case, this subband posterior recombination is performed at the HMM state level, which then amounts to performing a standard Viterbi decoding in which local (log) probabilities are obtained from a linear or nonlinear combination of the local subband probabilities.
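The state-level linear recombination just described can be sketched as a weighted sum of per-band ANN posteriors. This is an illustrative sketch, not the authors' implementation; the function name and array layout are my own.

```python
import numpy as np

def combine_subband_posteriors(stream_posteriors, weights=None):
    """Linear combination of subband posteriors at the HMM state level.

    stream_posteriors : array of shape (K, J) whose row k holds
                        P(q_j | x_n^k, Theta_k), the posteriors from
                        the k-th band-specific ANN over the J states;
    weights           : length-K weighting factors w_k (default uniform;
                        SNR-proportional weights could be used instead).
    Returns the combined posteriors P(q_j | x_n) over the J states.
    """
    P = np.asarray(stream_posteriors, float)
    K = P.shape[0]
    w = np.full(K, 1.0 / K) if weights is None else np.asarray(weights, float)
    return w @ P  # weighted sum over the K streams
```

During decoding, the log of this combined posterior would replace the usual local log-probability in a standard Viterbi pass.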
For example, in the initial subband-based ASR, local posteriors P(q_j | x_n) were estimated according to:

P(q_j | x_n) = Σ_{k=1}^{K} w_k P(q_j | x_n^k, Θ_k)   (1)

where, in our case, each P(q_j | x_n^k, Θ_k) is computed with a band-specific ANN of parameters Θ_k and with x_n^k (possibly with temporal context) at its input. The weighting factors can be assigned a uniform distribution (already performing very well [2]) or be proportional to the estimated SNR. Over the last few years, several results were reported showing that such a simple approach was usually more robust to band-limited noise.

2.2 Motivations and Drawbacks

The multi-band approach briefly discussed above has several potential advantages, summarized here.

Better robustness to band-limited noise - The signal may be impaired (e.g., by noise, channel characteristics, reverberation, ...) only in some specific frequency bands. When recognition is based on several independent decisions from different frequency subbands, the decoding of a linguistic message need not be severely impaired, as long as the remaining clean subbands supply sufficiently reliable information. This was confirmed by several experiments (see, e.g., [2]). Surprisingly, even when the combination is simply performed at the HMM state level, it is observed that the multi-band approach yields better performance and noise robustness than a regular full-band system. Similar conclusions were also observed in the framework of the missing feature theory [7, 9]. In this case, it was shown that, if one knows the position of the noisy features, significantly better classification performance could be achieved by disregarding the noisy data (using marginal distributions) or by integrating over all possible values of the missing data conditionally on the clean features. See Section 3 for further discussion about this.

Better modeling - Subband modeling will usually be more robust.
Indeed, since the dimension of each (subband) feature space is smaller, it is easier to estimate reliable statistics (resulting in a more robust parametrization). Moreover, the all-pole modeling usually used in ASR will be more robust if performed on subbands, i.e., in lower-dimensional spaces, than on the full-band signal [12].

Channel asynchrony - Transitions between more stationary segments of speech do not necessarily occur at the same time across the different frequency bands [8], which makes the piecewise stationary assumption more fragile. The subband approach may have the potential of relaxing the synchrony constraint inherent in current HMM systems.

Channel-specific processing and modeling - Different recognition strategies might ultimately be applied in different subbands. For example, different time/frequency resolution tradeoffs could be chosen (e.g., time resolution and width of the analysis window depending on the frequency subband). Finally, some subbands may be inherently better for certain classes of speech sounds than others.

Major objections and drawbacks - One of the common objections [8] to this separate modeling of each frequency band has been that important information in the form of correlation between bands may be lost. Although this may be true, several studies [8], as well as the good recognition rates achieved on small frequency bands [3, 6], tend to show that most of the phonetic information is preserved in each frequency band (possibly provided that we have enough temporal information). This drawback will be fixed by the method presented next.

3 Full Combination Subband ASR

If we know where the noise is, then, based on the results obtained with missing data [7, 9], impressive noise robustness can be achieved by using the marginal distribution, estimating the HMM emission probability based on the clean frequency bands only. In our subband approach, we do not assume that we know, or detect explicitly, where the noise is.
Following the above developments and discussions, it thus seems reasonable to integrate over all possible positions of the noisy bands, and thus to simultaneously deal with all the L = 2^K possible subband combinations S_n^ℓ (with ℓ = 1, ..., L, and also including the empty set) extracted from the feature vector x_n. Introducing the hidden variable E_n^ℓ, representing the statistical (exclusive and mutually exhaustive) event that the feature subset S_n^ℓ is "clean" (reliable), and integrating over all its possible values, we can then rewrite the local posterior probability as:

P(q_j | x_n, Θ) = Σ_{ℓ=1}^{L} P(q_j, E_n^ℓ | x_n, Θ)
              = Σ_{ℓ=1}^{L} P(q_j | E_n^ℓ, x_n, Θ_ℓ) P(E_n^ℓ | x_n)
              = Σ_{ℓ=1}^{L} P(q_j | S_n^ℓ, Θ_ℓ) P(E_n^ℓ | x_n)   (2)

where P(E_n^ℓ | x_n) represents the relative reliability of a specific feature set. Θ represents the whole parameter space, while Θ_ℓ denotes the set of (ANN) parameters used to compute the subband posteriors. Typically, training of the L neural nets would be done once and for all on clean data, and the recognizer would then be adapted online simply by adjusting the weights P(E_n^ℓ | x_n) (still representing a limited set of L parameters) to increase the global posteriors. This adaptation can be performed by online estimation of the signal-to-noise ratio or by online, unsupervised, EM adaptation. While it is pretty easy to quickly estimate any subband likelihood or marginal distribution when working with Gaussian or multi-Gaussian densities [7], straight implementation of (2) is not always tractable since it requires the use (and training) of L neural networks to estimate all the posteriors P(q_j | S_n^ℓ, Θ_ℓ). However, it has the advantage of not requiring the subband independence assumption [3].
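The sum over all L = 2^K subsets in Equation 2 can be sketched as follows. This is an illustrative sketch, not the authors' code: the subset posteriors are approximated from the K band-specific nets via the independence assumption of Equation 3, and renormalizing each subset posterior is an implementation choice I have added to keep the result a proper distribution.

```python
import itertools
import numpy as np

def full_combination_posterior(stream_posteriors, prior, subset_weights):
    """Full Combination estimate of P(q_j | x_n) (Equation 2 sketch).

    stream_posteriors[k, j] : P(q_j | x_n^k, Theta_k) from the k-th
                              band-specific net;
    prior[j]                : P(q_j);
    subset_weights          : maps each subset of band indices (as a
                              frozenset) to its reliability P(E_n^l | x_n).
    """
    P = np.asarray(stream_posteriors, float)
    pr = np.asarray(prior, float)
    K = P.shape[0]
    post = np.zeros_like(pr)
    for r in range(K + 1):                       # all subset sizes, incl. empty
        for subset in itertools.combinations(range(K), r):
            w = subset_weights.get(frozenset(subset), 0.0)
            if w == 0.0:
                continue
            p = pr.copy()
            for k in subset:
                p = p * (P[k] / pr)              # Eq. 3: product of posterior ratios
            post += w * p / p.sum()              # weight by reliability P(E_n^l | x_n)
    return post
```

With the empty subset, the term degenerates to the prior P(q_j), matching the remark that the empty set is included among the L combinations.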
An interesting approximation to this "optimal" solution, though, consists in simply using the neural nets that are available (K of them in the case of baseline subband ASR) and, re-introducing the independence assumption, approximating all the other subband combination probabilities in (2) as follows [3, 4]:

P(q_j | S_n^ℓ, Θ_ℓ) = P(q_j) ∏_{k ∈ S^ℓ} [ P(q_j | x_n^k, Θ_k) / P(q_j) ]   (3)

Experimental results obtained from this Full Combination approach in different noisy conditions are reported in [3, 4], where the performance of the above approximation was also compared to the "optimal" estimators (2). Interestingly, it was shown that this independence assumption did not hurt much and that the resulting recognition performance was similar to the performance obtained by training and recombining all possible L nets (and significantly better than the original subband approach). In both cases, the recognition rate and the robustness to noise were greatly improved compared to the initial subband approach. This further confirms that we do not seem to lose "critically" important information when neglecting the correlation between bands.

In the next section, we briefly introduce a further extension of this approach where the segmentation into subbands is no longer done explicitly, but is achieved dynamically over time, and where the integration over all possible frequency segmentations is part of the same formalism.

4 HMM2: Mixture of HMMs

HMM emission probabilities are typically modeled through Gaussian mixtures or neural networks. We propose here an alternative approach, referred to as HMM2, integrating standard HMMs (referred to as "temporal HMMs") with state-dependent feature-based HMMs (referred to as "feature HMMs") responsible for the estimation of the emission probabilities.
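The central HMM2 operation, scoring one feature vector with a temporal-HMM-state-specific feature HMM, can be sketched as a forward pass along the feature (frequency) axis. Everything below is an illustrative assumption of mine (function name, single-Gaussian-per-state outputs, generic transition structure), not the authors' implementation.

```python
import numpy as np

def feature_hmm_emission(x, log_a, log_pi, means, variances):
    """Log emission score of one feature vector under a feature HMM.

    x          : one feature vector, scanned component by component
                 (e.g. top-down along the frequency axis);
    log_a[i,j] : log transition probability from feature state i to j;
    log_pi[i]  : log initial distribution over the M feature states;
    means, variances : per-state 1-D Gaussian output parameters.
    Returns log P(x | temporal state), summing over all segmentations.
    """
    def log_gauss(v, m, var):
        # log N(v; m, var), vectorized over the M feature states
        return -0.5 * (np.log(2 * np.pi * var) + (v - m) ** 2 / var)

    alpha = log_pi + log_gauss(x[0], means, variances)  # forward init
    for v in x[1:]:                                     # next component
        alpha = (np.logaddexp.reduce(alpha[:, None] + log_a, axis=0)
                 + log_gauss(v, means, variances))
    return np.logaddexp.reduce(alpha)                   # sum over final states
```

Because the forward pass sums over all feature-state paths, it implicitly considers every possible subband segmentation of the vector, which is exactly the combination that Section 3 had to enumerate explicitly.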
In this case, each feature vector x_n at time n is considered as a fixed-length sequence, which has supposedly been generated by a temporal-HMM-state-specific HMM, each state of which emits individual feature components that are modeled by, e.g., one-dimensional Gaussian mixtures. The feature HMM thus looks at all possible subband segmentations and automatically performs the combination of the likelihoods to yield a single emission probability. The resulting architecture is illustrated in Figure 2. In this example, the HMM2 is composed of an HMM that handles sequences of features through time. This HMM is composed of 3 left-to-right connected states (q1, q2 and q3), and each state emits a vector of features at each time step. The particularity of an HMM2 is that each state uses an HMM to emit the feature vector, as if it were an ordered sequence (instead of a vector). In Figure 2, state q2 contains a feature HMM with 4 states connected top-down. Of course, while the temporal HMM usually has a left-to-right structure, the topology of the feature HMM can take many forms, which will then reflect the correlation being captured by the model. The feature HMM could even have more states than feature components, in which case "high-order" correlation information could be extracted. In [1], an EM algorithm to jointly train all the parameters of such an HMM2 in order to maximize the data likelihood has been derived. This derivation was based on the fact that an HMM2 can be considered as a mixture of mixtures of distributions. We believe that HMM2 (which includes the classical mixture-of-Gaussians HMM as a particular case) has several potential advantages, including:

1. Better feature correlation modeling through the feature-based (frequency) HMM topology. Also, the complexity of this topology and the probability density function associated with each state easily control the number of parameters.

2. Automatic non-linear spectral warping.
In the same way the conventional HMM does time warping and time integration, the feature-based HMM performs frequency warping and frequency integration.

3. Dynamic formant trajectory modeling. As further discussed below, the HMM2 structure has the potential to extract some relevant formant structure information, which is often considered important for robust speech recognition.

To illustrate the last point and its relationship with dynamic multi-band ASR, the HMM2 model was used in [14] to extract formant-like information. All the parameters of the HMM2 models were trained according to the above EM algorithm on delta-frequency features (differences of two consecutive log Rasta PLP coefficients). The feature HMM had a simple top-down topology with 4 states. After training, Figure 3 shows (on unseen test data) the value of the features for the phoneme iy as well as the segmentation found by Viterbi decoding along the delta-frequency axis (the thick black lines). At each time step, we kept the 3 positions where the delta-frequency HMM changed its state during decoding (for instance, at the first time frame, the HMM goes from state 1 to state 2 after the third feature). We believe they contain formant-like information. In [14], it has been shown that the use of that information could significantly enhance standard speech recognition systems.

Figure 2: An HMM2: the emission distributions of the HMM are estimated by another HMM.

Figure 3: Frequency deltas of log Rasta PLP and segmentation for an example of phoneme iy.

Acknowledgments

The content and themes discussed in this paper largely benefited from the collaboration with our colleagues Andrew Morris, Astrid Hagen and Herve Glotin. This work was partly supported by the Swiss Federal Office for Education and Science (FOES) through the European SPHEAR (TMR, Training and Mobility of Researchers) and RESPITE (ESPRIT Long Term Research) projects.
Additionally, Katrin Weber is supported by the Swiss National Science Foundation project MULTICHAN.

References

[1] Bengio, S., Bourlard, H., and Weber, K., "An EM Algorithm for HMMs with Emission Distributions Represented by HMMs," IDIAP Research Report, IDIAP-RR00-11, Martigny, Switzerland, 2000.
[2] Bourlard, H. and Dupont, S., "A new ASR approach based on independent processing and combination of partial frequency bands," Proc. of Intl. Conf. on Spoken Language Processing (Philadelphia), pp. 422-425, October 1996.
[3] Hagen, A., Morris, A., Bourlard, H., "Subband-based speech recognition in noisy conditions: The full combination approach," IDIAP Research Report no. IDIAP-RR-98-15, 1998.
[4] Hagen, A., Morris, A., Bourlard, H., "Different weighting schemes in the full combination subbands approach for noise robust ASR," Proceedings of the Workshop on Robust Methods for Speech Recognition in Adverse Conditions (Tampere, Finland), May 25-26, 1999.
[5] Hermansky, H., Pavel, M., and Tribewala, S., "Towards ASR using partially corrupted speech," Proc. of Intl. Conf. on Spoken Language Processing (Philadelphia), pp. 458-461, October 1996.
[6] Hermansky, H. and Sharma, S., "Temporal patterns (TRAPS) in ASR of noisy speech," Proc. of the IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Phoenix, AZ), pp. 289-292, March 1999.
[7] Lippmann, R.P., Carlson, B.A., "Using missing feature theory to actively select features for robust speech recognition with interruptions, filtering and noise," Proc. Eurospeech '97 (Rhodes, Greece, September 1997), pp. KN37-40.
[8] Mirghafori, N. and Morgan, N., "Transmissions and transitions: A study of two common assumptions in multi-band ASR," Proc. of the IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Seattle, WA, May 1997), pp. 713-716.
[9] Morris, A.C., Cooke, M.P., and Green, P.D., "Some solutions to the missing features problem in data classification, with application to noise robust ASR," Proc. Intl.
Conf. on Acoustics, Speech, and Signal Processing, pp. 737-740, 1998.
[10] Morris, A.C., Hagen, A., Bourlard, H., "The full combination subbands approach to noise robust HMM/ANN-based ASR," Proc. of Eurospeech '99 (Budapest, Sep. 99).
[11] Okawa, S., Bocchieri, E., Potamianos, A., "Multi-band speech recognition in noisy environment," Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, 1998.
[12] Rao, S. and Pearlman, W.A., "Analysis of linear prediction, coding, and spectral estimation from subbands," IEEE Trans. on Information Theory, vol. 42, pp. 1160-1178, July 1996.
[13] Tomlinson, M.J., Russell, M.J., Moore, R.K., Buckland, A.P., and Fawley, M.A., "Modelling asynchrony in speech using elementary single-signal decomposition," Proc. of IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Munich), pp. 1247-1250, April 1997.
[14] Weber, K., Bengio, S., and Bourlard, H., "HMM2 - Extraction of Formant Features and their Use for Robust ASR," IDIAP Research Report, IDIAP-RR00-42, Martigny, Switzerland, 2000.
Place Cells and Spatial Navigation based on 2d Visual Feature Extraction, Path Integration, and Reinforcement Learning

A. Arleo* F. Smeraldi S. Hug W. Gerstner
Centre for Neuro-Mimetic Systems, MANTRA, Swiss Federal Institute of Technology Lausanne, CH-1015 Lausanne EPFL, Switzerland

Abstract

We model hippocampal place cells and head-direction cells by combining allothetic (visual) and idiothetic (proprioceptive) stimuli. Visual input, provided by a video camera on a miniature robot, is preprocessed by a set of Gabor filters on 31 nodes of a log-polar retinotopic graph. Unsupervised Hebbian learning is employed to incrementally build a population of localized overlapping place fields. Place cells serve as basis functions for reinforcement learning. Experimental results for goal-oriented navigation of a mobile robot are presented.

1 Introduction

In order to achieve spatial learning, both animals and artificial agents need to autonomously locate themselves based on available sensory information. Neurophysiological findings suggest that the spatial self-localization of rodents is supported by place-sensitive and direction-sensitive cells. Place cells in the rat hippocampus provide a spatial representation in allocentric coordinates [1]. A place cell exhibits a high firing rate only when the animal is in a specific region of the environment, which defines the place field of the cell. Head-direction cells observed in the hippocampal formation encode the animal's allocentric heading in the azimuthal plane [2]. A directional cell fires maximally only when the animal's heading is equal to the cell's preferred direction, regardless of the orientation of the head relative to the body, of the rat's location, or of the animal's behavior. We ask two questions. (i) How do we get place fields from visual input [3]? This question is non-trivial given that visual input depends on the direction of gaze.
We present a computational model which is consistent with several neurophysiological findings concerning biological head-direction and place cells. Place-coding and directional sense are provided by two coupled neural systems, which interact with each other to form a single substrate for spatial navigation (Fig. 1(a)). Both systems rely on allothetic cues (e.g., visual stimuli) as well as idiothetic signals (e.g., proprioceptive cues) to establish stable internal representations. The resulting representation consists of overlapping place fields with properties similar to those of hippocampal place cells. (ii) What is the use of place cells for navigation [1]? We show that a representation by overlapping place fields is a natural "state space" for reinforcement learning. A direct implementation of reinforcement learning on real visual streams would be impossible given the high dimensionality of the visual input space.

*Corresponding author, angelo.arleo@epfl.ch

Figure 1: (a) An overview of the entire system. Dark grey areas are involved in space representation, whereas light grey components form the head-direction circuit. Glossary: SnC: hypothetical snapshot cells, sLEC: superficial lateral entorhinal cortex, sMEC: superficial medial entorhinal cortex, DG: dentate gyrus, CA3-CA1: hippocampus proper, NA: nucleus accumbens, VIS: visual bearing cells, CAL: hypothetical calibration cells, HAV: head angular velocity cells, PSC: postsubiculum, ADN: anterodorsal nucleus, LMN: lateral mammillary nuclei. (b) A visual scene acquired by the robot during spatial learning. The image resolution is 422 x 316 pixels. The retinotopic sampling grid (white crosses) is employed to sample visual data by means of Gabor decomposition. Black circles represent maximally responding Gabor filters (the circle radius varies as a function of the filter's spatial frequency).
A place field representation extracts the low-dimensional view manifold on which efficient reinforcement learning is possible. To validate our model in real task-environment contexts, we have tested it on a Khepera miniature mobile robot. Visual information is supplied by an on-board video camera. Eight infrared sensors provide obstacle detection and measure ambient light. Idiothetic signals are provided by the robot's dead-reckoning system. The experimental setup consists of an open-field square arena of about 80 x 80 cm in a standard laboratory background (Fig. 1(b)). The vision-based localization problem consists of (i) detecting a convenient low-dimensional representation of the continuous high-dimensional input space (images have a resolution of 422 x 316 pixels), and (ii) learning the mapping function from the visual sensory space to points belonging to this representation. Since our robot moves on a two-dimensional space with a camera pointing in the direction of motion, the high-dimensional visual space is not uniformly filled. Rather, all input data points lie on a low-dimensional surface which is embedded in a Euclidean space whose dimensionality is given by the total number of camera pixels. This low-dimensional description of the visual space is referred to as the view manifold [4].

2 Extracting the low-dimensional view manifold

Hippocampal place fields are determined by a combination of highly-processed multimodal sensory stimuli (e.g., visual, auditory, olfactory, and somatosensory cues) whose mutual relationships code for the animal's location [1]. Nevertheless, experiments on rodents suggest that vision plays an eminent role in determining place cell activity [5]. Here, we focus on the visual pathway, and we propose a processing in four steps. As a first step, we place a retinotopic sampling grid on the image (Fig. 1(b)).
In total we have 31 grid points, with high resolution only in a localized region of the view field (fovea), whereas peripheral areas are characterized by low-resolution vision [6]. At each point of the grid we place 24 Gabor filters with different orientations and amplitudes. Gabor filters [7] provide a suitable mathematical model for biological simple cells [8]. Specifically, we employ a set of modified Gabor filters [9]. A modified Gabor filter $f_i$, tuned to orientation $\phi_j$ and angular frequency $\omega_l = e^{\xi_l}$, corresponds to a Gaussian in the Log-polar frequency plane rather than in the frequency domain itself, and is defined by the Fourier function

$$G(\xi, \phi) = A \cdot e^{-(\xi - \xi_l)^2 / 2\sigma_\xi^2} \cdot e^{-(\phi - \phi_j)^2 / 2\sigma_\phi^2} \quad (1)$$

where $A$ is a normalization term, and $(\xi, \phi)$ are coordinates in the Log-polar Fourier plane

$$(\xi, \phi) = \big( \log \| (\omega_x, \omega_y) \|,\ \arctan(\omega_y / \omega_x) \big) \quad (2)$$

A key property of the Log-polar reference frame is that translations along $\phi$ correspond to rotations in the image domain, while translations along $\xi$ correspond to scaling the image. In our implementation, we build a set of 24 modified Gabor filters, $\mathcal{F} = \{ f_i(\omega_l, \phi_j) \mid 1 \le l \le 3,\ 1 \le j \le 8 \}$, obtained by taking 3 angular frequencies $\omega_1, \omega_2, \omega_3$ and 8 orientations $\phi_1, \ldots, \phi_8$. As a second step, we take the magnitude of the responses of these Gabor filters for detecting visual properties within video streams. While the Gabor filter itself has properties related to simple cells, the amplitude of the complex response does not depend on the exact position within the receptive field and has therefore properties similar to cortical complex cells. Thus, given an image $I(x, y)$, we compute the magnitude of the response of all $f_i$ filters for each retinal point $\vec{x}_i$

$$r_i(I) = \left( \Big( \sum_{\vec{x}} \mathrm{Re}(f_i(\vec{x}))\, I(\vec{x}_i + \vec{x}) \Big)^2 + \Big( \sum_{\vec{x}} \mathrm{Im}(f_i(\vec{x}))\, I(\vec{x}_i + \vec{x}) \Big)^2 \right)^{1/2} \quad (3)$$

where $\vec{x}$ varies over the area occupied by the filter $f_i$ in the spatial domain.
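Eq. 3 can be sketched in a few lines of NumPy. For simplicity, the helper below builds a standard spatial-domain complex Gabor kernel rather than the modified Log-polar filters of [9]; `gabor_kernel` and its parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma):
    """Standard complex Gabor kernel (a simplification of the modified
    Log-polar Gabor filters used in the paper)."""
    ax = np.arange(size) - size // 2
    y, x = np.meshgrid(ax, ax, indexing="ij")
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(1j * 2 * np.pi * freq * xr)

def gabor_magnitude(image, kernel, center):
    """Eq. 3: magnitude sqrt(Re^2 + Im^2) of the complex filter response
    over the patch around `center`; discarding the phase makes the
    response insensitive to small shifts (complex-cell behaviour)."""
    h, w = kernel.shape
    cy, cx = center
    patch = image[cy - h // 2: cy - h // 2 + h, cx - w // 2: cx - w // 2 + w]
    re = np.sum(kernel.real * patch)
    im = np.sum(kernel.imag * patch)
    return np.hypot(re, im)

img = np.random.default_rng(1).random((64, 64))
k = gabor_kernel(15, freq=0.2, theta=0.0, sigma=3.0)
r = gabor_magnitude(img, k, center=(32, 32))
assert r >= 0.0 and np.isfinite(r)
```

In the model this magnitude is computed for each of the 24 filters at each of the 31 retinotopic grid points, yielding the response vector fed to the snapshot cells.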
The third step within the visual pathway of our model consists of interpreting visual cues by means of neural activity. We take a population of hypothetical snapshot cells (SnC in Fig. 1(a)) one synapse downstream from the Gabor filter layer. Let $k$ be an index over all $K$ filters forming the retinotopic grid. Given a new image $I$, a snapshot cell $s \in \mathrm{SnC}$ is created which receives afferents from all $f_k$ filters. Connections from filters $f_k$ to cell $s$ are initialized according to $w_{sk} = r_k, \forall k \in K$. If, at a later point, the robot sees an image $I'$, the firing activity $r_s$ of cell $s \in \mathrm{SnC}$ is computed by

$$r_s = e^{-\sum_k (\bar{r}_k - w_{sk})^2 / 2\sigma^2} \quad (4)$$

where $\bar{r}_k$ are the Gabor filter responses to image $I'$. Eq. 4 defines a radial basis function in the filter space that measures the similarity of the current image to the image stored in the weights $w_{sk}$. The width $\sigma$ determines the discrimination capacity of the system for visual scene recognition. As a final step, we apply unsupervised Hebbian learning to achieve spatial coding one synapse downstream from the SnC layer (sLEC in Fig. 1(a)). Indeed, the snapshot cell activity $r_s$ defined in Eq. 4 depends on the robot's gaze direction, and does not code for a spatial location. In order to collect information from several gaze directions, the robot takes four snapshots corresponding to north, east, south, and west views at each location visited during exploration. To do this, it relies on the allocentric compass information provided by the directional system [2, 10]. For each visited location the robot creates four SnC snapshot cells, which are bound together to form a place cell in the sLEC layer. Thus, sLEC cell activity depends on a combination of several visual cues, which results in non-directional place fields (Fig. 2(a)) [11].

Figure 2: (a) A sample spatial receptive field of a sLEC cell in our model. The lighter a region, the higher the cell's firing rate when the robot is in that region of the arena.
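Eq. 4 reduces to a radial basis function over the stored filter responses; a minimal sketch:

```python
import numpy as np

def snapshot_activity(responses, weights, sigma):
    """Firing rate of a snapshot cell (Eq. 4): a radial basis function in
    Gabor-filter space comparing the current responses r_k with the stored
    weights w_sk; sigma sets the discrimination sharpness."""
    return np.exp(-np.sum((responses - weights) ** 2) / (2 * sigma ** 2))

stored = np.array([0.2, 0.8, 0.5])                       # weights from the stored view
assert snapshot_activity(stored, stored, sigma=0.1) == 1.0    # identical view
assert snapshot_activity(stored + 1.0, stored, 0.1) < 1e-6    # very different view
```

A small sigma makes the cell respond only to nearly identical views, while a large sigma blurs discrimination between scenes.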
(b) A typical place field in the CA3-CA1 layer of the model.

3 Hippocampal CA3-CA1 place field representation

When relying on visual data only, the state space representation encoded by place cells does not fulfill the Markov hypothesis [12]. Indeed, distinct areas of the environment may provide identical visual cues and lead to singularities in the view manifold (sensory input aliasing). We employ idiothetic signals along with visual information in order to remove such singularities and solve the hidden-state problem. An extra-hippocampal path integrator drives Gaussian-tuned neurons modeling self-motion information (sMEC in Fig. 1(a)). A fundamental contribution to building the sMEC idiothetic space representation comes from head-direction cells (projection B in Fig. 1(a)). As the robot moves, sMEC cell activity changes according to self-motion signals and to the current heading of the robot as estimated by the directional system. The firing activity $r_m$ of a cell $m \in$ sMEC is given by $r_m = \exp(-(s_{dr} - s_m)^2 / 2\sigma^2)$, where $s_{dr}$ is the robot's current position estimated by dead-reckoning, $s_m$ is the center of the receptive field of cell $m$, and $\sigma$ is the width of the Gaussian field. Allothetic and idiothetic representations (i.e., sLEC and sMEC place field representations, respectively) converge onto the CA3-CA1 regions to form a stable spatial representation (Fig. 1(a)). On the one hand, unreliable visual data are compensated for by means of path integration. On the other hand, reliable visual information can calibrate the path integrator system and keep the dead-reckoning error bounded over time. Correlational learning is applied to combine visual cues and path integration over time. CA3-CA1 cells are recruited incrementally as exploration proceeds. For each new location, connections are established from all simultaneously active cells in sLEC and sMEC to newly recruited CA3-CA1 cells.
Then, during the agent-environment interaction, Hebbian learning is applied to update the efficacy of the efferents from sLEC and sMEC to the hippocampus proper [11]. After learning, the CA3-CA1 space representation consists of a population of localized overlapping place fields (Fig. 2(b)) covering the two-dimensional workspace densely. Fig. 3(a) shows an example of the distribution of CA3-CA1 place cells after learning. In this experiment, the robot, starting from an empty population, recruited about 1000 CA3-CA1 place cells. In order to interpret the information represented by the ensemble CA3-CA1 pattern of activity, we employ population vector coding [13, 14]. Let $s$ be the unknown robot's location, $r_i(s)$ the firing activity of a CA3-CA1 place cell $i$, and $\vec{s}_i$ the center of its place field. The population vector $\vec{p}$ is given by the center of mass of the network activity: $\vec{p} = \sum_i \vec{s}_i\, r_i(s) / \sum_i r_i(s)$. The approximation $\vec{p} \approx s$ is good for large neural populations covering the environment densely and uniformly [15]. In Fig. 3(a) the center of mass coding for the robot's location is represented by the black cross.

Figure 3: (a) The ensemble activity of approximately 1000 CA3-CA1 place cells created by the robot during spatial learning. Each dot is the center of a CA3-CA1 place cell. The lighter a cell, the higher its firing rate. The black cross is the center of mass of the ensemble activity.
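Population vector decoding follows directly from the center-of-mass formula; a minimal sketch with a hypothetical grid of Gaussian place fields:

```python
import numpy as np

def population_vector(centers, rates):
    """Decode position as the center of mass of ensemble activity:
    p = sum_i s_i r_i / sum_i r_i."""
    rates = np.asarray(rates, dtype=float)
    return (rates[:, None] * centers).sum(axis=0) / rates.sum()

# Toy ensemble: Gaussian place fields on a uniform grid, robot at (0.3, 0.7).
xs = np.linspace(0, 1, 11)
centers = np.array([(x, y) for x in xs for y in xs])
true_pos = np.array([0.3, 0.7])
rates = np.exp(-np.sum((centers - true_pos) ** 2, axis=1) / (2 * 0.05 ** 2))
p = population_vector(centers, rates)
assert np.allclose(p, true_pos, atol=0.01)
```

As the text notes, the decoding is accurate when the place fields cover the arena densely and uniformly; near the arena walls the asymmetric coverage biases the estimate toward the interior.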
(b) Vector field representation of a navigational map learned after 5 trials. The target area (about 2.5 times the area occupied by the robot) is the upper-left corner of the arena.

4 Action learning: Goal-oriented navigation

The above spatial model enables the robot to localize itself within the environment. To support cognitive spatial behavior [1], the hippocampal circuit must also allow the robot to learn navigational maps autonomously. Our CA3-CA1 population provides an incrementally learned coarse coding representation suitable for applying reinforcement learning to continuous high-dimensional state spaces. Learning an action-value function over a continuous location space endows the system with spatial generalization capabilities. We apply a Q(λ) learning scheme [16] to build navigational maps [17, 18]. The overlapping localized CA3-CA1 place fields provide a natural set of basis functions that can be used to learn a parameterized form of the Q(λ) function [19]. Note that we do not have to choose parameters like the width and location of the basis functions. Rather, the basis functions are created automatically by unsupervised learning. Our representation also solves the problem of ambiguous input or partially hidden states [12]; therefore the current state is fully known to the system and reinforcement learning can be applied in a straightforward manner. Let $r_i$ denote the activation of a CA3-CA1 place cell $i$. Each state $s$ is encoded by the ensemble place cell activity vector $\vec{r}(s) = (r_1(s), r_2(s), \ldots, r_n(s))$, where $n$ is the number of created place cells. The state-action value function $Q_{\vec{w}}(s, a)$ is of the form

$$Q_{\vec{w}}(s,a) = (\vec{w}^a)^T \vec{r}(s) = \sum_{i=1}^n w_i^a\, r_i(s) \quad (5)$$

where $(s, a)$ is the state-action pair, and $\vec{w}^a = (w_1^a, \ldots, w_n^a)$ is an adjustable parameter vector. The learning task consists of updating the weight vector $\vec{w}^a$ to approximate the optimal function $Q^*_{\vec{w}}(s, a)$.
The state-value prediction error is defined by

$$\delta_t = R_{t+1} + \gamma \max_a Q_t(s_{t+1}, a) - Q_t(s_t, a_t) \quad (6)$$

where $R_{t+1}$ is the immediate reward, and $0 \le \gamma \le 1$ is a constant discounting factor. At each time step the weight vector $\vec{w}^a$ changes according to

$$\vec{w}^a_{t+1} = \vec{w}^a_t + \alpha\, \delta_t\, \vec{e}_t \quad (7)$$

where $0 \le \alpha \le 1$ is a constant learning rate parameter, and $\vec{e}_t$ is the eligibility trace vector. During learning, the exploitation-exploration trade-off is determined by an ε-greedy policy, with $0 \le \epsilon \le 1$. As a consequence, at each step $t$ the agent might either behave greedily (exploitation) with probability $1 - \epsilon$, by selecting the best action $a_t^*$ with respect to the Q-value functions, $a_t^* = \arg\max_a Q_t(s_t, a)$, or resort to uniform random action selection (exploration) with probability equal to $\epsilon$. The update of the eligibility trace depends on whether the robot selects an exploratory or an exploiting action.

Figure 4: Two samples of learned navigational maps. The obstacle (dark grey object) is "transparent" with respect to vision, while it is detectable by the robot's infrared sensors. (a) The map learned by the robot after 20 training paths. (b) The map learned by the robot after 80 training trials.
Specifically, the vector $\vec{e}_t$ changes according to (we start with $\vec{e}_0 = 0$)

$$\vec{e}_t = \vec{r}(s_t) + \begin{cases} \gamma \lambda\, \vec{e}_{t-1} & \text{if exploiting} \\ 0 & \text{if exploring} \end{cases} \quad (8)$$

where $0 \le \lambda \le 1$ is a trace-decay parameter [19], and $\vec{r}(s_t)$ is the CA3-CA1 vector activity. Learning consists of a sequence of training paths starting at random positions and determined by the ε-greedy policy. When the robot reaches the target, a new training path begins at a new random location. Fig. 3(b) shows an example of a navigational map learned after 5 training trials. Fig. 4 shows some results obtained by adding an obstacle within the arena after the place field representation has been learned. The map of Fig. 4(a) has been learned after 20 training paths. It contains proper goal-oriented information, whereas it does not provide obstacle avoidance accurately¹. Fig. 4(b) displays a navigational map learned by the robot after 80 training paths. Due to longer training, the map provides both appropriate goal-oriented and obstacle avoidance behavior. The vector field representations of Figs. 3(b) and 4 have been obtained by rastering uniformly over the environment. Many of the sampled locations were not visited by the robot during training, which confirms the generalization capabilities of the method. That is, the robot was able to associate appropriate goal-oriented actions to never-experienced spatial positions. Reinforcement learning takes a long training time when applied directly on high-dimensional input spaces [19]. We have shown that by means of an appropriate state space representation, based on localized overlapping place fields, the robot can learn goal-oriented behavior after only 5 training trials (without obstacles). This is similar to the escape platform learning time of rats in the Morris water-maze [20].

¹Note that this does not really impair the robot's goal-oriented behavior, since obstacle avoidance is supported by a low-level reactive module driven by infrared sensors.
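Eqs. 5-8 can be sketched as a Watkins-style Q(λ) step with linear function approximation over the place-cell activity vector. Variable names and shapes are illustrative assumptions; in particular, the sketch keeps one trace per action-weight pair, which is one common reading of Eq. 8.

```python
import numpy as np

def qlambda_step(w, e, phi, a, r, phi_next, alpha, gamma, lam, exploiting):
    """One Q(lambda) update with linear function approximation (Eqs. 5-8).
    w: (n_actions, n_features) weights; e: matching eligibility traces;
    phi: place-cell activity vector r(s) for the current state."""
    q_sa = w[a] @ phi                                        # Eq. 5 for (s, a)
    delta = r + gamma * np.max(w @ phi_next) - q_sa          # Eq. 6
    e = gamma * lam * e if exploiting else np.zeros_like(e)  # Eq. 8: decay or reset
    e[a] += phi                                              # add current activity
    w = w + alpha * delta * e                                # Eq. 7
    return w, e

# Toy run: random place-cell activities, random actions, constant reward.
rng = np.random.default_rng(2)
n_actions, n_features = 4, 16
w = np.zeros((n_actions, n_features))
e = np.zeros_like(w)
phi = rng.random(n_features)
for _ in range(50):
    phi_next = rng.random(n_features)
    w, e = qlambda_step(w, e, phi, a=rng.integers(n_actions), r=0.1,
                        phi_next=phi_next, alpha=0.05, gamma=0.95,
                        lam=0.8, exploiting=True)
    phi = phi_next
assert np.all(np.isfinite(w))
```

Resetting the trace on exploratory actions, as in Eq. 8, is the standard Watkins correction: the greedy backup in Eq. 6 is only valid along greedy trajectories.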
Acknowledgments

Supported by the Swiss National Science Foundation, project nr. 21-49174.96.

References

[1] J. O'Keefe and L. Nadel. The Hippocampus as a cognitive map. Clarendon Press, Oxford, 1978.
[2] J.S. Taube, R.U. Muller, and J.B. Ranck Jr. Head direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis. Journal of Neuroscience, 10:420-435, 1990.
[3] J. O'Keefe and N. Burgess. Geometric determinants of the place fields of hippocampal neurons. Nature, 381:425-428, 1996.
[4] M.O. Franz, B. Schölkopf, H.A. Mallot, and H.H. Bülthoff. Learning view graphs for robot navigation. Autonomous Robots, 5:111-125, 1998.
[5] J.J. Knierim, H.S. Kudrimoti, and B.L. McNaughton. Place cells, head direction cells, and the learning of landmark stability. Journal of Neuroscience, 15:1648-1659, 1995.
[6] F. Smeraldi, J. Bigün, and W. Gerstner. On the role of dimensionality in face authentication. In Proceedings of the Symposium of the Swedish Society for Automated Image Analysis, Halmstad (Sweden), pages 87-91. Halmstad University, Sweden, 2000.
[7] D. Gabor. Theory of communication. Journal of the IEE, 93:429-457, 1946.
[8] J.G. Daugman. Two-dimensional spectral analysis of cortical receptive field profiles. Vision Research, 20:847-856, 1980.
[9] F. Smeraldi, N. Capdevielle, and J. Bigün. Facial features detection by saccadic exploration of the Gabor decomposition and support vector machines. In Proceedings of the 11th Scandinavian Conference on Image Analysis - SCIA 99, Kangerlussuaq, Greenland, pages 39-44, 1999.
[10] A. Arleo and W. Gerstner. Modeling rodent head-direction cells and place cells for spatial learning in bio-mimetic robotics. In J.-A. Meyer, A. Berthoz, D. Floreano, H. Roitblat, and S.W. Wilson, editors, From Animals to Animats VI, pages 236-245, Cambridge MA, 2000. MIT Press.
[11] A. Arleo and W. Gerstner. Spatial cognition and neuro-mimetic navigation: A model of hippocampal place cell activity.
Biological Cybernetics, Special Issue on Navigation in Biological and Artificial Systems, 83:287-299, 2000.
[12] R.A. McCallum. Hidden state and reinforcement learning with instance-based state identification. IEEE Systems, Man, and Cybernetics, 26(3):464-473, 1996.
[13] A.P. Georgopoulos, A. Schwartz, and R.E. Kettner. Neuronal population coding of movement direction. Science, 233:1416-1419, 1986.
[14] M.A. Wilson and B.L. McNaughton. Dynamics of the hippocampal ensemble code for space. Science, 261:1055-1058, 1993.
[15] E. Salinas and L.F. Abbott. Vector reconstruction from firing rates. Journal of Computational Neuroscience, 1:89-107, 1994.
[16] C.J.C.H. Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge, England, 1989.
[17] P. Dayan. Navigating through temporal difference. In R.P. Lippmann, J.E. Moody, and D.S. Touretzky, editors, Advances in Neural Information Processing Systems 3, pages 464-470. Morgan Kaufmann, San Mateo, CA, 1991.
[18] D.J. Foster, R.G.M. Morris, and P. Dayan. A model of hippocampally dependent navigation, using the temporal difference learning rule. Hippocampus, 10(1):1-16, 2000.
[19] R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. MIT Press/Bradford Books, Cambridge, Massachusetts, 1998.
[20] R.G.M. Morris, P. Garrud, J.N.P. Rawlins, and J. O'Keefe. Place navigation impaired in rats with hippocampal lesions. Nature, 297:681-683, 1982.
Exact Solutions to Time-Dependent MDPs

Justin A. Boyan*
ITA Software, Building 400, One Kendall Square, Cambridge, MA 02139
jab@itasoftware.com

Michael L. Littman
AT&T Labs-Research and Duke University, 180 Park Ave. Room A275, Florham Park, NJ 07932-0971 USA
mlittman@research.att.com

Abstract

We describe an extension of the Markov decision process model in which a continuous time dimension is included in the state space. This allows for the representation and exact solution of a wide range of problems in which transitions or rewards vary over time. We examine problems based on route planning with public transportation and telescope observation scheduling.

1 Introduction

Imagine trying to plan a route from home to work that minimizes expected time. One approach is to use a tool such as "Mapquest", which annotates maps with information about estimated driving time, then applies a standard graph-search algorithm to produce a shortest route. Even if driving times are stochastic, the annotations can be expected times, so this presents no additional challenge. However, consider what happens if we would like to include public transportation in our route planning. Buses, trains, and subways vary in their expected travel time according to the time of day: buses and subways come more frequently during rush hour; trains leave on or close to scheduled departure times. In fact, even highway driving times vary with time of day, with heavier traffic and longer travel times during rush hour. To formalize this problem, we require a model that includes both stochastic actions, as in a Markov decision process (MDP), and actions with time-dependent stochastic durations. There are a number of models that include some of these attributes:

• Directed graphs with shortest path algorithms [2]: State transitions are deterministic; action durations are time-independent (deterministic or stochastic).
• Stochastic Time-Dependent Networks (STDNs) [6]: State transitions are deterministic; action durations are stochastic and can be time-dependent.
• Markov decision processes (MDPs) [5]: State transitions are stochastic; action durations are deterministic.
• Semi-Markov decision processes (SMDPs) [5]: State transitions are stochastic; action durations are stochastic, but not time-dependent.

*The work reported here was done while Boyan's affiliation was with NASA Ames Research Center, Computational Sciences Division.

In this paper, we introduce the Time-Dependent MDP (TMDP) model, which generalizes all these models by including both stochastic state transitions and stochastic, time-dependent action durations. At a high level, a TMDP is a special continuous-state MDP [5; 4] consisting of states with both a discrete component and a real-valued time component: $(x, t) \in X \times \mathbb{R}$. With absolute time as part of the state space, we can model a rich set of domain objectives including minimizing expected time, maximizing the probability of making a deadline, or maximizing the dollar reward of a path subject to a time deadline. In fact, by using the time dimension to represent other one-dimensional quantities, TMDPs support planning with non-linear utilities [3] (e.g., risk-aversion), or with a continuous resource such as battery life or money. We define TMDPs and express their Bellman equations in a functional form that gives, at each state x, the one-step lookahead value at (x, t) for all times in parallel (Section 2). We use the term time-value function to denote a mapping from real-valued times to real-valued future reward. With appropriate restrictions on the form of the stochastic state-time transition function and reward function, we guarantee that the optimal time-value function at each state is a piecewise linear function of time, which can be represented exactly and computed by value iteration (Section 3). We conclude with empirical results on two domains (Section 4).
2 General model J.11 Missed the 8am (rdID 7 8 910 12 2 01234567 REL J.L3 Highway - rush hour IQ IIII L3 1At-lll l{ 7 8 910 12 2 0 1 2 3 4 5 6 7 REL ~4 HIghway - off peak ;\ I~ L4 14"011111 ~ 7 8 910 12 2 0 1 2 3 4 5 6 7 REL ~l J.12 Caught the 8am tram ~I I I I I I I L2 1 I ft. 1 I I I Pz 7 8 910 12 2 ABS J.i5 Dnve on backroad I I II I II I Ls I I I 11111 Ps 7 8 9 10 12 2 0 1 2 3 4 5 6 7 REL lSl~ 11 78 9 10 122 Vz ~ III 7 8 910 12 2 Figure 1: An illustrative route-planning example TMDP. Figure 1 depicts a small route-planning example that illustrates several distinguishing features of the TMDP model. The start state Xl corresponds to being at home. From here, two actions are available: a1, taking the 8am train (a scheduled action); and a2, driving to work via highway then backroads (may be done at any time). Action a1 has two possible outcomes, represented by III and 1l2· Outcome III ("Missed the 8am train") is active after 7:50am, whereas outcome 112 ("Caught the train") is active until 7:50am; this is governed by the likelihood functions L1 and L2 in the model. These outcomes cause deterministic transitions to states Xl and X3, respectively, but take varying amounts of time. Time distributions in a TMDP may be either "relative" (REL) or "absolute" (ABS). In the case of catching the train (1l2), the distribution is absolute: the arrival time (shown in P2) has mean 9:45am no matter what time before 7:50am the action was initiated. (Boarding the train earlier does not allow us to arrive at our destination earlier!) However, missing the train and returning to Xl has a relative distribution: it deterministically takes 15 minutes from our starting time (distribution P1) to return home. The outcomes for driving (a2) are Jl3 and Jl4. Outcome Jl3 ("Highway - rush hour") is active with probability 1 during the interval 8am- 9am, and with smaller probability outside that interval, as shown by L3. Outcome Jl4 ("Highway - off peak") is complementary. 
Duration distributions P3 and P4, both relative to the initiation time, show that driving times during rush hour are on average longer than those off peak. State x2 is reached in either case. From state x2, only one action is available, a3. The corresponding outcome μ5 ("Drive on backroad") is insensitive to the time of day and results in a deterministic transition to state x3 with a duration of 1 hour. The reward function for arriving at work is +1 before 11am and falls linearly to zero between 11am and noon. The solution to a TMDP such as this is a policy mapping state-time pairs (x, t) to actions so as to maximize expected future reward. As is standard in MDP methods, our approach finds this policy via the value function V*. We represent the value function of a TMDP as a set of time-value functions, one per state: Vi(t) gives the optimal expected future reward from state xi at time t. In our example of Figure 1, the time-value functions for x3 and x2 are shown as V3 and V2. Because of the deterministic one-hour delay of μ5, V2 is identical to V3 shifted back one hour. This wholesale shifting of time-value functions is exploited by our solution algorithm. The TMDP model also allows a notion of "dawdling" in a state. This means the TMDP agent can remain in a state for as long as desired at a reward rate of K(x, t) per unit time before choosing an action. This makes it possible, for example, for an agent to wait at home for rush hour to end before driving to work.
Formally, a TMDP consists of the following components:

X: discrete state space
A: discrete action space
M: discrete set of outcomes, each of the form μ = (x'_μ, T_μ, P_μ), where
   x'_μ ∈ X is the resulting state;
   T_μ ∈ {ABS, REL} specifies the type of the resulting time distribution;
   P_μ(t') (if T_μ = ABS) is a pdf over absolute arrival times of μ;
   P_μ(δ) (if T_μ = REL) is a pdf over durations of μ.
L: L(μ | x, t, a) is the likelihood of outcome μ given state x, time t, action a
R: R(μ, t, δ) is the reward for outcome μ at time t with duration δ
K: K(x, t) is the reward rate for "dawdling" in state x at time t.

We can define the optimal value function for a TMDP in terms of these quantities with the following Bellman equations:

V(x, t) = sup_{t' ≥ t} ( ∫_t^{t'} K(x, s) ds + V̄(x, t') )        value function (allowing dawdling)
V̄(x, t) = max_{a ∈ A} Q(x, t, a)                                 value function (immediate action)
Q(x, t, a) = Σ_{μ ∈ M} L(μ | x, t, a) · U(μ, t)                  expected Q value over outcomes
U(μ, t) = ∫ P_μ(t') [R(μ, t, t' − t) + V(x'_μ, t')] dt'          (if T_μ = ABS)
U(μ, t) = ∫ P_μ(t' − t) [R(μ, t, t' − t) + V(x'_μ, t')] dt'      (if T_μ = REL).

These equations follow straightforwardly from viewing the TMDP as an undiscounted continuous-time MDP. Note that the calculations of U(μ, t) are convolutions of the result-time pdf P with the lookahead value R + V. In the next section, we discuss a concrete way of representing and manipulating the continuous quantities that appear in these equations.

3 Model with piecewise linear value functions

In the general model, the time-value functions for each state can be arbitrarily complex and therefore impossible to represent exactly. In this section, we show how to restrict the model to allow value functions to be manipulated exactly. For each state, we represent its time-value function Vi(t) as a piecewise linear function of time.
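For a discrete REL duration distribution, the U(μ, t) computation above reduces to a probability-weighted sum over durations. The sketch below is illustrative only: the function name and the callable interfaces standing in for R and V are invented here, not taken from the paper.

```python
def expected_outcome_value(t, durations, probs, reward, next_value):
    """U(mu, t) for a REL outcome with a discrete duration pdf:
    U(mu, t) = sum_d P_mu(d) * [R(mu, t, d) + V(x'_mu, t + d)].
    `reward(t, d)` stands in for R(mu, t, d); `next_value(t')` stands in
    for the time-value function V(x'_mu, t') of the resulting state."""
    return sum(p * (reward(t, d) + next_value(t + d))
               for d, p in zip(durations, probs))
```

For a deterministic one-hour outcome like μ5 of Figure 1 (a single duration with probability 1 and zero reward), this reproduces the relation V2(t) = V3(t + 1) noted above.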
Vi(t) is thus represented by a data structure consisting of a set of distinct times called breakpoints and, for each pair of consecutive breakpoints, the equation of a line defined over the corresponding interval. Why are piecewise linear functions an appropriate representation? Linear time-value functions provide an exact representation for minimum-time problems, and piecewise linear time-value functions provide closure under the "max" operator. Rewards must be constrained to be piecewise linear functions of start and arrival times and action durations. We write R(μ, t, δ) = R_s(μ, t) + R_a(μ, t + δ) + R_d(μ, δ), where R_s, R_a, and R_d are piecewise linear functions of start time, arrival time, and duration, respectively. In addition, the dawdling reward K and the outcome probability function L must be piecewise constant. The most significant restriction needed for exact computation is that arrival and duration pdfs be discrete. This ensures closure under convolutions. In contrast, convolving a piecewise constant pdf (e.g., a uniform distribution) with a piecewise linear time-value function would in general produce a piecewise quadratic time-value function; further convolutions increase the degree with each iteration of value iteration. In Section 5 below we discuss how to relax this restriction. Given the restrictions just mentioned, all the operations used in the Bellman equations from Section 2, namely addition, multiplication, integration, supremum, maximization, and convolution, can be implemented exactly. The running time of each operation is linear in the representation size of the time-value functions involved. Seeding the process with an initial piecewise linear time-value function, we can carry out value iteration until convergence. In general, the running time from one iteration to the next can increase, as the number of linear "pieces" being manipulated grows; however, the representations grow only as complex as necessary to represent the value function V exactly.
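The breakpoint representation and closure under "max" can be sketched as follows. This is an illustrative sketch under an assumed representation (sorted (t_start, slope, intercept) triples), not the authors' implementation; the finite horizon is only needed to bound the crossing search in the last interval.

```python
def eval_pwl(pwl, t):
    """Evaluate a piecewise linear function at time t.
    pwl: list of (t_start, slope, intercept) triples; the piece starting
    at t_start is active until the next breakpoint."""
    _, a, b = max((s for s in pwl if s[0] <= t), key=lambda s: s[0])
    return a * t + b

def active_line(pwl, t):
    """Slope and intercept of the piece active at time t."""
    _, a, b = max((s for s in pwl if s[0] <= t), key=lambda s: s[0])
    return a, b

def pwl_max(f, g, horizon):
    """Exact pointwise max of two piecewise linear functions: merge the
    breakpoints, insert any crossing of the two active lines inside each
    interval, and keep the upper line on every resulting piece. The result
    is again piecewise linear, which is the closure property used above."""
    cuts = sorted({s[0] for s in f} | {s[0] for s in g})
    pts = list(cuts)
    for lo, hi in zip(cuts, cuts[1:] + [horizon]):
        a1, b1 = active_line(f, lo)
        a2, b2 = active_line(g, lo)
        if a1 != a2:
            x = (b2 - b1) / (a1 - a2)   # where the two active lines cross
            if lo < x < hi:
                pts.append(x)
    out = []
    for x in sorted(pts):
        a1, b1 = active_line(f, x)
        a2, b2 = active_line(g, x)
        probe = x + 1e-9                # compare just inside the new piece
        if a1 * probe + b1 >= a2 * probe + b2:
            out.append((x, a1, b1))
        else:
            out.append((x, a2, b2))
    return out
```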
4 Experimental domains

We present results on two domains: transportation planning and telescope scheduling. For comparison, we also implemented the natural alternative to the piecewise-linear technique: discretizing the time dimension and solving the problem as a standard MDP. To apply the MDP method, three additional inputs must be specified: an earliest starting time, latest finishing time, and bin width. Since this paper's focus is on exact computations, we chose a discretization level corresponding to the resolution necessary for exact solution by the MDP at its grid points. An advantage of the MDP is that it is by construction acyclic, so it can be solved by just one sweep of standard value iteration, working backwards in time. The TMDP's advantage is that it directly manipulates entire linear segments of the time-value functions.

4.1 Transportation planning

Figure 2 illustrates an example TMDP for optimizing a commute from San Francisco to NASA Ames. The 14 discrete states model both location and observed traffic conditions: shaded and unshaded circles represent heavy and light traffic, respectively. Observed transition times and traffic conditions are stochastic, and depend on both the time and traffic conditions at the originating location. At states 5, 6, 11, and 12, the "catch the train" action induces an absolute arrival distribution reflecting the train schedules. The domain objective is to arrive at Ames by 9:00am.

[Figure 2: The San Francisco to Ames commuting example.]

[Figure 3: The optimal Q-value functions and policy at state #10 ("US101 & Bayshore / heavy traffic"), comparing action 0 ("drive to Ames") and action 1 ("drive to Bayshore station") over time.]
We impose a linear penalty for arriving between 9 and noon, and an infinite penalty for arriving after noon. There are also linear penalties on the number of minutes spent driving in light traffic, driving in heavy traffic, and riding on the train; the coefficients of these penalties can be adjusted to reflect the commuter's tastes. Figure 3 presents the optimal time-value functions and policy for state #10, "US101 & Bayshore / heavy traffic." There are two actions from this state, corresponding to driving directly to Ames and driving to the train station to wait for the next train. Driving to the train station is preferred (has higher Q-value) at times that are close (but not too close!) to the departure times of the train. The full domain is solved in well under a second by both solvers (see Table 1). The optimal time-value functions in the solution comprise a total of 651 linear segments.

4.2 Telescope observation scheduling

Next, we consider the problem of scheduling astronomical targets for a telescope to maximize the scientific return of one night's viewing [1]. We are given N possible targets with associated coordinates, scientific value, and time window of visibility. Of course, we can view only one target at a time. We assume that the reward of an observation is proportional to the duration of viewing the target. Acquiring a target requires two steps of stochastic duration: moving the telescope, taking time roughly proportional to the distance traveled; and calibrating it on the new target. Previous approaches have dealt with this stochasticity heuristically, using a just-in-case scheduling approach [1]. Here, we model the stochasticity directly within the TMDP framework. The TMDP has N + 1 states (corresponding to the N observations and "off") and N actions per state (corresponding to what to observe next).
The dawdling reward rate K(x, t) encodes the scientific value of observing x at time t; that value is 0 at times when x is not visible. Relative duration distributions encode the inter-target distances and stochastic calibration times on each transition. We generated random target lists of sizes N = 10, 25, 50, and 100. Visibility windows were constrained to be within a 13-hour night, specified with 0.01-hour precision. Thus, representing the exact solution with a grid required 1301 time bins per state. Table 1 shows comparative results of the piecewise-linear and grid-based solvers.

Table 1: Summary of results. The three rightmost columns measure solution complexity in terms of the number of sweeps of value iteration before convergence; the number of distinct "pieces" or values in the optimal value function V*; and the running time. Running times are the median of five runs on an UltraSparc II (296MHz CPU, 256Mb RAM).

Domain          Solver          States    Sweeps   V* pieces   Runtime (secs)
SF-Commute      piecewise VI    14        13       651         0.2
                exact grid VI   5054      1        5054        0.1
Telescope-10    piecewise VI    11        5        186         0.1
                exact grid VI   14,311    1        14,311      1.3
Telescope-25    piecewise VI    26        6        716         1.8
                exact grid VI   33,826    1        33,826      7.4
Telescope-50    piecewise VI    51        6        1252        6.3
                exact grid VI   66,351    1        66,351      34.5
Telescope-100   piecewise VI    101       4        2711        17.9
                exact grid VI   131,300   1        131,300     154.1

5 Conclusions

In sum, we have presented a new stochastic model for time-dependent MDPs (TMDPs), discussed applications, and shown that dynamic programming with piecewise linear time-value functions can produce optimal policies efficiently. In initial comparisons with the alternative method of discretizing the time dimension, the TMDP approach was empirically faster, used significantly less memory, and solved the problem exactly over continuous t ∈ ℝ rather than just at grid points. In our exact computation model, the requirement of discrete duration distributions seems particularly restrictive.
We are currently investigating a way of using our exact algorithm to generate upper and lower bounds on the optimal solution for the case of arbitrary pdfs. This may allow the system to produce an optimal or provably near-optimal policy without having to identify all the twists and turns in the optimal time-value functions. Perhaps the most important advantage of the piecewise linear representation will turn out to be its amenability to bounding and approximation methods. We hope that such advances will enable the solution of city-sized route planning, more realistic telescope scheduling, and other practical time-dependent stochastic problems.

Acknowledgments

We thank Leslie Kaelbling and Rich Washington, and gratefully acknowledge support from NSF CAREER grant IRI-9702576.

References

[1] John Bresina, Mark Drummond, and Keith Swanson. Managing action duration uncertainty with just-in-case scheduling. In Decision-Theoretic Planning: Papers from the 1994 Spring AAAI Symposium, pages 19-26, Stanford, CA, 1994. AAAI Press, Menlo Park, California. ISBN 0-929280-70-9. URL http://ic-www.arc.nasa.gov/ic/projects/xfr/jic/jic.html.
[2] Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest. Introduction to Algorithms. The MIT Press, Cambridge, MA, 1990.
[3] Sven Koenig and Reid G. Simmons. How to make reactive planners risk-sensitive. In Proceedings of the 2nd International Conference on Artificial Intelligence Planning Systems, pages 293-304, 1994.
[4] Harold J. Kushner and Paul G. Dupuis. Numerical Methods for Stochastic Control Problems in Continuous Time. Springer-Verlag, New York, 1992.
[5] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, 1994.
[6] Michael P. Wellman, Kenneth Larson, Matthew Ford, and Peter R. Wurman. Path planning under time-dependent uncertainty. In Proceedings of the 11th Conference on Uncertainty in Artificial Intelligence, pages 532-539, 1995.
Kernel-Based Reinforcement Learning in Average-Cost Problems: An Application to Optimal Portfolio Choice Dirk Ormoneit Department of Computer Science Stanford University Stanford, CA 94305-9010 ormoneit@cs.stanford.edu Abstract Peter Glynn EESOR Stanford University Stanford, CA 94305-4023 Many approaches to reinforcement learning combine neural networks or other parametric function approximators with a form of temporal-difference learning to estimate the value function of a Markov Decision Process. A significant disadvantage of those procedures is that the resulting learning algorithms are frequently unstable. In this work, we present a new, kernel-based approach to reinforcement learning which overcomes this difficulty and provably converges to a unique solution. By contrast to existing algorithms, our method can also be shown to be consistent in the sense that its costs converge to the optimal costs asymptotically. Our focus is on learning in an average-cost framework and on a practical application to the optimal portfolio choice problem. 1 Introduction Temporal-difference (TD) learning has been applied successfully to many real-world applications that can be formulated as discrete state Markov Decision Processes (MDPs) with unknown transition probabilities. If the state variables are continuous or high-dimensional, the TD learning rule is typically combined with some sort of function approximator - e.g. a linear combination of feature vectors or a neural network - which may well lead to numerical instabilities (see, for example, [BM95, TR96]). Specifically, the algorithm may fail to converge under several circumstances which, in the authors' opinion, is one of the main obstacles to a more wide-spread use of reinforcement learning (RL) in industrial applications. As a remedy, we adopt a non-parametric perspective on reinforcement learning in this work and we suggest a new algorithm that always converges to a unique solution in a finite number of steps. 
In detail, we assign value function estimates to the states in a sample trajectory and we update these estimates in an iterative procedure. The updates are based on local averaging using a so-called "weighting kernel". Besides numerical stability, a second crucial advantage of this algorithm is that additional training data always improve the quality of the approximation and eventually lead to optimal performance - that is, our algorithm is consistent in a statistical sense. To the authors' best knowledge, this is the first reinforcement learning algorithm for which consistency has been demonstrated in a continuous space framework. Specifically, the recently advocated "direct" policy search or perturbation methods can by construction at most be optimal in a local sense [SMSM00, VRK00]. Relevant earlier work on local averaging in the context of reinforcement learning includes [Rus97] and [Gor99]. While these papers pursue related ideas, their approaches differ fundamentally from ours in the assumption that the transition probabilities of the MDP are known and can be used for learning. By contrast, kernel-based reinforcement learning only relies on sample trajectories of the MDP and it is therefore much more widely applicable in practice. While our method addresses both discounted- and average-cost problems, we focus on average-costs here and refer the reader interested in discounted-costs to [OS00]. For brevity, we also defer technical details and proofs to an accompanying paper [OG00]. Note that average-cost reinforcement learning has been discussed by several authors (e.g. [TR99]). The remainder of this work is organized as follows. In Section 2 we provide basic definitions and we describe the kernel-based reinforcement learning algorithm. Section 3 focuses on the practical implementation of the algorithm and on theoretical issues. Sections 4 and 5 present our experimental results and conclusions.
2 Kernel-Based Reinforcement Learning

Consider an MDP defined by a sequence of states X_t taking values in ℝ^d, a sequence of actions a_t taking values in A = {1, 2, ..., M}, and a family of transition kernels {P_a(x, B) | a ∈ A} characterizing the conditional probability of the event X_t ∈ B given X_{t−1} = x and a_{t−1} = a. The cost function c(x, a) represents an immediate penalty for applying action a in state x. Strategies, policies, or controls are understood as mappings of the form μ : ℝ^d → A, and we let P_{x,μ} denote the probability distribution governing the Markov chain starting from X_0 = x associated with the policy μ. Several regularity conditions are listed in detail in [OG00]. Our goal is to identify policies that are optimal in that they minimize the long-run average-cost η_μ ≡ lim_{T→∞} E_{x,μ}[ (1/T) Σ_{t=0}^{T−1} c(X_t, μ(X_t)) ]. An optimal policy, μ*, can be characterized as a solution to the Average-Cost Optimality Equation (ACOE):

η* + h*(x) = min_a { c(x, a) + (Γ_a h*)(x) },      (1)
μ*(x) = argmin_a { c(x, a) + (Γ_a h*)(x) },        (2)

where η* is the minimum average-cost and h*(x) has an interpretation as the differential value of starting in x as opposed to drawing a random starting position from the stationary distribution under μ*. Γ_a denotes the conditional expectation operator (Γ_a h)(x) ≡ E_{x,a}[h(X_1)], which is assumed to be unknown so that (1) cannot be solved explicitly. Instead, we simulate the MDP using a fixed proposal strategy μ̂ in reinforcement learning to generate a sample trajectory as training data. Formally, let S ≡ {z_0, ..., z_m} denote such an m-step sample trajectory and let A ≡ {a_0, ..., a_{m−1} | a_s = μ̂(z_s)} and C ≡ {c(z_s, a_s) | 0 ≤ s < m} be the sequences of actions and costs associated with S. Then our objective can be reformulated as the approximation of μ* based on information in S, A, and C.
In detail, we will construct an approximate expectation operator, Γ̂_{m,a}, based on the training data, S, and use this approximation in place of the true operator Γ_a in this work. Formally substituting Γ̂_{m,a} for Γ_a in (1) and (2) gives the Approximate Average-Cost Optimality Equation (AACOE):

η̂_m + ĥ_m(x) = min_a { c(x, a) + (Γ̂_{m,a} ĥ_m)(x) },      (3)
μ̂_m(x) = argmin_a { c(x, a) + (Γ̂_{m,a} ĥ_m)(x) }.          (4)

Note that, if the solutions η̂_m and ĥ_m to (3) are well-defined, they can be interpreted as statistical estimates of η* and h* in equation (1). However, η̂_m and ĥ_m need not exist unless Γ̂_{m,a} is defined appropriately. We therefore employ local averaging in this work to construct Γ̂_{m,a} in a way that guarantees the existence of a unique fixed point of (3). For the derivation of the local averaging operator, note that the task of approximating (Γ_a h)(x) = E_{x,a}[h(X_1)] can be interpreted alternatively as a regression of the "target" variable h(X_1) onto the "input" X_0 = x. So-called kernel-smoothers address regression tasks of this sort by locally averaging the target values in a small neighborhood of x. This gives the following approximation:

(Γ̂_{m,a} h)(x) ≡ Σ_{s=0}^{m−1} k_{m,a}(z_s, x) h(z_{s+1}).      (5)

In detail, we employ the weighting function or weighting kernel k_{m,a}(z_s, x) in (5) to determine the weights that are used for averaging. Here k_{m,a}(z_s, x) is a multivariate Gaussian, normalized so as to satisfy the constraints k_{m,a}(z_s, x) > 0 if a_s = a, k_{m,a}(z_s, x) = 0 if a_s ≠ a, and Σ_{s=0}^{m−1} k_{m,a}(z_s, x) = 1. Intuitively, (5) assesses the future differential cost of applying action a in state x by looking at all times in the training data where a has been applied previously in a state similar to x, and by averaging the current differential value estimates at the outcomes of these previous transitions. Because the weights k_{m,a}(z_s, x) are related inversely to the distance ||z_s − x||, transitions originating in the neighborhood of x are most influential in this averaging procedure.
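A one-dimensional sketch of the weighting kernel and of the local-averaging operator (5): the Gaussian form and the normalization follow the constraints stated above, but the function names and the scalar-state simplification are our own, not the paper's code.

```python
import math

def kernel_weights(states, actions, x, a, b=1.0):
    """k_{m,a}(z_s, x): a Gaussian weight in the distance |z_s - x|,
    forced to zero wherever a_s != a, then normalized to sum to one
    (a Nadaraya-Watson style smoother). b is the bandwidth."""
    raw = [math.exp(-(z - x) ** 2 / (2.0 * b * b)) if act == a else 0.0
           for z, act in zip(states, actions)]
    total = sum(raw)
    return [w / total for w in raw]

def approx_expectation(weights, h_next):
    """(Gamma_hat_{m,a} h)(x) = sum_s k_{m,a}(z_s, x) * h(z_{s+1})."""
    return sum(w * h for w, h in zip(weights, h_next))
```

Only samples taken under action a receive positive weight, and samples far from the query x contribute almost nothing, matching the intuition given above.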
A more statistical interpretation of (5) would suggest that ideally we could simply generate a large number of independent samples from the conditional distribution P_{x,a} and estimate E_{x,a}[h(X_1)] using Monte-Carlo approximation. Practically speaking, this approach is clearly infeasible because in order to assess the value of the simulated successor states we would need to sample recursively, thereby incurring exponentially increasing computational complexity. A more realistic alternative is to estimate Γ̂_{m,a} h(x) as a local average of the rewards that were generated in previous transitions originating in the neighborhood of x, where the membership of an observation z_s in the neighborhood of x is quantified using k_{m,a}(z_s, x). Here the regularization parameter b determines the width of the Gaussian kernel and thereby also the size of the neighborhood used for averaging. Depending on the application, it may be advisable to choose b either fixed or as a location-dependent function of the training data.

3 "Self-Approximating Property"

As we illustrated above, kernel-based reinforcement learning formally amounts to substituting the approximate expectation operator Γ̂_{m,a} for Γ_a and then applying dynamic programming to derive solutions to the approximate optimality equation (3). In this section, we outline a practical implementation of this approach and we present some of our theoretical results. In particular, we consider the relative value iteration algorithm for average-cost MDPs that is described, for example, in [Ber95]. This procedure iterates a variant of equation (1) to generate a sequence of value function estimates, ĥ^k, that eventually converge to a solution of (1) (or (3), respectively). An important practical problem in continuous state MDPs is that the intermediate functions ĥ^k need to be represented explicitly on a computer. This requires some form of function approximation which may be numerically undesirable and computationally burdensome in practice.
In the case of kernel-based reinforcement learning, the so-called "self-approximating" property allows for a much more efficient implementation in vector format (see also [Rus97]). Specifically, because our definition of Γ̂_{m,a} h in (5) only depends on the values of h at the states in S, the AACOE (3) can be solved in two steps:

η̂_m + ĥ_m(z) = min_a { c(z, a) + (Γ̂_{m,a} ĥ_m)(z) }  for all z ∈ S,      (7)
ĥ_m(x) = min_a { c(x, a) + (Γ̂_{m,a} ĥ_m)(x) } − η̂_m  for arbitrary x.    (8)

In other words, we first determine the values of ĥ_m at the points in S using (7) and then compute the values at new locations x in a second step using (8). Note that (7) is a finite equation system by contrast to (3). By introducing the vectors and matrices ĥ(i) ≡ ĥ_m(z_i), ĉ_a(i) ≡ c(z_i, a), Φ_a(i, j) ≡ k_{m,a}(z_j, z_i) for i = 1, ..., m and j = 1, ..., m, the relative value iteration algorithm can thus be written conveniently as (for details, see [Ber95, OG00]):

ĥ^{k+1} := ĥ^k_new − ĥ^k_new(1) · 1,  where  ĥ^k_new = min_a { ĉ_a + Φ_a ĥ^k }.      (9)

Hence we end up with an algorithm that is analogous to value iteration except that we use the weighting matrix Φ_a in place of the usual transition probabilities and ĥ^k and ĉ_a are vectors of points in the training set S as opposed to vectors of states. Intuitively, (9) assigns value estimates to the states in the sample trajectory and updates these estimates in an iterative fashion. Here the update of each state is based on a local average over the costs and values of the samples in its neighborhood. Since Φ_a(i, j) > 0 and Σ_{j=1}^m Φ_a(i, j) = 1 we can further exploit the analogy between (9) and the usual value iteration in an "artificial" MDP with transition probabilities Φ_a to prove the following theorem:

Theorem 1 The relative value iteration (9) converges to a unique fixed point.

For details, the reader is referred to [OS00, OG00]. Note that Theorem 1 illustrates a rather unique property of kernel-based reinforcement learning by comparison to alternative approaches.
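Update (9) can be sketched with plain lists as follows, assuming each Φ_a is row-stochastic. Subtracting the first component on every sweep keeps the iterates bounded, and that component serves as the average-cost estimate η̂_m; this is an illustrative sketch, not the authors' code.

```python
def relative_value_iteration(Phi, c, tol=1e-10, max_iter=1000):
    """Phi[a]: m x m weighting matrix for action a (rows sum to one);
    c[a]: cost vector over the sample states. Iterates
    h_new = min_a (c_a + Phi_a h), then subtracts h_new[0]."""
    m = len(c[0])
    h = [0.0] * m
    eta = 0.0
    for _ in range(max_iter):
        h_new = [min(c[a][i] + sum(Phi[a][i][j] * h[j] for j in range(m))
                     for a in range(len(c)))
                 for i in range(m)]
        eta = h_new[0]                       # average-cost estimate
        h_next = [v - eta for v in h_new]    # differential values
        if max(abs(u - v) for u, v in zip(h_next, h)) < tol:
            break
        h = h_next
    return eta, h_next
```

On a toy two-sample, one-action problem with uniform weights, the iteration converges in a couple of sweeps to the exact average cost and differential values.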
In addition, we can show that - under suitable regularity conditions - kernel-based reinforcement learning is consistent in the following sense:

Theorem 2 The approximate optimal cost η̂_m converges to the true optimal cost η* in the sense that E_{x_0,μ̂} |η̂_m − η*| → 0 as m → ∞. Also, the true cost of the approximate strategy μ̂_m converges to the optimal cost: E_{x_0,μ̂} |η_{μ̂_m} − η*| → 0 as m → ∞.

Hence μ̂_m performs as well as μ* asymptotically and we can also predict the optimal cost η* using η̂_m. From a practical standpoint, Theorem 2 asserts that the performance of approximate dynamic programming can be improved by increasing the amount of training data. Note, however, that the computational complexity of approximate dynamic programming depends on the sample size m. In detail, the complexity of a single application of (9) is O(m²) in a naive implementation and O(m log m) in a more elaborate nearest neighbor approach. This complexity issue prevents the use of very large data sets using the "exact" algorithm described above. As in the case of parametric reinforcement learning, we can of course restrict ourselves to a fixed amount of computational resources simply by discarding observations from the training data or by summarizing clusters of data using "sufficient statistics". Note that the convergence property in Theorem 1 remains unaffected by such an approximation.

4 Optimal Portfolio Choice

In this section, we describe the practical application of kernel-based reinforcement learning to an investment problem where an agent in a financial market decides whether to buy or sell stocks depending on the market situation. In the finance and economics literature, this task is known as "optimal portfolio choice" and has created an enormous literature over the past decades. Formally, let S_t symbolize the value of the stock at time t and let the investor choose her portfolio a_t from the set A ≡ {0, 0.1, 0.2, ...
, 1}, corresponding to the relative amount of wealth invested in stocks as opposed to an alternative riskless asset. At time t + 1, the stock price changes from S_t to S_{t+1}, and the portfolio of the investor participates in the price movement depending on her investment choice. Formally, if her wealth at time t is W_t, it becomes W_{t+1} = (1 + a_t (S_{t+1} − S_t)/S_t) W_t at time t + 1. To render this simulation as realistic as possible, our investor is assumed to be risk-averse in that her fear of losses dominates her appreciation of gains of equal magnitude. A standard way to express these preferences formally is to aim at maximizing the expectation of a concave "utility function", U(z), of the final wealth W_T. Using the choice U(z) = log(z), the investor's utility can be written as U(W_T) = Σ_{t=0}^{T−1} log(1 + a_t (S_{t+1} − S_t)/S_t). Hence utilities are additive over time, and the objective of maximizing E[U(W_T)] can be stated in an average-cost framework where c(x, a) = −E_{x,a}[log(1 + a (S_{t+1} − S_t)/S_t)]. We present results using simulated and real stock prices. With regard to the simulated data, we adopt the common assumption in the finance literature that stock prices are driven by an Ito process with stochastic, mean-reverting volatility:

dS_t = μ S_t dt + √(v_t) S_t dB_t,
dv_t = φ(v̄ − v_t) dt + ρ √(v_t) dB̃_t.

Here v_t is the time-varying volatility, and B_t and B̃_t are independent Brownian motions. The parameters of the model are μ = 1.03, v̄ = 0.3, φ = 10.0, and ρ = 5.0. We simulated daily data for the period of 13 years using the usual Euler approximation of these equations. The resulting stock prices, volatilities, and returns are shown in Figure 1.

[Figure 1: The simulated time-series of stock prices (left), volatility (middle), and daily returns (right; r_t ≡ log(S_t/S_{t−1})) over a period of one year.]

Next, we grouped the simulated time series into 10 sets of training and test data such that the last 10 years are used as 10 test sets and the three years preceding each test year are used as training data.
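A minimal Euler discretization of these stochastic-volatility dynamics can be sketched with the standard library alone. The parameter defaults below are illustrative placeholders rather than the paper's settings, and flooring v at a small positive value (so that √v stays defined after a large negative shock) is our own safeguard.

```python
import math
import random

def simulate_paths(mu=0.03, vbar=0.3, phi=10.0, rho=1.0,
                   s0=100.0, v0=0.3, horizon=1.0, n=250, seed=0):
    """Euler scheme for dS = mu*S dt + sqrt(v)*S dB and
    dv = phi*(vbar - v) dt + rho*sqrt(v) dB~, with independent
    Brownian increments of variance dt."""
    rng = random.Random(seed)
    dt = horizon / n
    s, v = [s0], [v0]
    for _ in range(n):
        dB = rng.gauss(0.0, math.sqrt(dt))
        dBt = rng.gauss(0.0, math.sqrt(dt))
        v.append(max(v[-1] + phi * (vbar - v[-1]) * dt
                     + rho * math.sqrt(v[-1]) * dBt, 1e-8))
        s.append(s[-1] * (1.0 + mu * dt + math.sqrt(v[-2]) * dB))
    return s, v
```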
Table 1 reports the training and test performances on each of these experiments using kernel-based reinforcement learning and a benchmark buy & hold strategy. Performance is measured using the Sharpe-ratio, which is a standard measure of risk-adjusted investment performance. In detail, the Sharpe-ratio is defined as SR = log(W_T/W_0)/σ̂, where σ̂ is the standard deviation of log(W_t/W_{t−1}) over time. Note that large values indicate good risk-adjusted performance in years of positive growth, whereas negative values cannot readily be interpreted. We used the root of the volatility (standardized to zero mean and unit variance) as input information and determined a suitable choice for the bandwidth parameter (b = 1) experimentally. Our results in Table 1 demonstrate that reinforcement learning dominates buy & hold in eight out of ten years on the training set and in all seven years with positive growth on the test set.

Table 1: Investment performance on the simulated data (initial wealth W_0 = 100).

        Reinforcement Learning      Buy & Hold
Year    Training      Test          Training      Test
4       0.129753      0.096555      0.058819      0.052533
5       0.125742      0.107905      0.043107      0.081395
6       0.100265     -0.074588      0.053755     -0.064981
7       0.059405      0.201186      0.018023      0.172968
8       0.082622      0.227161      0.041410      0.197319
9       0.077856      0.098172      0.074632      0.092312
10      0.136525      0.199804      0.137416      0.194993
11      0.145992      0.121507      0.147065      0.118656
12      0.126052     -0.018110      0.125978     -0.017869
13      0.127900     -0.022748      0.077196     -0.029886

Table 2 shows the results of an experiment where we replaced the artificial time series with eight years of daily German stock index data (DAX index, 1993-2000). We used the years 1996-2000 as test data and the three years preceding each test year for training. As the model input, we computed an approximation of the (root) volatility using a geometric average of historical returns.
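The Sharpe-ratio SR = log(W_T/W_0)/σ̂ defined above can be computed from a wealth path as follows (a small helper sketch; the population standard deviation of the log-returns is assumed for σ̂):

```python
import math

def sharpe_ratio(wealth):
    """SR = log(W_T / W_0) / sigma_hat, where sigma_hat is the standard
    deviation of the per-period log-returns log(W_t / W_{t-1})."""
    log_returns = [math.log(b / a) for a, b in zip(wealth, wealth[1:])]
    mean = sum(log_returns) / len(log_returns)
    sigma = math.sqrt(sum((r - mean) ** 2 for r in log_returns)
                      / len(log_returns))
    return math.log(wealth[-1] / wealth[0]) / sigma
```

A constant-growth path has zero return dispersion, so the measure is only meaningful when the wealth path actually fluctuates.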
Note that the training performance of reinforcement learning always dominates the buy & hold strategy, and the test results are also superior to the benchmark except in the year 2000.

Table 2: Investment performance on the DAX data.

        Reinforcement Learning      Buy & Hold
Year    Training      Test          Training      Test
1996    0.083925      0.173373      0.038818      0.120107
1997    0.119875      0.121583      0.119875      0.096369
1998    0.123927      0.079584      0.096183      0.035204
1999    0.141242      0.094807      0.035137      0.090541
2000    0.085236     -0.007878      0.081271      0.148203

5 Conclusions

We presented a new, kernel-based reinforcement learning method that overcomes several important shortcomings of temporal-difference learning in continuous-state domains. In particular, we demonstrated that the new approach always converges to a unique approximation of the optimal policy and that the quality of this approximation improves with the amount of training data. Also, we described a financial application where our method consistently outperformed a benchmark model in an artificial and a real market scenario. While the optimal portfolio choice problem is relatively simple, it provides an impressive proof of concept by demonstrating the practical feasibility of our method. Efficient implementations of local averaging for large-scale problems have been discussed in the data mining community. Our work makes these methods applicable to reinforcement learning, which should be valuable to meet the real-time and dimensionality constraints of real-world problems.

Acknowledgements. The work of Dirk Ormoneit was partly supported by the Deutsche Forschungsgemeinschaft. Saunak Sen helped with valuable discussions and suggestions.

References

[Ber95] D. P. Bertsekas. Dynamic Programming and Optimal Control, volume 1 and 2. Athena Scientific, 1995.
[BM95] J. A. Boyan and A. W. Moore. Generalization in reinforcement learning: Safely approximating the value function. In NIPS 7, 1995.
[Gor99] G. Gordon.
Approximate Solutions to Markov Decision Processes. PhD thesis, Computer Science Department, Carnegie Mellon University, 1999.
[OG00] D. Ormoneit and P. Glynn. Kernel-based reinforcement learning in average-cost problems. Working paper, Stanford University. In preparation.
[OS00] D. Ormoneit and S. Sen. Kernel-based reinforcement learning. Machine Learning, 2001. To appear.
[Rus97] J. Rust. Using randomization to break the curse of dimensionality. Econometrica, 65(3):487-516, 1997.
[SMSM00] R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In NIPS 12, 2000.
[TR96] J. N. Tsitsiklis and B. Van Roy. Feature-based methods for large-scale dynamic programming. Machine Learning, 22:59-94, 1996.
[TR99] J. N. Tsitsiklis and B. Van Roy. Average cost temporal-difference learning. Automatica, 35(11):1799-1808, 1999.
[VRK00] V. R. Konda and J. N. Tsitsiklis. Actor-critic algorithms. In NIPS 12, 2000.
A New Approximate Maximal Margin Classification Algorithm

Claudio Gentile
DSI, Universita' di Milano, Via Comelico 39, 20135 Milano, Italy
gentile@dsi.unimi.it

Abstract

A new incremental learning algorithm is described which approximates the maximal margin hyperplane w.r.t. norm p ≥ 2 for a set of linearly separable data. Our algorithm, called ALMAp (Approximate Large Margin algorithm w.r.t. norm p), takes O((p − 1) X²/(α² γ²)) corrections to separate the data with p-norm margin larger than (1 − α)γ, where γ is the p-norm margin of the data and X is a bound on the p-norm of the instances. ALMAp avoids quadratic (or higher-order) programming methods. It is very easy to implement and is as fast as on-line algorithms, such as Rosenblatt's perceptron. We report on some experiments comparing ALMAp to two incremental algorithms: Perceptron and Li and Long's ROMMA. Our algorithm seems to perform considerably better than both. The accuracy levels achieved by ALMAp are slightly inferior to those obtained by Support Vector Machines (SVMs). On the other hand, ALMAp is quite faster and easier to implement than standard SVMs training algorithms.

1 Introduction

A great deal of effort has been devoted in recent years to the study of maximal margin classifiers. This interest is largely due to their remarkable generalization ability. In this paper we focus on special maximal margin classifiers, i.e., on maximal margin hyperplanes. Briefly, given a set of linearly separable data, the maximal margin hyperplane classifies all the data correctly and maximizes the minimal distance between the data and the hyperplane. If the Euclidean norm is used to measure the distance then computing the maximal margin hyperplane corresponds to the, by now classical, Support Vector Machines (SVMs) training problem [3]. This task is naturally formulated as a quadratic programming problem.
If an arbitrary norm p is used then this task turns into a more general mathematical programming problem (see, e.g., [15, 16]) to be solved by general purpose (and computationally intensive) optimization methods. This more general task arises in feature selection problems when the target to be learned is sparse. A major theme of this paper is to devise simple and efficient algorithms to solve the maximal margin hyperplane problem. The paper has two main contributions. The first contribution is a new efficient algorithm which approximates the maximal margin hyperplane w.r.t. norm p to any given accuracy. We call this algorithm ALMA_p (Approximate Large Margin algorithm w.r.t. norm p). ALMA_p is naturally viewed as an on-line algorithm, i.e., as an algorithm which processes the examples one at a time. A distinguishing feature of ALMA_p is that its relevant parameters (such as the learning rate) are dynamically adjusted over time. In this sense, ALMA_p is a refinement of the on-line algorithms recently introduced in [2]. Moreover, ALMA_2 (i.e., ALMA_p with p = 2) is a perceptron-like algorithm; the operations it performs can be expressed as dot products, so that we can replace them by kernel function evaluations. ALMA_2 approximately solves the SVM training problem, avoiding quadratic programming. As far as theoretical performance is concerned, ALMA_2 achieves essentially the same bound on the number of corrections as the one obtained by a version of Li and Long's ROMMA algorithm [12], though the two algorithms are different.¹ In the case that p is logarithmic in the dimension of the instance space (as in [6]) ALMA_p yields results which are similar to those obtained by estimators based on linear programming (see [1, Chapter 14]). The second contribution of this paper is an experimental investigation of ALMA_2 on the problem of handwritten digit recognition. For the sake of comparison, we followed the experimental setting described in [3, 4, 12].
We ran ALMA_2 with polynomial kernels, using both the last and the voted hypotheses (as in [4]), and we compared our results to those described in [3, 4, 12]. We found that voted ALMA_2 generalizes considerably better than both ROMMA and the voted Perceptron algorithm, but slightly worse than standard SVMs. On the other hand, ALMA_2 is much faster and easier to implement than standard SVM training algorithms. For related work on SVMs (with p = 2), see Friess et al. [5], Platt [17] and references therein. The next section defines our major notation and recalls some basic preliminaries. In Section 3 we describe ALMA_p and claim its theoretical properties. Section 4 describes our experimental comparison. Concluding remarks are given in the last section.

2 Preliminaries and notation

An example is a pair (x, y), where x is an instance belonging to R^n and y ∈ {−1, +1} is the binary label associated with x. A weight vector w = (w_1, ..., w_n) ∈ R^n represents an n-dimensional hyperplane passing through the origin. We associate with w a linear threshold classifier with threshold zero: w : x → sign(w · x) = 1 if w · x ≥ 0 and = −1 otherwise. When p ≥ 1 we denote by ||w||_p the p-norm of w, i.e., ||w||_p = (Σ_{i=1}^n |w_i|^p)^{1/p} (also, ||w||_∞ = lim_{p→∞} (Σ_{i=1}^n |w_i|^p)^{1/p} = max_i |w_i|). We say that q is dual to p if 1/p + 1/q = 1 holds. For instance, the 1-norm is dual to the ∞-norm and the 2-norm is self-dual. In this paper we assume that p and q are some pair of dual values, with p ≥ 2. We use p-norms for instances and q-norms for weight vectors. The (normalized) p-norm margin (or just the margin, if p is clear from the surrounding context) of a hyperplane w with ||w||_q ≤ 1 on example (x, y) is defined as y w · x / ||x||_p. If this margin is positive² then w classifies (x, y) correctly. Notice that from Hölder's inequality we have y w · x ≤ |w · x| ≤ ||w||_q ||x||_p ≤ ||x||_p. Hence y w · x / ||x||_p ∈ [−1, 1]. Our goal is to approximate the maximal p-norm margin hyperplane for a set of examples (the training set).
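As a concrete numerical check (an illustration added here, not part of the paper; `p_norm` and `margin` are hypothetical helper names), the normalized p-norm margin and the Hölder bound can be verified directly:

```python
import numpy as np

def p_norm(v, p):
    # ||v||_p = (sum_i |v_i|^p)^(1/p)
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

def margin(w, x, y, p):
    # Normalized p-norm margin y (w . x) / ||x||_p; by Holder's inequality
    # it lies in [-1, 1] whenever ||w||_q <= 1 with 1/p + 1/q = 1.
    return y * np.dot(w, x) / p_norm(x, p)
```

For p = 2 and w = (0.6, 0.8) (so ||w||_2 = 1), the example x = (3, 4) with y = 1 attains the maximal margin 1.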
For this purpose, we use terminology and analytical tools from the on-line learning literature. We focus on an on-line learning model introduced by Littlestone [14]. An on-line learning algorithm processes the examples one at a time in trials. In each trial, the algorithm observes an instance x and is required to predict the label y associated with x. We denote the prediction by ŷ. The prediction ŷ combines the current instance x with the current internal state of the algorithm. In our case this state is essentially a weight vector w, representing the algorithm's current hypothesis about the maximal margin hyperplane. After the prediction is made, the true value of y is revealed and the algorithm suffers a loss, measuring the "distance" between the prediction ŷ and the label y. Then the algorithm updates its internal state. In this paper the prediction ŷ can be seen as the linear function ŷ = w · x and the loss is a margin-based 0-1 loss: the loss of w on example (x, y) is 1 if y w · x / ||x||_p ≤ (1 − α)γ and 0 otherwise, for suitably chosen α, γ ∈ [0, 1]. Therefore, if ||w||_q ≤ 1 then the algorithm incurs positive loss if and only if w classifies (x, y) with (p-norm) margin not larger than (1 − α)γ. The on-line algorithms are typically loss driven, i.e., they update their internal state only in those trials where they suffer a positive loss. We call a correction a trial where this occurs. In the special case when α = 1 a correction is a mistaken trial and a loss driven algorithm turns into a mistake driven [14] algorithm. Throughout the paper we use the subscript t for x and y to denote the instance and the label processed in trial t.

¹In fact, algorithms such as ROMMA and the one contained in Kowalczyk [10] have been specifically designed for the euclidean norm. Any straightforward extension of these algorithms to a general norm p seems to require numerical methods.
²We assume that w · x = 0 yields a wrong classification, independent of y.
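The margin-based 0-1 loss just described can be written down directly; a small sketch (an added illustration, not the paper's code):

```python
import numpy as np

def margin_loss(w, x, y, gamma, alpha, p=2.0):
    # 1 if y (w . x)/||x||_p <= (1 - alpha) * gamma, else 0.
    # With alpha = 1 this reduces to the plain mistake indicator
    # (the mistake driven case).
    xnorm = np.sum(np.abs(x) ** p) ** (1.0 / p)
    return 1 if y * np.dot(w, x) / xnorm <= (1.0 - alpha) * gamma else 0
```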
We use the subscript k for those variables, such as the algorithm's weight vector w, which are updated only within a correction. In particular, w_k denotes the algorithm's weight vector after k − 1 corrections (so that w_1 is the initial weight vector). The goal of the on-line algorithm is to bound the cumulative loss (i.e., the total number of corrections or mistakes) it suffers on an arbitrary sequence of examples S = ((x_1, y_1), ..., (x_T, y_T)). If S is linearly separable with margin γ and we pick α < 1 then a bounded loss clearly implies convergence in a finite number of steps to (an approximation of) the maximal margin hyperplane for S.

3 The approximate large margin algorithm ALMA_p

ALMA_p is a large margin variant of the p-norm Perceptron algorithm³ [8, 6], and is similar in spirit to the variable learning rate algorithms introduced in [2]. We analyze ALMA_p by giving upper bounds on the number of corrections. The main theoretical result of this paper is Theorem 1 below. This theorem has two parts. Part 1 bounds the number of corrections in the linearly separable case only. In the special case when p = 2 this bound is very similar to the one proven by Li and Long for a version of ROMMA [12]. Part 2 holds for an arbitrary sequence of examples. A bound which is very close to the one proven in [8, 6] for the (constant learning rate) p-norm Perceptron algorithm is obtained as a special case. Despite this theoretical similarity, the experiments we report in Section 4 show that using our margin-sensitive variable learning rate algorithm yields a clear increase in performance. In order to define our algorithm, we need to recall the following mapping f from [6] (a p-indexing for f is understood): f : R^n → R^n, f = (f_1, ..., f_n), where f_i(w) = sign(w_i) |w_i|^{q−1} / ||w||_q^{q−2}, w = (w_1, ..., w_n) ∈ R^n. Observe that p = 2 yields the identity function. The (unique) inverse f⁻¹ of f is [6] f⁻¹ : R^n → R^n, f⁻¹ = (f_1^{−1}, ...
, f_n^{−1}), where f_i^{−1}(θ) = sign(θ_i) |θ_i|^{p−1} / ||θ||_p^{p−2}, θ = (θ_1, ..., θ_n) ∈ R^n; namely, f⁻¹ is obtained from f by replacing q with p.

³The p-norm Perceptron algorithm is a generalization of the classical Perceptron algorithm [18]: the p-norm Perceptron is actually the Perceptron when p = 2.

Algorithm ALMA_p(α; B, C), with α ∈ (0, 1], B, C > 0.
Initialization: initial weight vector w_1 = 0; k = 1.
For t = 1, ..., T do:
  Get example (x_t, y_t) ∈ R^n × {−1, +1} and update weights as follows:
  Set γ_k = B √(p−1) / √k;
  If y_t w_k · x_t / ||x_t||_p ≤ (1 − α) γ_k then:
    η_k = C / (√(p−1) ||x_t||_p √k),
    w'_k = f⁻¹(f(w_k) + η_k y_t x_t),
    w_{k+1} = w'_k / max{1, ||w'_k||_q},
    k ← k + 1.

Figure 1: The approximate large margin algorithm ALMA_p.

ALMA_p is described in Figure 1. The algorithm is parameterized by α ∈ (0, 1], B > 0 and C > 0. The parameter α measures the degree of approximation to the optimal margin hyperplane, while B and C might be considered as tuning parameters. Their use will be made clear in Theorem 1 below. Let W = {w ∈ R^n : ||w||_q ≤ 1}. ALMA_p maintains a vector w_k of n weights in W. It starts from w_1 = 0. At time t the algorithm processes the example (x_t, y_t). If the current weight vector w_k classifies (x_t, y_t) with margin not larger than (1 − α)γ_k then a correction occurs. The update rule⁴ has two main steps. The first step gives w'_k through the classical update of a (p-norm) perceptron-like algorithm (notice, however, that the learning rate η_k scales with k, the number of corrections occurred so far). The second step gives w_{k+1} by projecting w'_k onto W: w_{k+1} = w'_k / ||w'_k||_q if ||w'_k||_q > 1 and w_{k+1} = w'_k otherwise. The projection step makes the new weight vector w_{k+1} belong to W. The following theorem, whose proof is omitted due to space limitations, has two parts. In part 1 we treat the separable case. Here we claim that a special choice of the parameters B and C gives rise to an algorithm which approximates the maximal margin hyperplane to any given accuracy α.
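The algorithm of Figure 1 can be sketched end to end. The following is an added illustration (not the authors' code), using the link functions f and f⁻¹ from [6] and the Theorem 1 choices B = √8/α, C = √2:

```python
import numpy as np

def f(w, p):
    # f_i(w) = sign(w_i) |w_i|^{q-1} / ||w||_q^{q-2}, with q dual to p.
    # For p = 2 this is the identity.
    q = p / (p - 1.0)
    if not np.any(w):
        return np.zeros_like(w)
    nq = np.sum(np.abs(w) ** q) ** (1.0 / q)
    return np.sign(w) * np.abs(w) ** (q - 1.0) / nq ** (q - 2.0)

def f_inv(theta, p):
    # f^{-1} is obtained from f by replacing q with p.
    if not np.any(theta):
        return np.zeros_like(theta)
    pn = np.sum(np.abs(theta) ** p) ** (1.0 / p)
    return np.sign(theta) * np.abs(theta) ** (p - 1.0) / pn ** (p - 2.0)

def alma_p(X, Y, p=2.0, alpha=1.0, epochs=3):
    # Figure 1 with B = sqrt(8)/alpha and C = sqrt(2) (Theorem 1, part 1).
    B, C = np.sqrt(8.0) / alpha, np.sqrt(2.0)
    q = p / (p - 1.0)
    w = np.zeros(X.shape[1])
    k = 1
    for _ in range(epochs):
        for x, y in zip(X, Y):
            xnorm = np.sum(np.abs(x) ** p) ** (1.0 / p)
            if xnorm == 0.0:          # degenerate case: no update
                continue
            gamma_k = B * np.sqrt(p - 1.0) / np.sqrt(k)
            if y * np.dot(w, x) / xnorm <= (1.0 - alpha) * gamma_k:
                eta_k = C / (np.sqrt(p - 1.0) * xnorm * np.sqrt(k))
                w_prime = f_inv(f(w, p) + eta_k * y * x, p)
                wq = np.sum(np.abs(w_prime) ** q) ** (1.0 / q)
                w = w_prime / max(1.0, wq)   # projection onto ||w||_q <= 1
                k += 1
    return w
```

On a small linearly separable set, a few epochs of corrections suffice to classify all examples correctly while keeping ||w||_q ≤ 1.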
In part 2 we claim that if a suitable relationship between the parameters B and C is satisfied then a bound on the number of corrections can be proven in the general (nonseparable) case. The bound of part 2 is in terms of the margin-based quantity D_γ(u; (x, y)) = max{0, γ − y u · x / ||x||_p}, γ > 0. (Here a p-indexing for D_γ is understood.) D_γ is called deviation in [4] and linear hinge loss in [7]. Notice that B and C in part 1 do not meet the requirements given in part 2. On the other hand, in the separable case B and C chosen in part 2 do not yield a hyperplane which is arbitrarily (up to a small α) close to the maximal margin hyperplane.

Theorem 1 Let W = {w ∈ R^n : ||w||_q ≤ 1}, S = ((x_1, y_1), ..., (x_T, y_T)) ∈ (R^n × {−1, +1})^T, and M be the set of corrections of ALMA_p(α; B, C) running on S (i.e., the set of trials t such that y_t w_k · x_t / ||x_t||_p ≤ (1 − α)γ_k).

1. Let γ* = max_{w ∈ W} min_{t=1,...,T} y_t w · x_t / ||x_t||_p > 0. Then ALMA_p(α; √8/α, √2) achieves the following bound⁵ on |M|:

|M| ≤ (2(p − 1)/(γ*)²) (2/α − 1)² + 8/α − 4 = O((p − 1)/(α² (γ*)²)).   (1)

⁴In the degenerate case that x_t = 0 no update takes place.
⁵We did not optimize the constants here.

Furthermore, throughout the run of ALMA_p(α; √8/α, √2) we have γ_k ≥ γ*. Hence (1) is also an upper bound on the number of trials t such that y_t w_k · x_t / ||x_t||_p ≤ (1 − α)γ*.

2. Let the parameters B and C in Figure 1 satisfy the equation⁶ C² + 2(1 − α)BC = 1. Then for any u ∈ W, ALMA_p(α; B, C) achieves the following bound on |M|, holding for any γ > 0, where ρ² = C²(p − 1)/γ²:

Observe that when α = 1 the above inequality turns into a bound on the number of mistaken trials. In such a case the value of γ_k (in particular, the value of B) is immaterial, while C is forced to be 1.

When p = 2 the computations performed by ALMA_p essentially involve only dot products (recall that p = 2 yields q = 2 and f = f⁻¹ = identity). Thus the generalization of ALMA_2 to the kernel case is quite standard. In fact, the linear combination w_{k+1} ·
x can be computed recursively, since w_{k+1} · x = (w_k · x + η_k y_t x_t · x) / N_{k+1}. Here the denominator N_{k+1} equals max{1, ||w'_k||_2}, and the norm ||w'_k||_2 is again computed recursively by ||w'_k||_2² = ||w'_{k−1}||_2² / N_k² + 2 η_k y_t w_k · x_t + η_k² ||x_t||_2², where the dot product w_k · x_t is taken from the k-th correction (the trial where the k-th weight update did occur).

4 Experimental results

We did some experiments running ALMA_2 on the well-known MNIST OCR database.⁷ Each example in this database consists of a 28x28 matrix representing a digitalized image of a handwritten digit, along with a {0, 1, ..., 9}-valued label. Each entry in this matrix is a value in {0, 1, ..., 255}, representing a grey level. The database has 60000 training examples and 10000 test examples. The best accuracy results for this dataset are those obtained by LeCun et al. [11] through boosting on top of the neural net LeNet4. They reported a test error rate of 0.7%. A soft margin SVM achieved an error rate of 1.1% [3]. In our experiments we used ALMA_2(α; √8/α, √2) with different values of α. In the following, ALMA_2(α) is shorthand for ALMA_2(α; √8/α, √2). We compared to SVMs, the Perceptron algorithm and the Perceptron-like algorithm ROMMA [12]. We followed closely the experimental setting described in [3, 4, 12]. We used a polynomial kernel K of the form K(x, y) = (1 + x · y)^d, with d = 4. (This choice was best in [4] and was also made in [3, 12].) However, we did not investigate any careful tuning of scaling factors. In particular, we did not determine the best instance scaling factor s for our algorithm (this corresponds to using the kernel K(x, y) = (1 + x · y / s)^d). In our experiments we set s = 255. This was actually the best choice in [12] for the Perceptron algorithm. We reduced the 10-class problem to 10 binary problems. Classification is made according to the maximum output of the 10 binary classifiers. The results are summarized in Table 1.
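The recursive kernel computation described above can be sketched in dual form. This is an added illustration under assumed details (the projection step is carried out by rescaling the dual coefficients, which is equivalent to dividing by N_{k+1}); `K` is any user-supplied kernel:

```python
import numpy as np

def kernel_alma2(X, Y, K, alpha=0.5, epochs=2):
    # Dual-form sketch of ALMA_2 with B = sqrt(8)/alpha, C = sqrt(2).
    # w is kept implicitly as sum_j a_j x_j in feature space; ||w'_k||^2 is
    # maintained recursively via 2 eta y (w . x_t) + eta^2 K(x_t, x_t).
    B, C = np.sqrt(8.0) / alpha, np.sqrt(2.0)
    a = np.zeros(len(X))          # dual coefficients
    wnorm2 = 0.0                  # current ||w_k||^2 (after projection)
    k = 1
    Kdiag = np.array([K(x, x) for x in X])
    for _ in range(epochs):
        for t, (x, y) in enumerate(zip(X, Y)):
            xnorm = np.sqrt(Kdiag[t])
            if xnorm == 0.0:
                continue
            wx = sum(a[j] * K(X[j], x) for j in np.nonzero(a)[0])
            gamma_k = B / np.sqrt(k)
            if y * wx / xnorm <= (1.0 - alpha) * gamma_k:
                eta = C / (xnorm * np.sqrt(k))
                wnorm2_prime = wnorm2 + 2.0 * eta * y * wx + eta ** 2 * Kdiag[t]
                a[t] += eta * y
                N = max(1.0, np.sqrt(wnorm2_prime))
                a /= N            # projection: w <- w' / N
                wnorm2 = wnorm2_prime / N ** 2
                k += 1
    return a
```

With a linear kernel this reproduces the primal updates; with the polynomial kernel of the experiments it never forms w explicitly.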
As in [4], the output of a binary classifier is based on either the last hypothesis produced by the algorithms (denoted by "last" in Table 1) or Helmbold and Warmuth's [9] leave-one-out voted hypothesis (denoted by "voted"). We refer the reader to [4] for details. We trained the algorithms by cycling up to 3 times ("epochs") over the training set. All the results shown in Table 1 are averaged over 10 random permutations of the training sequence. The columns marked "Corr's" give the total number of corrections made in the training phase for the 10 labels.

⁶Notice that B and C in part 1 do not satisfy this relationship.
⁷Available on Y. LeCun's home page: http://www.research.att.com/~yann/ocr/mnist/

The first three rows of Table 1 are taken from [4, 12, 13]. The first two rows refer to the Perceptron algorithm,⁸ while the third one refers to the best⁹ noise-controlled (NC) version of ROMMA, called "aggressive ROMMA". Our own experimental results are given in the last six rows. Among these Perceptron-like algorithms, ALMA_2 "voted" seems to be the most accurate. The standard deviations about our averages are reasonably small. Those concerning test errors range in (0.03%, 0.09%). These results also show how accuracy and running time (as well as sparsity) can be traded off against each other in a transparent way. The accuracy of our algorithm is slightly worse than SVMs'. On the other hand, our algorithm is considerably faster and easier to implement than previous implementations of SVMs, such as those given in [17, 5]. An interesting feature of ALMA_2 is that its approximate solution relies on fewer support vectors than the SVM solution. We found the accuracy of 1.77% for ALMA_2(1.0) fairly remarkable, considering that it has been obtained by sweeping through the examples just once for each of the ten classes.
In fact, the algorithm is rather fast: training the ten binary classifiers of ALMA_2(1.0) for one epoch takes on average 2.3 hours, and the corresponding testing time is on average about 40 minutes. (All our experiments have been performed on a PC with a single Pentium III MMX processor running at 447 MHz.)

5 Concluding Remarks

In the full paper we will give more extensive experimental results for ALMA_2 and ALMA_p with p > 2. One drawback of ALMA_p's approximate solution is the absence of a bias term (i.e., a nonzero threshold). This seems to make little difference for the MNIST dataset, but there are cases when a biased maximal margin hyperplane generalizes considerably better than an unbiased one. It is not clear to us how to incorporate the SVMs' bias term in our algorithm. We leave this as an open problem.

Table 1: Experimental results on the MNIST database. "TestErr" denotes the fraction of misclassified patterns in the test set, while "Corr's" gives the total number of training corrections for the 10 labels. Recall that voting takes place during the testing phase; thus the number of corrections of "last" is the same as the number of corrections of "voted".

                               1 Epoch           2 Epochs          3 Epochs
                             TestErr  Corr's   TestErr  Corr's   TestErr  Corr's
Perceptron       "last"       2.71%    7901     2.14%   10421     2.03%   11787
                 "voted"      2.23%    7901     1.86%   10421     1.76%   11787
agg-ROMMA(NC)   ("last")      2.05%   30088     1.76%   44495     1.67%   58583
ALMA_2(1.0)      "last"       2.52%    7454     2.01%    9658     1.86%   10934
                 "voted"      1.77%    7454     1.52%    9658     1.47%   10934
ALMA_2(0.9)      "last"       2.10%    9911     1.74%   12711     1.64%   14244
                 "voted"      1.69%    9911     1.49%   12711     1.40%   14244
ALMA_2(0.8)      "last"       1.98%   12810     1.72%   16464     1.60%   18528
                 "voted"      1.68%   12810     1.44%   16464     1.35%   18528

⁸These results have been obtained with no noise control. It is not clear to us how to incorporate any noise control mechanism into the classical Perceptron algorithm. The method employed in [10, 12] does not seem helpful in this case, at least for the first epoch.
⁹According to [12], ROMMA's last hypothesis seems to perform better than ROMMA's voted hypothesis.

Acknowledgments. Thanks to Nicolo Cesa-Bianchi, Nigel Duffy, Dave Helmbold, Adam Kowalczyk, Yi Li, Nick Littlestone and Dale Schuurmans for valuable conversations and email exchange. We would also like to thank the NIPS2000 anonymous reviewers for their useful comments and suggestions. The author is supported by a post-doctoral fellowship from Universita degli Studi di Milano.

References
[1] M. Anthony and P. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.
[2] P. Auer and C. Gentile. Adaptive and self-confident on-line learning algorithms. In 13th COLT, 107-117, 2000.
[3] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3): 273-297, 1995.
[4] Y. Freund and R. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37(3): 277-296, 1999.
[5] T.-T. Friess, N. Cristianini, and C. Campbell. The kernel adatron algorithm: a fast and simple learning procedure for support vector machines. In 15th ICML, 1998.
[6] C. Gentile and N. Littlestone. The robustness of the p-norm algorithms. In 12th COLT, 1-11, 1999.
[7] C. Gentile and M. K. Warmuth. Linear hinge loss and average margin. In 11th NIPS, 225-231, 1999.
[8] A. J. Grove, N. Littlestone, and D. Schuurmans. General convergence results for linear discriminant updates. In 10th COLT, 171-183, 1997.
[9] D. Helmbold and M. K. Warmuth. On weak learning. JCSS, 50(3): 551-573, 1995.
[10] A. Kowalczyk. Maximal margin perceptron. In Smola, Bartlett, Scholkopf, and Schuurmans, editors, Advances in Large Margin Classifiers, MIT Press, 1999.
[11] Y. LeCun, L. D. Jackel, L. Bottou, A. Brunot, C. Cortes, J. S. Denker, H. Drucker, I. Guyon, U. Muller, E. Sackinger, P. Simard, and V. Vapnik. Comparison of learning algorithms for handwritten digit recognition. In ICANN, 53-60, 1995.
[12] Y. Li and P. Long. The relaxed online maximum margin algorithm.
In 12th NIPS, 498-504, 2000.
[13] Y. Li. From Support Vector Machines to Large Margin Classifiers. PhD thesis, School of Computing, National University of Singapore, 2000.
[14] N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2: 285-318, 1988.
[15] O. Mangasarian. Mathematical programming in data mining. Data Mining and Knowledge Discovery, 1: 183-201, 1997.
[16] P. Nachbar, J. A. Nossek, and J. Strobl. The generalized adatron algorithm. In Proc. 1993 IEEE ISCAS, 2152-2155, 1993.
[17] J. C. Platt. Fast training of support vector machines using sequential minimal optimization. In Scholkopf, Burges, and Smola, editors, Advances in Kernel Methods: Support Vector Learning, MIT Press, 1998.
[18] F. Rosenblatt. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan Books, Washington, D.C., 1962.
|
2000
|
111
|
1,767
|
Ensemble Learning and Linear Response Theory for ICA. Pedro A.d.F.R. Højen-Sørensen¹, Ole Winther², Lars Kai Hansen¹. ¹Department of Mathematical Modelling, Technical University of Denmark B321, DK-2800 Lyngby, Denmark, phs,lkhansen@imm.dtu.dk. ²Theoretical Physics, Lund University, Sölvegatan 14 A, S-223 62 Lund, Sweden, winther@nimis.thep.lu.se

Abstract. We propose a general Bayesian framework for performing independent component analysis (ICA) which relies on ensemble learning and linear response theory known from statistical physics. We apply it to both discrete and continuous sources. For the continuous source the underdetermined (overcomplete) case is studied. The naive mean-field approach fails in this case, whereas linear response theory, which gives an improved estimate of covariances, is very efficient. The examples given are for sources without temporal correlations; however, the derivation can easily be extended to treat temporal correlations. Finally, the framework offers a simple way of generating new ICA algorithms without needing to define the prior distribution of the sources explicitly.

1 Introduction

Reconstruction of statistically independent source signals from linear mixtures is an active research field. For historical background and early references see e.g. [1]. The source separation problem has a Bayesian formulation, see e.g. [2, 3], for which there has been some recent progress based on ensemble learning [4]. In the Bayesian framework, the covariances of the sources are needed in order to estimate the mixing matrix and the noise level. Unfortunately, ensemble learning using factorized trial distributions only treats self-interactions correctly and trivially predicts <S_i S_j> − <S_i><S_j> = 0 for i ≠ j. This naive mean-field (NMF) approximation, first introduced in the neural computing context by Ref. [5] for Boltzmann machine learning, may completely fail in some cases [6].
Recently, Kappen and Rodriguez [6] introduced an efficient learning algorithm for Boltzmann machines based on linear response (LR) theory. LR theory gives a recipe for computing an improved approximation to the covariances directly from the solution to the NMF equations [7]. Ensemble learning has been applied in many contexts within neural computation, e.g. for sigmoid belief networks [8], where advanced mean field methods such as LR theory or TAP [9] may also be applicable. In this paper, we show how LR theory can be applied to independent component analysis (ICA). The performance of this approach is compared to the NMF approach. We observe that NMF may fail for high noise levels and binary sources, and for the underdetermined continuous case. In these cases the NMF approach ignores one of the sources and consequently overestimates the noise. The LR approach, on the other hand, succeeds in all cases studied. The derivation of the mean-field equations is kept completely general and is thus valid for a general source prior (without temporal correlations). The final equations show that the mean-field framework may be used to propose ICA algorithms for which the source prior is only defined implicitly.

2 Probabilistic ICA

Following Ref. [10], we consider a collection of N temporal measurements, X = {X_dt}, where X_dt denotes the measurement at the dth sensor at time t. Similarly, let S = {S_mt} denote a collection of M mutually independent sources, where S_m. is the mth source, which in general may have temporal correlations. The measured signals X are assumed to be an instantaneous linear mixing of the sources corrupted with additive Gaussian noise Γ, that is,

X = AS + Γ,   (1)

where A is the mixing matrix. Furthermore, to simplify this exposition the noise is assumed to be iid Gaussian with variance σ². The likelihood of the parameters is then given by

P(X|A, σ²) = ∫ dS P(X|A, σ², S) P(S),   (2)

where P(S) is the prior on the sources, which might include temporal correlations.
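The generative model (1) is easy to simulate; a toy sketch (an added illustration with arbitrary example values, not from the paper):

```python
import numpy as np

# X = A S + Gamma, with iid Gaussian noise of variance sigma^2.
# Shapes: A is D x M, S is M x N, X is D x N.
rng = np.random.default_rng(0)
D, M, N = 2, 2, 500
A = np.array([[1.0, 0.5], [-0.5, 1.0]])      # example mixing matrix
sigma2 = 0.01
S = rng.choice([-1.0, 1.0], size=(M, N))     # e.g. binary sources
Gamma = np.sqrt(sigma2) * rng.standard_normal((D, N))
X = A @ S + Gamma
```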
We will, however, throughout this paper assume that the sources are temporally uncorrelated. We choose to estimate the mixing matrix A and noise level σ² by Maximum Likelihood (ML-II). The saddlepoint of P(X|A, σ²) is attained at

∂ log P(X|A, σ²)/∂A = 0  ⟹  A = X<S>ᵀ<SSᵀ>⁻¹,   (3)
∂ log P(X|A, σ²)/∂σ² = 0  ⟹  σ² = (1/(DN)) <Tr (X − AS)ᵀ(X − AS)>,   (4)

where <·> denotes an average over the posterior and D is the number of sensors.

3 Mean field theory

First, we derive mean field equations using ensemble learning. Secondly, using linear response theory, we obtain improved estimates of the off-diagonal terms of <SSᵀ> which are needed for estimating A and σ². The following derivation is performed for an arbitrary source prior.

3.1 Ensemble learning

We adopt a standard ensemble learning approach and approximate

P(S|X, A, σ²) = P(X|A, σ², S) P(S) / P(X|A, σ²)   (5)

in a family of product distributions Q(S) = ∏_mt Q(S_mt). It has been shown in Ref. [11] that for a Gaussian P(X|A, σ², S), the optimal choice of Q(S_mt) is given by a Gaussian times the prior (6). In the following, it is convenient to use standard physics notation to keep everything as general as possible. We therefore parameterize the Gaussian as

P(X|A, σ², S) = P(X|J, h, S) = C exp(½ Tr(Sᵀ J S) + Tr(hᵀ S)),   (7)

where J = −AᵀA/σ² is the M × M interaction matrix and h = AᵀX/σ² has the same dimensions as the source matrix S. Note that h acts as an external field from which we can obtain all moments of the sources. This is a property that we will make use of in the next section when we derive the linear response corrections. The Kullback-Leibler divergence between the optimal product distribution Q(S) and the true source posterior is given by

KL = ∫ dS Q(S) ln [Q(S) / P(S|X, A, σ²)] = ln P(X|A, σ²) − ln P̃(X|A, σ²),   (8)

ln P̃(X|A, σ²) = Σ_mt log ∫ dS P(S) exp(½ λ_mt S² + γ_mt S) + ½ Σ_mt (J_mm − λ_mt) <S²_mt> + ½ Tr <S>ᵀ(J − diag(J)) <S> + Tr (h − γ)ᵀ <S> + ln C,   (9)

where P̃(X|A, σ²) is the naive mean field approximation to the likelihood and diag(J) is the diagonal matrix of J.
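The ML-II saddlepoint updates (3) and (4) translate directly into matrix operations. A sketch (an added illustration, not the authors' code; the 1/(DN) normalization in (4) is assumed from the text):

```python
import numpy as np

def update_A(X, S_mean, SS_mean):
    # Eq. (3): A = X <S>^T <S S^T>^{-1}
    return X @ S_mean.T @ np.linalg.inv(SS_mean)

def update_sigma2(X, A, S_mean, SS_mean):
    # Eq. (4): sigma^2 = <Tr (X - A S)^T (X - A S)> / (D N), expanded as
    # Tr X^T X - 2 Tr X^T A <S> + Tr A^T A <S S^T>.
    D, N = X.shape
    val = (np.trace(X.T @ X) - 2.0 * np.trace(X.T @ A @ S_mean)
           + np.trace(A.T @ A @ SS_mean))
    return val / (D * N)
```

In the noiseless limit, feeding the exact posterior moments recovers the true mixing matrix and a vanishing noise estimate.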
The saddlepoints define the mean field equations:

∂KL/∂<S> = 0  ⟹  γ = h + (J − diag(J)) <S>,   (10)
∂KL/∂<S²_mt> = 0  ⟹  λ_mt = J_mm.   (11)

The remaining two equations depend explicitly on the source prior, P(S):

∂KL/∂γ_mt = 0 :  <S_mt> = (∂/∂γ_mt) log ∫ dS_mt P(S_mt) exp(½ λ_mt S²_mt + γ_mt S_mt) ≡ f(γ_mt, λ_mt),   (12)
∂KL/∂λ_mt = 0 :  <S²_mt> = 2 (∂/∂λ_mt) log ∫ dS_mt P(S_mt) exp(½ λ_mt S²_mt + γ_mt S_mt).   (13)

In section 4, we calculate f(γ_mt, λ_mt) for some of the prior distributions found in the ICA literature.

3.2 Linear response theory

As mentioned already, h acts as an external field. This makes it possible to calculate the means and covariances as derivatives of log P(X|J, h), i.e.

<S_mt> = ∂ log P(X|J, h) / ∂h_mt,   (14)
χ^{tt'}_{mm'} = <S_mt S_m't'> − <S_mt><S_m't'> = ∂² log P(X|J, h) / (∂h_mt ∂h_m't') = ∂<S_mt> / ∂h_m't'.   (15)

To derive an equation for χ^{tt'}_{mm'}, we use eqs. (10), (11) and (12) to get

χ^{tt'}_{mm'} = ∂f(γ_mt, λ_mt)/∂h_m't' = (∂f(γ_mt, λ_mt)/∂γ_mt) [ Σ_{m''≠m} J_{mm''} χ^{tt'}_{m''m'} + δ_{mm'} δ_{tt'} ].   (16)

Figure 1: Binary source recovery for low noise level (M = 2, D = 2). Shows, from left to right, +/− the column vectors of: the true A (with the observations superimposed); the estimated A (NMF); the estimated A (LR).

Figure 2: Binary source recovery for low noise level (M = 2, D = 2). Shows the dynamics of the fix-point iterations. From left to right: +/− the column vectors of A (NMF); +/− the column vectors of A (LR); variance σ² (solid: NMF, dashed: LR, thick dash-dotted: the true empirical noise variance).

We now see that the χ-matrix factorizes in time, χ^{tt'}_{mm'} = δ_tt' χ^t_{mm'}. This is a direct consequence of the fact that the model has no temporal correlations.
The above equation is linear and may straightforwardly be solved to yield χ^t_{mm'} = [(Λ_t − J)⁻¹]_{mm'}, where we have defined the diagonal matrix

Λ_t = diag( (∂f(γ_1t, λ_1t)/∂γ_1t)⁻¹ + J_11, ..., (∂f(γ_Mt, λ_Mt)/∂γ_Mt)⁻¹ + J_MM ).   (17)

At this point it is appropriate to explain why linear response theory is more precise than using the factorized distribution, which predicts χ^t_{mm'} = 0 for non-diagonal terms. Here, we give an argument that can be found in Parisi's book on statistical field theory [7]: Let us assume that the approximate and the exact distribution are close in some sense, i.e. Q(S) − P(S|X, A, σ²) = ε; then <S_mt S_m't'>_exact = <S_mt S_m't'>_approx + O(ε). Mean field theory gives a lower bound on the log-likelihood since KL, eq. (8), is non-negative. Consequently, the linear term vanishes in the expansion of the log-likelihood: log P(X|A, σ²) = log P̃(X|A, σ²) + O(ε²). It is therefore more precise to obtain moments of the variables through derivatives of the approximate log-likelihood, i.e. by linear response. A final remark to complete the picture: if diag(J) in eq. (10) is exchanged with Λ_t = diag(λ_1t, ..., λ_Mt), and likewise in the definition of Λ_t above, we get TAP equations [9]. The TAP equation for λ is χ^t_mm = ∂f(γ_mt, λ_mt)/∂γ_mt = [(Λ_t − J)⁻¹]_mm.

Figure 3: Binary source recovery for high noise level (M = 2, D = 2). Shows, from left to right, +/− the column vectors of: the true A (with the observations superimposed); the estimated A (NMF); the estimated A (LR).

Figure 4: Binary source recovery for high noise level (M = 2, D = 2). Same plot as in figure 2.
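The fix-point dynamics shown in Figures 2 and 4 rest on iterating eqs. (10) and (12). A minimal sketch (an added illustration, not the authors' code) for a single time slice with the binary ±1 prior of Section 4.1, where f(γ, λ) = tanh(γ):

```python
import numpy as np

def binary_mean_field(J, h, n_iter=200):
    # Iterate  gamma = h + (J - diag(J)) <S>   (eq. 10)
    # and      <S>   = tanh(gamma)             (eq. 12, binary prior)
    m = np.zeros_like(h)                       # <S>, one entry per source
    J_off = J - np.diag(np.diag(J))
    for _ in range(n_iter):
        gamma = h + J_off @ m
        m = np.tanh(gamma)
    return m
```

With a strong external field h the fixed point saturates near ±1, as expected for binary sources.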
4 Examples

In this section we compare the LR approach and the NMF approach on the noisy ICA model. The two approaches are demonstrated using binary and continuous sources.

4.1 Binary source

Independent component analysis of binary sources (e.g. studied in [12]) is considered for data transmission using binary modulation schemes such as MSK or biphase (Manchester) codes. Here, we consider a binary source S_mt ∈ {−1, 1} with prior distribution P(S_mt) = ½ [δ(S_mt − 1) + δ(S_mt + 1)]. In this case we get the well known mean field equations <S_mt> = tanh(γ_mt). Figures 1 and 2 show the results of the NMF approach as well as the LR approach in a low-noise variance setting using two sources (M = 2) and two sensors (D = 2). Figures 3 and 4 show the same but in a high-noise setting. The dynamical plots show the trajectory of the fix-point iteration, where 'x' marks the starting point and 'o' the final point. Ideally, the noise-less measurements would consist of the four combinations (with signs) of the columns in the mixing matrix. However, due to the noise, the measurements will be scattered around these "prototype" observations. In the low-noise level setting both approaches find good approximations to the true mixing matrix and sources. However, the convergence rate of the LR approach is found to be faster. For high-noise variance the NMF approach fails to recover the true statistics. It is seen that one of the directions in the mixing matrix vanishes, which in turn results in overestimating the noise variance.

Figure 5: Overcomplete continuous source recovery with M = 3 and D = 2. Shows from left to right: the observations; +/− the column vectors of the true A; the estimated A (NMF); the estimated A (LR).

Figure 6: Overcomplete continuous source recovery with M = 3 and D = 2.
Same plot as in figure 2. Note that the initial iteration step for A is very large.

4.2 Continuous Source

To give a tractable example which illustrates the improvement by LR, we consider the Gaussian prior $P(S_{mt}) \propto \exp(-\alpha S_{mt}^2/2)$ (not suitable for source separation). This leads to $f(\gamma_{mt}, \lambda_{mt}) = \gamma_{mt}/(\alpha - \lambda_{mt})$. Since we have a factorized distribution, ensemble learning predicts $\langle S_{mt}S_{m't'}\rangle - \langle S_{mt}\rangle\langle S_{m't'}\rangle = \delta_{mm'}\delta_{tt'}(\alpha - \lambda_{mt})^{-1} = \delta_{mm'}\delta_{tt'}(\alpha - J_{mm})^{-1}$, where the second equality follows from eq. (11). Linear response, eq. (17), gives $\langle S_{mt}S_{m't'}\rangle - \langle S_{mt}\rangle\langle S_{m't'}\rangle = \delta_{tt'}[(\alpha I - J)^{-1}]_{mm'}$, which is identical with the exact result obtained by direct integration. For the popular choice of prior $P(S_{mt}) = \frac{1}{\pi \cosh S_{mt}}$ [1], it is not possible to derive $f(\gamma_{mt}, \lambda_{mt})$ analytically. However, $f(\gamma_{mt}, \lambda_{mt})$ can be calculated analytically for the very similar Laplace distribution. Both these examples have positive kurtosis. Mean field equations for negative kurtosis can be obtained using the prior $P(S_{mt}) \propto \exp(-(S_{mt}-\mu)^2/2) + \exp(-(S_{mt}+\mu)^2/2)$ [1], leading to $f(\gamma_{mt}, \lambda_{mt}) = \frac{\gamma_{mt} + \mu \tanh\!\big(\mu\gamma_{mt}/(1-\lambda_{mt})\big)}{1-\lambda_{mt}}$. Figures 5 and 6 show simulations using this source prior with $\mu = 1$ in an overcomplete setting with D = 2 and M = 3. Note that $\mu = 1$ yields a unimodal source distribution, hence qualitatively different from the bimodal prior considered in the binary case. In the overcomplete setting the NMF approach fails to recover the true sources. See [13] for further discussion of the overcomplete case.

5 Conclusion

We have presented a general ICA mean field framework based upon ensemble learning and linear response theory. The naive mean-field approach (pure ensemble learning) fails in some cases and we speculate that it is incapable of handling the overcomplete case (more sources than sensors). Linear response theory, on the other hand, succeeds in all the examples studied.
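As an aside, the exactness of linear response for the Gaussian prior of Section 4.2 is easy to verify numerically. The following sketch (ours, with made-up values of $\alpha$, $J$ and the linear term $\gamma$; two sources, one time step) compares the closed form $(\alpha I - J)^{-1}$ against a brute-force quadrature of the second moments:

```python
import numpy as np

# Gaussian source prior: P(S) ~ exp(-0.5*alpha*S'S + 0.5*S'JS + g'S).
# Linear response predicts (and here exactly gives) Cov = (alpha*I - J)^{-1}.
alpha, j, g = 1.5, 0.4, np.array([0.3, -0.2])   # made-up values, alpha > |j|
J = np.array([[0.0, j], [j, 0.0]])
A = alpha * np.eye(2) - J                        # precision matrix

C_lr = np.linalg.inv(A)                          # closed-form covariance

# Brute-force check by direct numerical integration on a grid.
s = np.linspace(-8.0, 8.0, 401)
S1, S2 = np.meshgrid(s, s, indexing='ij')
logw = -0.5 * alpha * (S1**2 + S2**2) + j * S1 * S2 + g[0] * S1 + g[1] * S2
w = np.exp(logw)
w /= w.sum()
m1, m2 = (w * S1).sum(), (w * S2).sum()
C12 = (w * S1 * S2).sum() - m1 * m2

print(C12, C_lr[0, 1])  # quadrature agrees with (alpha*I - J)^{-1}
```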
There are two directions in which we plan to extend this work: (1) to sources with temporal correlations and (2) to source models defined not by a parametric source prior, but directly in terms of the function f which defines the mean field equations. Starting directly from the f-function makes it possible to test a whole range of implicitly defined source priors. A detailed analysis of a large selection of constrained and unconstrained source priors as well as comparisons of LR and the TAP approach can be found in [14].

Acknowledgments

PHS wishes to thank Mike Jordan for stimulating discussions on the mean field and variational methods. This research is supported by the Swedish Foundation for Strategic Research as well as the Danish Research Councils through the Computational Neural Network Center (CONNECT) and the THOR Center for Neuroinformatics.

References

[1] T.-W. Lee: Independent Component Analysis, Kluwer Academic Publishers, Boston (1998).
[2] A. Belouchrani and J.-F. Cardoso: Maximum Likelihood Source Separation by the Expectation-Maximization Technique: Deterministic and Stochastic Implementation. In Proc. NOLTA, 49-53 (1995).
[3] D. MacKay: Maximum Likelihood and Covariant Algorithms for Independent Components Analysis. "Draft 3.7" (1996).
[4] H. Lappalainen and J. W. Miskin: Ensemble Learning. Advances in Independent Component Analysis, Ed. M. Girolami, In press (2000).
[5] C. Peterson and J. Anderson: A Mean Field Theory Learning Algorithm for Neural Networks, Complex Systems 1, 995-1019 (1987).
[6] H. J. Kappen and F. B. Rodriguez: Efficient Learning in Boltzmann Machines Using Linear Response Theory, Neural Computation 10, 1137-1156 (1998).
[7] G. Parisi: Statistical Field Theory, Addison Wesley, Reading, Massachusetts (1988).
[8] L. K. Saul, T. Jaakkola and M. I. Jordan: Mean Field Theory of Sigmoid Belief Networks, Journal of Artificial Intelligence Research 4, 61-76 (1996).
[9] M. Opper and O.
Winther: Tractable Approximations for Probabilistic Models: The Adaptive TAP Mean Field Approach, Submitted to Phys. Rev. Lett. (2000).
[10] L. K. Hansen: Blind Separation of Noisy Image Mixtures. Advances in Independent Component Analysis, Ed. M. Girolami, In press (2000).
[11] L. Csató, E. Fokoué, M. Opper, B. Schottky and O. Winther: Efficient Approaches to Gaussian Process Classification. In Advances in Neural Information Processing Systems 12 (NIPS'99), Eds. S. A. Solla, T. K. Leen, and K.-R. Müller, MIT Press (2000).
[12] A.-J. van der Veen: Analytical Method for Blind Binary Signal Separation, IEEE Trans. on Signal Processing 45(4), 1078-1082 (1997).
[13] M. S. Lewicki and T. J. Sejnowski: Learning Overcomplete Representations, Neural Computation 12, 337-365 (2000).
[14] P. A. d. F. R. Højen-Sørensen, O. Winther and L. K. Hansen: Mean Field Approaches to Independent Component Analysis, In preparation.
Regularization with Dot-Product Kernels

Alex J. Smola, Zoltan L. Ovari, and Robert C. Williamson
Department of Engineering
Australian National University
Canberra, ACT, 0200

Abstract

In this paper we give necessary and sufficient conditions under which kernels of dot product type k(x, y) = k(x · y) satisfy Mercer's condition and thus may be used in Support Vector Machines (SVM), Regularization Networks (RN) or Gaussian Processes (GP). In particular, we show that if the kernel is analytic (i.e. can be expanded in a Taylor series), all expansion coefficients have to be nonnegative. We give an explicit functional form for the feature map by calculating its eigenfunctions and eigenvalues.

1 Introduction

Kernel functions are widely used in learning algorithms such as Support Vector Machines, Gaussian Processes, or Regularization Networks. A possible interpretation of their effect is that they represent dot products in some feature space $\mathcal{F}$, i.e.

$k(x, y) = \phi(x) \cdot \phi(y)$   (1)

where $\phi$ is a map from input (data) space $X$ into $\mathcal{F}$. Another interpretation is to connect $\phi$ with the regularization properties of the corresponding learning algorithm [8]. Most popular kernels can be described by three main categories: translation invariant kernels [9],

$k(x, y) = k(x - y)$,   (2)

kernels originating from generative models (e.g. those of Jaakkola and Haussler, or Watkins), and thirdly, dot-product kernels,

$k(x, y) = k(x \cdot y)$.   (3)

Since k influences the properties of the estimates generated by any of the algorithms above, it is natural to ask which regularization properties are associated with k. In [8, 10, 9] the general connections between kernels and regularization properties are pointed out, containing details on the connection between the Fourier spectrum of translation invariant kernels and the smoothness properties of the estimates. In a nutshell, the necessary and sufficient condition for k(x - y) to be a Mercer kernel (i.e.
be admissible for any of the aforementioned kernel methods) is that its Fourier transform be nonnegative. This also allowed for an easy-to-check criterion for new kernel functions. Moreover, [5] gave a similar analysis for kernels derived from generative models. Dot product kernels k(x · y), on the other hand, have been eluding further theoretical analysis and only a necessary condition [1] was found, based on geometrical considerations. Unfortunately, it does not provide much insight into smoothness properties of the corresponding estimate. Our aim in the present paper is to shed some light on the properties of dot product kernels, give an explicit equation for how their eigenvalues can be determined, and, finally, show that for analytic kernels that can be expanded in terms of monomials $\xi^n$ or associated Legendre polynomials $P_n^d(\xi)$ [4], i.e.

$k(x, y) = k(x \cdot y)$ with $k(\xi) = \sum_{n=0}^{\infty} a_n \xi^n$ or $k(\xi) = \sum_{n=0}^{\infty} b_n P_n^d(\xi)$,   (4)

a necessary and sufficient condition is $a_n \geq 0$ for all $n \in \mathbb{N}$ if no assumption about the dimensionality of the input space is made (for finite dimensional spaces of dimension d, the condition is that $b_n \geq 0$). In other words, the polynomial series expansion in dot product kernels plays the role of the Fourier transform in translation invariant kernels.

2 Regularization, Kernels, and Integral Operators

Let us briefly review some results from regularization theory, needed for the further understanding of the paper. Many algorithms (SVM, GP, RN, etc.) can be understood as minimizing a regularized risk functional

$R_{\mathrm{reg}}[f] := R_{\mathrm{emp}}[f] + \lambda \Omega[f]$   (5)

where $R_{\mathrm{emp}}$ is the training error of the function f on the given data, $\lambda > 0$, and $\Omega[f]$ is the so-called regularization term.
The first term depends on the specific problem at hand (classification, regression, large margin algorithms, etc.), $\lambda$ is generally adjusted by some model selection criterion, and $\Omega[f]$ is a nonnegative functional of f which models our belief which functions should be considered to be simple (a prior in the Bayesian sense or a structure in a Structural Risk Minimization sense).

2.1 Regularization Operators

One possible interpretation of k is [8] that it leads to regularized risk functionals where

$\Omega[f] = \tfrac{1}{2}\|Pf\|^2$ or equivalently $\langle Pk(x, \cdot), Pk(y, \cdot)\rangle = k(x, y)$.   (6)

Here P is a regularization operator mapping functions f on X into a dot product space (we choose $L_2(X)$). The following theorem allows us to construct explicit operators P and it provides a criterion whether a symmetric function k(x, y) is suitable.

Theorem 1 (Mercer [3]) Suppose $k \in L_\infty(X^2)$ such that the integral operator

$T_k : L_2(X) \to L_2(X), \quad T_k f(\cdot) := \int_X k(\cdot, x) f(x)\, d\mu(x)$   (7)

is positive. Let $\phi_j \in L_2(X)$ be the eigenfunction of $T_k$ with eigenvalue $\lambda_j \neq 0$, normalized such that $\|\phi_j\|_{L_2} = 1$, and let $\overline{\phi_j}$ denote its complex conjugate. Then

1. $(\lambda_j(T))_j \in \ell_1$.
2. $\phi_j \in L_\infty(X)$ and $\sup_j \|\phi_j\|_{L_\infty} < \infty$.
3. $k(x, x') = \sum_{j \in \mathbb{N}} \lambda_j \phi_j(x) \overline{\phi_j(x')}$ holds for almost all (x, x'), where the series converges absolutely and uniformly for almost all (x, x').

This means that by finding the eigensystem $(\lambda_i, \phi_i)$ of $T_k$ we can also determine the regularization operator P via [8]

$Pf = \sum_{j} \lambda_j^{-1/2} \langle \phi_j, f\rangle\, \phi_j$.   (8)

The eigensystem $(\lambda_i, \phi_i)$ tells us which functions are considered "simple" in terms of the operator P. Consequently, in order to determine the regularization properties of dot product kernels we have to find their eigenfunctions and eigenvalues.

2.2 Specific Assumptions

Before we diagonalize $T_k$ for a given kernel we have yet to specify the assumptions we make about the measure $\mu$ and the domain of integration X. Since a suitable choice can drastically simplify the problem we try to keep as many of the symmetries imposed by k(x · y) as possible.
The predominant symmetry in dot product kernels is rotation invariance. Therefore we choose X to be the unit ball in $\mathbb{R}^d$,

$X := U_d := \{x \mid x \in \mathbb{R}^d \text{ and } \|x\|_2 \leq 1\}$.   (9)

This is a benign assumption since the radius can always be adjusted by rescaling $k(x \cdot y) \to k((Ox) \cdot (Oy))$. Similar considerations apply to translation. In some cases the unit sphere in $\mathbb{R}^d$ is more amenable to our analysis. There we choose

$X := S_{d-1} := \{x \mid x \in \mathbb{R}^d \text{ and } \|x\|_2 = 1\}$.   (10)

The latter is a good approximation of the situation where dot product kernels perform best: if the training data has approximately equal Euclidean norm (e.g. in images or handwritten digits). For the sake of simplicity we will limit ourselves to (10) in most of the cases. Secondly, we choose $\mu$ to be the uniform measure on X. This means that we have to solve the following integral equation: find functions $\phi_i \in L_2(X)$ together with coefficients $\lambda_i$ such that

$T_k \phi_i(x) := \int_X k(x \cdot y)\, \phi_i(y)\, dy = \lambda_i \phi_i(x)$.

3 Orthogonal Polynomials and Spherical Harmonics

Before we can give eigenfunctions or state necessary and sufficient conditions we need some basic relations about Legendre polynomials and spherical harmonics. Denote by $P_n(\xi)$ the Legendre polynomials and by $P_n^d(\xi)$ the associated Legendre polynomials (see e.g. [4] for details). They have the following properties:

- The polynomials $P_n(\xi)$ and $P_n^d(\xi)$ are of degree n, and moreover $P_n := P_n^3$.

- The (associated) Legendre polynomials form an orthogonal basis with

$\int_{-1}^{1} P_n^d(\xi)\, P_m^d(\xi)\, (1 - \xi^2)^{\frac{d-3}{2}}\, d\xi = \frac{|S_{d-1}|}{|S_{d-2}|} \frac{\delta_{m,n}}{N(d,n)}$.   (11)

Here $|S_{d-1}| = \frac{2\pi^{d/2}}{\Gamma(d/2)}$ denotes the surface of $S_{d-1}$, and N(d, n) denotes the multiplicity of spherical harmonics of order n on $S_{d-1}$, i.e. $N(d,n) = \frac{2n+d-2}{n}\binom{n+d-3}{n-1}$.

- This admits the orthogonal expansion of any analytic function $k(\xi)$ on [-1, 1] into the $P_n^d$ via

$k(\xi) = \sum_{n=0}^{\infty} b_n P_n^d(\xi)$ with $b_n = \frac{|S_{d-2}|}{|S_{d-1}|}\, N(d,n) \int_{-1}^{1} k(\xi)\, P_n^d(\xi)\, (1-\xi^2)^{\frac{d-3}{2}}\, d\xi$.   (12)

Moreover, the Legendre polynomials may be expanded into an orthonormal basis of spherical harmonics $Y_{n,j}^d$ by the Funk-Hecke equation (cf. e.g.
[4]) to obtain

$P_n^d(x \cdot y) = \frac{|S_{d-1}|}{N(d,n)} \sum_{j=1}^{N(d,n)} Y_{n,j}^d(x)\, Y_{n,j}^d(y)$   (13)

where $\|x\| = \|y\| = 1$, and moreover

$\int_{S_{d-1}} Y_{n,j}^d(x)\, Y_{n',j'}^d(x)\, dx = \delta_{n,n'}\, \delta_{j,j'}$.   (14)

4 Conditions and Eigensystems on $S_{d-1}$

Schoenberg [7] gives necessary and sufficient conditions under which a function k(x · y) defined on $S_{d-1}$ satisfies Mercer's condition. In particular he proves the following two theorems:

Theorem 2 (Dot Product Kernels in Finite Dimensions) A kernel k(x · y) defined on $S_{d-1} \times S_{d-1}$ satisfies Mercer's condition if and only if its expansion into Legendre polynomials $P_n^d$ has only nonnegative coefficients, i.e.

$k(\xi) = \sum_{n=0}^{\infty} b_n P_n^d(\xi)$ with $b_n \geq 0$.   (15)

Theorem 3 (Dot Product Kernels in Infinite Dimensions) A kernel k(x · y) defined on the unit sphere in a Hilbert space satisfies Mercer's condition if and only if its Taylor series expansion has only nonnegative coefficients:

$k(\xi) = \sum_{n=0}^{\infty} a_n \xi^n$ with $a_n \geq 0$.   (16)

Therefore, all we have to do in order to check whether a particular kernel may be used in a SV machine or a Gaussian process is to look at its polynomial series expansion and check the coefficients. This will be done in Section 5. Before doing so, note that (16) is a more stringent condition than (15). In other words, in order to prove Mercer's condition for arbitrary dimensions it suffices to show that the Taylor expansion contains only positive coefficients. On the other hand, in order to prove that a candidate kernel function will never satisfy Mercer's condition, it is sufficient to show this for (15) with $P_n^d = P_n$, i.e. for the Legendre polynomials. We conclude this section with an explicit representation of the eigensystem of k(x · y). It is given by the following lemma:

Lemma 4 (Eigensystem of Dot Product Kernels) Denote by k(x · y) a kernel on $S_{d-1} \times S_{d-1}$ satisfying condition (15) of Theorem 2. Then the eigensystem of k is given by $\Psi_{n,j} = Y_{n,j}^d$ with eigenvalues $\lambda_{n,j} = b_n \frac{|S_{d-1}|}{N(d,n)}$ of multiplicity N(d, n).   (17)
In other words, N(d, n) determines the regularization properties of k(x · y).

Proof. Using the Funk-Hecke formula (13) we may expand (15) further into spherical harmonics $Y_{n,j}^d$. The latter, however, are orthonormal, hence computing the dot product of the resulting expansion with $Y_{n,j}^d(y)$ over $S_{d-1}$ leaves only the coefficient $Y_{n,j}^d(x)\, b_n \frac{|S_{d-1}|}{N(d,n)}$, which proves that the $Y_{n,j}^d$ are eigenfunctions of the integral operator $T_k$. □

In order to obtain the eigensystem of k(x · y) on $U_d$ we have to expand k into $k(x \cdot y) = \sum_{n=0}^{\infty} K_n(\|x\|, \|y\|)\, P_n^d\!\left(\frac{x}{\|x\|} \cdot \frac{y}{\|y\|}\right)$ and expand $\Psi$ into a radial part $\Psi(\|x\|)$ and an angular part $\Psi\!\left(\frac{x}{\|x\|}\right)$. The latter is very technical and is thus omitted. See [6] for details.

5 Examples and Applications

In the following we will analyze a few kernels and state under which conditions they may be used as SV kernels.

Example 1 (Homogeneous Polynomial Kernels $k(x, y) = (x \cdot y)^p$): It is well known that this kernel satisfies Mercer's condition for $p \in \mathbb{N}$. We will show that for $p \notin \mathbb{N}$ this is never the case. Thus we have to show that (15) cannot hold for an expansion in terms of Legendre polynomials (d = 3). From [2, 7.126.1] we obtain for $k(x, y) = |\xi|^p$ (we need $|\xi|$ to make k well-defined), for even n,

$\int_{-1}^{1} P_n(\xi)\, |\xi|^p\, d\xi = \frac{\sqrt{\pi}\, \Gamma(p+1)}{2^p\, \Gamma\!\left(1 + \frac{p}{2} - \frac{n}{2}\right) \Gamma\!\left(\frac{3}{2} + \frac{p}{2} + \frac{n}{2}\right)}$.   (18)

For odd n the integral vanishes since $P_n(-\xi) = (-1)^n P_n(\xi)$. In order to satisfy (15), the integral has to be nonnegative for all n. One can see that $\Gamma\!\left(1 + \frac{p}{2} - \frac{n}{2}\right)$ is the only term in (18) that may change its sign. Since the sign of the $\Gamma$ function alternates with period 1 for x < 0 (and it has poles at negative integer arguments), we cannot find any p for which both $n = 2\lfloor \frac{p}{2} + 1 \rfloor$ and $n = 2\lceil \frac{p}{2} + 1 \rceil$ correspond to positive values of the integral.

Example 2 (Inhomogeneous Polynomial Kernels $k(x, y) = (x \cdot y + 1)^p$): Likewise we might conjecture that $k(\xi) = (1 + \xi)^p$ is an admissible kernel for all p > 0.
Again, we expand k in a series of Legendre polynomials to obtain [2, 7.127]

$\int_{-1}^{1} P_n(\xi)\, (\xi + 1)^p\, d\xi = \frac{2^{p+1}\, \Gamma^2(p+1)}{\Gamma(p + 2 + n)\, \Gamma(p + 1 - n)}$.   (19)

For $p \in \mathbb{N}$ all terms with n > p vanish and the remainder is positive. For non-integer p, however, (19) may change its sign. This is due to $\Gamma(p + 1 - n)$. In particular, for any $p \notin \mathbb{N}$ (with p > 0) we have $\Gamma(p + 1 - n) < 0$ for $n = \lceil p \rceil + 1$. This violates condition (15), hence such kernels cannot be used in SV machines either.

Example 3 (Vovk's Real Polynomial $k(x, y) = \frac{1 - (x \cdot y)^p}{1 - (x \cdot y)}$ with $p \in \mathbb{N}$): This kernel can be written as $k(\xi) = \sum_{n=0}^{p-1} \xi^n$, hence all the coefficients $a_i = 1$, which means that this kernel can be used regardless of the dimensionality of the input space. Likewise we can analyze an infinite power series:

Example 4 (Vovk's Infinite Polynomial $k(x, y) = (1 - (x \cdot y))^{-1}$): This kernel can be written as $k(\xi) = \sum_{n=0}^{\infty} \xi^n$, hence all the coefficients $a_i = 1$. It suggests poor generalization properties of that kernel.

Example 5 (Neural Network Kernels $k(x, y) = \tanh(a + (x \cdot y))$): It is a longstanding open question whether kernels $k(\xi) = \tanh(a + \xi)$ may be used as SV kernels, or for which sets of parameters this might be possible. We show that this is impossible for any set of parameters. The technique is identical to the one of Examples 1 and 2: we have to show that k fails the conditions of Theorem 2. Since this is very technical (and is best done by using computer algebra programs, e.g. Maple), we refer the reader to [6] for details and explain for the simpler case of Theorem 3 how the method works. Expanding $\tanh(a + \xi)$ into a Taylor series yields

$\tanh(a + \xi) = \tanh a + \frac{\xi}{\cosh^2 a} - \frac{\xi^2 \tanh a}{\cosh^2 a} - \frac{\xi^3}{3}(1 - \tanh^2 a)(1 - 3\tanh^2 a) + O(\xi^4)$.   (20)

Now we analyze (20) coefficient-wise. Since all of the coefficients have to be nonnegative, we obtain from the first term $a \in [0, \infty)$, from the third term $a \in (-\infty, 0]$, and finally from the fourth term $|a| \in \left[\operatorname{arctanh} \frac{1}{\sqrt{3}},\; \operatorname{arctanh} 1\right]$.
This leaves us with $a \in \emptyset$; hence under no conditions on its parameters does the kernel above satisfy Mercer's condition.

6 Eigensystems on $U_d$

In order to find the eigensystem of $T_k$ on $U_d$ we have to find a different representation of k where the radial part $\|x\|\|y\|$ and the angular part $\xi = \frac{x}{\|x\|} \cdot \frac{y}{\|y\|}$ are factored out separately. We assume that k(x · y) can be written as

$k(x \cdot y) = \sum_{n=0}^{\infty} K_n(\|x\|, \|y\|)\, P_n^d(\xi)$   (21)

where the $K_n$ are polynomials. To see that we can always find such an expansion for analytic functions, first expand k in a Taylor series and then expand each coefficient $(\|x\|\|y\|\xi)^n$ into $(\|x\|\|y\|)^n \sum_{j=0}^{n} c_j(d, n)\, P_j^d(\xi)$. Rearranging terms into a series of $P_j^d$ gives expansion (21). This allows us to factorize the integral operator into its radial and its angular part. We obtain the following theorem:

Theorem 5 (Eigenfunctions of $T_k$ on $U_d$) For any kernel k with expansion (21) the eigensystem of the integral operator $T_k$ on $U_d$ is given by

$\varphi_{n,j,l}(x) = Y_{n,j}^d\!\left(\frac{x}{\|x\|}\right) \phi_{n,l}(\|x\|)$   (22)

with eigenvalues $\lambda_{n,j,l} = \frac{|S_{d-1}|}{N(d,n)}\, \lambda_{n,l}$ and multiplicity N(d, n), where $(\phi_{n,l}, \lambda_{n,l})$ is the eigensystem of the integral operator

$\int_0^1 r_x^{d-1}\, K_n(r_x, r_y)\, \phi_{n,l}(r_x)\, dr_x = \lambda_{n,l}\, \phi_{n,l}(r_y)$.   (23)

In general, (23) cannot be solved analytically. However, the accuracy of numerically solving (23) (a finite integral in one dimension) is much higher than when diagonalizing $T_k$ directly.

Proof. All we have to do is split the integral $\int_{U_d} dx$ into $\int_0^1 r^{d-1}\, dr \int_{S_{d-1}} d\Omega$. Moreover, note that since $T_k$ commutes with the group of rotations it follows from group theory [4] that we may separate the angular and the radial part in the eigenfunctions, hence use the ansatz $\varphi(x) = \varphi_0\!\left(\frac{x}{\|x\|}\right) \phi(\|x\|)$. Next apply the Funk-Hecke equation (13) to expand the associated Legendre polynomials $P_n^d$ into the spherical harmonics $Y_{n,j}^d$. As in Lemma 4 this leads to the spherical harmonics as the angular part of the eigensystem. The remaining radial part is then (23). See [6] for more details. □
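The negative result of Example 5 can also be seen on concrete data. The following sketch (our illustration, not from the paper) evaluates the Gram matrix of $\tanh(a + \langle x, y\rangle)$ with a = 0.5 on three points of the unit circle and finds a negative eigenvalue, while the admissible homogeneous polynomial kernel $(x \cdot y)^2$ stays positive semidefinite on the same points:

```python
import numpy as np

# Three points on the unit circle S^1.
X = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])
G = X @ X.T                      # pairwise dot products x_i . x_j

a = 0.5
K_tanh = np.tanh(a + G)          # candidate "neural network" kernel
K_poly = G ** 2                  # homogeneous polynomial kernel, p = 2

min_tanh = np.linalg.eigvalsh(K_tanh).min()
min_poly = np.linalg.eigvalsh(K_poly).min()
# For these three points the smallest eigenvalue of K_tanh works out to
# tanh(1.5) - 2*tanh(0.5), which is negative: K_tanh is indefinite.
print(min_tanh, min_poly)
```

A single negative eigenvalue of a Gram matrix already rules the kernel out, since Mercer's condition implies positive semidefiniteness on every finite point set.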
This leads to the eigensystem of the homogeneous polynomial kernel $k(x, y) = (x \cdot y)^p$: if we use (18) in conjunction with (12) to expand $\xi^p$ into a series of $P_n^d(\xi)$ we obtain an expansion of type (21) where $K_n(r_x, r_y) \propto (r_x r_y)^p$ for $n \leq p$ and $K_n(r_x, r_y) = 0$ otherwise. Hence, the only solution to (23) is $\phi_n(r) = r^p$, thus $\varphi_{n,j}(x) = \|x\|^p\, Y_{n,j}^d\!\left(\frac{x}{\|x\|}\right)$. Eigenvalues can be obtained in a similar way.

7 Discussion

In this paper we gave conditions on the properties of dot product kernels under which the latter satisfy Mercer's condition. While the requirements are relatively easy to check in the case where data is restricted to spheres (which allowed us to prove that several kernels can never be suitable SV kernels) and led to explicit formulations for eigenvalues and eigenfunctions, the corresponding calculations on balls are more intricate and mainly amenable to numerical analysis.

Acknowledgments: AS was supported by the DFG (Sm 62-1). The authors thank Bernhard Schölkopf for helpful discussions.

References

[1] C. J. C. Burges. Geometry and invariance in kernel based methods. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 89-116, Cambridge, MA, 1999. MIT Press.
[2] I. S. Gradshteyn and I. M. Ryzhik. Table of Integrals, Series, and Products. Academic Press, New York, 1981.
[3] J. Mercer. Functions of positive and negative type and their connection with the theory of integral equations. Philos. Trans. Roy. Soc. London, A 209:415-446, 1909.
[4] C. Müller. Analysis of Spherical Symmetries in Euclidean Spaces, volume 129 of Applied Mathematical Sciences. Springer, New York, 1997.
[5] N. Oliver, B. Schölkopf, and A. J. Smola. Natural regularization in SVMs. In A. J. Smola, P. L. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 51-60, Cambridge, MA, 2000. MIT Press.
[6] Z. Ovari. Kernels, eigenvalues and support vector machines.
Honours thesis, Australian National University, Canberra, 2000.
[7] I. Schoenberg. Positive definite functions on spheres. Duke Math. J., 9:96-108, 1942.
[8] A. Smola, B. Schölkopf, and K.-R. Müller. The connection between regularization operators and support vector kernels. Neural Networks, 11:637-649, 1998.
[9] G. Wahba. Spline Models for Observational Data, volume 59 of CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia, 1990.
[10] C. K. I. Williams. Prediction with Gaussian processes: From linear regression to linear prediction and beyond. In M. I. Jordan, editor, Learning and Inference in Graphical Models. Kluwer, 1998.
From Margin To Sparsity

Thore Graepel, Ralf Herbrich
Computer Science Department
Technical University of Berlin
Berlin, Germany
{guru, ralfh}@cs.tu-berlin.de

Robert C. Williamson
Department of Engineering
Australian National University
Canberra, Australia
Bob.Williamson@anu.edu.au

Abstract

We present an improvement of Novikoff's perceptron convergence theorem. Reinterpreting this mistake bound as a margin dependent sparsity guarantee allows us to give a PAC-style generalisation error bound for the classifier learned by the perceptron learning algorithm. The bound value crucially depends on the margin a support vector machine would achieve on the same data set using the same kernel. Ironically, the bound yields better guarantees than are currently available for the support vector solution itself.

1 Introduction

In the last few years there has been a large controversy about the significance of the attained margin, i.e. the smallest real valued output of a classifier before thresholding, as an indicator of generalisation performance. Results in the VC, PAC and luckiness frameworks seem to indicate that a large margin is a prerequisite for small generalisation error bounds (see [14, 12]). These results caused many researchers to focus on large margin methods such as the well known support vector machine (SVM). On the other hand, the notion of sparsity is deemed important for generalisation as can be seen from the popularity of Occam's razor like arguments as well as compression considerations (see [8]). In this paper we reconcile the two notions by reinterpreting an improved version of Novikoff's well known perceptron convergence theorem as a sparsity guarantee in dual space: the existence of large margin classifiers implies the existence of sparse consistent classifiers in dual space. Even better, this solution is easily found by the perceptron algorithm.
By combining the perceptron mistake bound with a compression bound that originated from the work of Littlestone and Warmuth [8] we are able to provide a PAC like generalisation error bound for the classifier found by the perceptron algorithm whose size is determined by the magnitude of the maximally achievable margin on the dataset. The paper is structured as follows: after introducing the perceptron in dual variables in Section 2 we improve on Novikoff's perceptron convergence bound in Section 3. Our main result is presented in the subsequent section and its consequences for the theoretical foundation of SVMs are discussed in Section 5.

2 (Dual) Kernel Perceptrons

We consider learning given m objects $X = \{x_1, \ldots, x_m\} \in \mathcal{X}^m$ and a set $Y = \{y_1, \ldots, y_m\} \in \mathcal{Y}^m$ drawn iid from a fixed distribution $P_{XY} = P_Z$ over the space $\mathcal{X} \times \{-1, +1\} = \mathcal{Z}$ of input-output pairs. Our hypotheses are linear classifiers $x \mapsto \mathrm{sign}(\langle w, \phi(x)\rangle)$ in some fixed feature space $\mathcal{K} \subseteq \ell_2$, where we assume that a mapping $\phi : \mathcal{X} \to \mathcal{K}$ is chosen a priori¹. Given the features $\phi_i : \mathcal{X} \to \mathbb{R}$, the classical (primal) perceptron algorithm aims at finding a weight vector $w \in \mathcal{K}$ consistent with the training data. Recently, Vapnik [14] and others in their work on SVMs have rediscovered that it may be advantageous to learn in the dual representation (see [1]), i.e. expanding the weight vector in terms of the training data,

$w_\alpha = \sum_{i=1}^{m} \alpha_i \phi(x_i) = \sum_{i=1}^{m} \alpha_i x_i$,   (1)

and learn the m expansion coefficients $\alpha \in \mathbb{R}^m$ rather than the components of $w \in \mathcal{K}$. This is particularly useful if the dimensionality $n = \dim(\mathcal{K})$ of the feature space $\mathcal{K}$ is much greater (or possibly infinite) than the number m of training points. This dual representation can be used for a rather wide class of learning algorithms (see [15]), in particular if all we need for learning is the real valued output $\langle w, x_i\rangle_{\mathcal{K}}$ of the classifier at the m training points $x_1, \ldots, x_m$.
Thus it suffices to choose a symmetric function $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ called a kernel and to ensure that there exists a mapping $\phi_k : \mathcal{X} \to \mathcal{K}$ such that

$\forall x, x' \in \mathcal{X}: \quad k(x, x') = \langle \phi_k(x), \phi_k(x')\rangle_{\mathcal{K}}$.   (2)

A sufficient condition is given by Mercer's theorem.

Theorem 1 (Mercer Kernel [9, 7]). Any symmetric function $k \in L_\infty(\mathcal{X} \times \mathcal{X})$ that is positive semidefinite, i.e.

$\forall f \in L_2(\mathcal{X}): \quad \int_{\mathcal{X}} \int_{\mathcal{X}} k(x, x')\, f(x)\, f(x')\, dx\, dx' \geq 0$,

is called a Mercer kernel and has the following property: if $\psi_i \in L_2(\mathcal{X})$ solve the eigenvalue problem $\int_{\mathcal{X}} k(x, x')\, \psi_i(x')\, dx' = \lambda_i \psi_i(x)$ with $\int_{\mathcal{X}} \psi_i^2(x)\, dx = 1$ and $\forall i \neq j: \int_{\mathcal{X}} \psi_i(x)\, \psi_j(x)\, dx = 0$, then k can be expanded in a uniformly convergent series, i.e.

$k(x, x') = \sum_{i=1}^{\infty} \lambda_i \psi_i(x)\, \psi_i(x')$.

In order to see that a Mercer kernel fulfils equation (2) consider the mapping

$\phi_k(x) = \left(\sqrt{\lambda_1}\, \psi_1(x),\; \sqrt{\lambda_2}\, \psi_2(x),\; \ldots\right)$   (3)

whose existence is ensured by the third property. Finally, the perceptron learning algorithm we are going to consider is described in the following definition.

Definition 1 (Perceptron Learning). The perceptron learning procedure with the fixed learning rate $\eta \in \mathbb{R}_+$ is as follows:

1. Start in step zero, i.e. t = 0, with the vector $\alpha_t = 0$.
2. If there exists an index $i \in \{1, \ldots, m\}$ such that $y_i \langle w_{\alpha_t}, x_i\rangle_{\mathcal{K}} \leq 0$ then

$(\alpha_{t+1})_i = (\alpha_t)_i + \eta y_i \quad \Leftrightarrow \quad w_{\alpha_{t+1}} = w_{\alpha_t} + \eta y_i x_i$,   (4)

and $t \leftarrow t + 1$.
3. Stop, if there is no $i \in \{1, \ldots, m\}$ such that $y_i \langle w_{\alpha_t}, x_i\rangle_{\mathcal{K}} \leq 0$.

¹Sometimes we abbreviate $\phi(x)$ by x, always assuming $\phi$ is fixed.
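For concreteness, here is a minimal sketch of the dual perceptron of Definition 1 (our code, not from the paper; the toy data, the linear kernel and the helper name kernel_perceptron are made up for illustration):

```python
import numpy as np

def kernel_perceptron(X, y, k, eta=1.0, max_epochs=100):
    """Dual perceptron (Definition 1): learn the expansion coefficients
    alpha of w_alpha = sum_i alpha_i * phi(x_i)."""
    m = len(y)
    K = np.array([[k(xi, xj) for xj in X] for xi in X])  # Gram matrix
    alpha, mistakes = np.zeros(m), 0
    for _ in range(max_epochs):
        updated = False
        for i in range(m):
            # real-valued output <w_alpha, phi(x_i)> = sum_j alpha_j K_ji
            if y[i] * (alpha @ K[:, i]) <= 0:
                alpha[i] += eta * y[i]       # update rule (4)
                mistakes += 1
                updated = True
        if not updated:                      # step 3: consistent, stop
            break
    return alpha, mistakes

# Separable toy set: w* = (1, 0) attains margin 1 with R(X) = sqrt(4.25),
# so Novikoff's theorem (Section 3) allows at most floor(4.25) = 4 mistakes.
X = np.array([[1.0, 1.0], [2.0, 0.5], [1.5, -1.0],
              [-1.0, -1.0], [-2.0, 0.3], [-1.2, 1.5]])
y = np.array([1, 1, 1, -1, -1, -1])
alpha, mistakes = kernel_perceptron(X, y, k=np.dot)
print(mistakes, np.count_nonzero(alpha))
```

Note that the number of non-zero coefficients can never exceed the number of updates, which is the sparsity reinterpretation exploited in Section 4.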
The quantity determining the upper bound is the maximally achievable unnormalised margin maxaElR.~ 'Yz (a) normalised by the total extent R(X) of the data in feature space, i.e. R (X) = maxxiEX IlxillJ(. Definition 2 (Unnormalised Margin). Given a training set Z = (X, Y) and a vector a E IRm the unnormalised margin 'Yz (a) is given by ( ) . Yi (Wa,Xi)J( 'Yz a = mm ( Xi,y;)EZ IlwallJ( Theorem 2 (Novikoffs Percept ron Convergence Theorem 110,1]). Let Z = (X, Y) be a training set of size m. Suppose that there exists a vector a* E IRm such that 'Yz (a*) > O. Then the number of mistakes made by the perceptron algorithm in Definition 1 on Z is at most ( R(X) )2 'Yz (a*) Surprisingly, this bound is highly influenced by the data point Xi E X with the largest norm IIXil1 albeit rescaling of a data point would not change its classification. Let us consider rescaling of the training set X before applying the perceptron algorithm. Then for the normalised training set we would have R (Xnorm ) = 1 and 'Yz (a) would change into the normalised margin rz (a) first advocated in [6). Definition 3 (Normalised Margin). Given a training set Z = (X, Y) and a vector a E IRm the normalised margin rz (a) is given by r ( ) . Yi (wa, Xi)J( za = mm . (Xi,y;)EZ IIwallJ( IIXillJ( By definition, for all Xi E X we have R (X) 2: Ilxi IIJ(. Hence for any a E IRm and all (Xi ,Yi) E Z such that Yi (Wa,Xi)J( > 0 R(X) > IIXillJ( 1 Yi(Wa,Xi)1C Yi(Wa,Xi)1C Yi(Wa,Xi)f,' IIwall lC IlwalllC IIwalldxirlC which immediately implies for all Z = (X, Y) E zm such that 'Yz (a) > 0 R(X) > _1_. (5) 'Yz (a) - rz (a) Thus when normalising the data in feature space, i.e. k (x Xl) _ k (X,XI) norm , ...jk(x,x).k(X',X') ' the upper bound on the number of steps until convergence of the classical perceptron learning procedure of Rosenblatt [11) is provably decreasing and is given by the squared r.h.s of (5). 
Considering the form of the update rule (4) we observe that this result not only bounds the number of mistakes made during learning but also the number $\|\alpha\|_0$ of non-zero coefficients in the $\alpha$ vector. To be precise, for $\eta = 1$ it bounds the $\ell_1$ norm $\|\alpha\|_1$ of the coefficient vector $\alpha$ which, in turn, bounds the zero norm $\|\alpha\|_0$ from above for all vectors with integer components. Theorem 2 thus establishes a relation between the existence of a large margin classifier $w^*$ and the sparseness of any solution found by the perceptron algorithm.

4 Main Result

In order to exploit the guaranteed sparseness of the solution of a kernel perceptron we make use of the following lemma to be found in [8, 4].

Lemma 1 (Compression Lemma). Fix $d \in \{1, \ldots, m\}$. For any measure $P_Z$, the probability that m examples Z drawn iid according to $P_Z$ will yield a classifier $\alpha(Z)$ learned by the perceptron algorithm with $\|\alpha(Z)\|_0 = d$ whose generalisation error $P_{XY}[Y \langle w_{\alpha(Z)}, \phi(X)\rangle_{\mathcal{K}} \leq 0]$ is greater than $\varepsilon$ is at most

$\binom{m}{d}(1 - \varepsilon)^{m-d}$.   (6)

Proof. Since we restrict the solution $\alpha(Z)$ with generalisation error greater than $\varepsilon$ only to use d points $Z_d \subseteq Z$ but still to be consistent with the remaining set $Z \setminus Z_d$, this probability is at most $(1 - \varepsilon)^{m-d}$ for a fixed subset $Z_d$. The result follows by the union bound over all $\binom{m}{d}$ subsets $Z_d$. Intuitively, the consistency on the m - d unused training points witnesses the small generalisation error with high probability. □

If we set (6) to $\frac{\delta}{m}$ and solve for $\varepsilon$ we have that with probability at most $\frac{\delta}{m}$ over the random draw of the training set Z the perceptron learning algorithm finds a vector $\alpha$ such that $\|\alpha\|_0 = d$ and whose generalisation error is greater than

$\varepsilon(m, d) = \frac{1}{m - d}\left(\ln\binom{m}{d} + \ln m + \ln\frac{1}{\delta}\right)$.

Thus by the union bound, if the perceptron algorithm converges, the probability that the generalisation error of its solution is greater than $\varepsilon(m, \|\alpha\|_0)$ is at most $\delta$. We have shown the following sparsity bound, also to be found in [4].
Theorem 3 (Generalisation Error Bound for Perceptrons). For any measure $P_Z$, with probability at least $1 - \delta$ over the random draw of the training set $Z$ of size $m$, if the perceptron learning algorithm converges to the vector $\alpha$ of coefficients then its generalisation error $P_{XY}[Y \langle w_{\alpha(Z)}, \phi(X) \rangle_{\mathcal{K}} \le 0]$ is less than
$$\varepsilon(m, \|\alpha\|_0) = \frac{1}{m - \|\alpha\|_0} \left( \ln \binom{m}{\|\alpha\|_0} + \ln m + \ln \frac{1}{\delta} \right). \qquad (7)$$

This theorem in itself constitutes a powerful result and can easily be adapted to hold for a large class of learning algorithms including SVMs [4]. This bound often outperforms margin bounds for practically relevant training set sizes, e.g. $m < 100\,000$. Combining Theorem 2 and Theorem 3 thus gives our main result.

Theorem 4 (Margin Bound). For any measure $P_Z$, with probability at least $1 - \delta$ over the random draw of the training set $Z$ of size $m$, if there exists a vector $\alpha^*$ such that $\kappa^* = \left\lceil \left( \frac{R(X)}{\gamma_Z(\alpha^*)} \right)^2 \right\rceil \le m$, then the generalisation error $P_{XY}[Y \langle w_{\alpha(Z)}, \phi(X) \rangle_{\mathcal{K}} \le 0]$ of the classifier $\alpha$ found by the perceptron algorithm is less than
$$\frac{1}{m - \kappa^*} \left( \ln \binom{m}{\kappa^*} + \ln m + \ln \frac{1}{\delta} \right). \qquad (8)$$

The most intriguing feature of this result is that the mere existence of a large margin classifier $\alpha^*$ is sufficient to guarantee a small generalisation error for the solution $\alpha$ of the perceptron, although its attained margin $\gamma_Z(\alpha)$ is likely to be much smaller than $\gamma_Z(\alpha^*)$. It has long been argued that the attained margin $\gamma_Z(\alpha)$ itself is the crucial quantity controlling the generalisation error of $\alpha$. In light of our new result, if there exists a consistent classifier $\alpha^*$ with large margin then we know that there also exists at least one classifier $\alpha$ with high sparsity that can efficiently be found using the perceptron algorithm. In fact, whenever the SVM appears to be theoretically justified by a large observed margin, every solution found by the perceptron algorithm has a small guaranteed generalisation error, mostly even smaller than current bounds on the generalisation error of SVMs.
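The bound $\varepsilon(m, d)$ is easy to evaluate numerically. The sketch below (my own, using `lgamma` for a stable log-binomial) plugs in the digit-"0" values from Table 1 ($m = 60000$, $\|\alpha\|_0 = 740$ for the perceptron, $\|\alpha\|_0 = 1379$ for the SVM, $\delta = 0.05$) and lands near the 6.7% and 11.2% bound values reported there.

```python
import math

def log_binom(m, d):
    """ln C(m, d), computed stably via lgamma."""
    return math.lgamma(m + 1) - math.lgamma(d + 1) - math.lgamma(m - d + 1)

def eps_bound(m, d, delta):
    """epsilon(m, d) = (ln C(m, d) + ln m + ln(1/delta)) / (m - d)."""
    return (log_binom(m, d) + math.log(m) + math.log(1.0 / delta)) / (m - d)

# Digit "0" of Table 1: perceptron with 740 and SVM with 1379 non-zero coefficients.
eps_perc = eps_bound(60000, 740, 0.05)    # roughly 0.067, i.e. the 6.7% entry
eps_svm = eps_bound(60000, 1379, 0.05)    # roughly 0.112, i.e. the 11.2% entry
```

The bound grows with $d$, which is why the sparser perceptron solution receives the smaller guarantee.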
Note that for a given training sample $Z$ it is not unlikely that, by permutation of $Z$, there exist $O\!\left(\binom{m}{\kappa^*}\right)$ many different consistent sparse classifiers $\alpha$.

5 Impact on the Foundations of Support Vector Machines

Support vector machines owe their popularity mainly to their theoretical justification in learning theory. In particular, two arguments have been put forward to single out the solutions found by SVMs [14, p. 139]: SVMs (optimal hyperplanes) can generalise because
1. the expectation of the data compression is large;
2. the expectation of the margin is large.
The second reason is often justified by margin results (see [14, 12]) which bound the generalisation of a classifier $\alpha$ in terms of its own attained margin $\gamma_Z(\alpha)$. If we require the slightly stronger condition that $\kappa^* \le \frac{m}{n}$, $n \ge 4$, then our bound (8) for solutions of perceptron learning can be upper bounded by
$$\frac{n}{(n-1)\, m} \left( \kappa^* \ln \frac{em}{\kappa^*} + \ln m + \ln \frac{1}{\delta} \right),$$
which has to be compared with the PAC margin bound (see [12, 5])
$$\frac{2}{m} \left( 64\, \kappa^* \log_2 \frac{em}{\kappa^*} \log_2(32m) + \log_2(2m) + \log_2 \frac{1}{\delta} \right).$$
Despite the fact that the former result also holds true for the margin $\Gamma_Z(\alpha^*)$ (which could loosely be upper bounded by (5)),
• the PAC margin bound's decay (as a function of $m$) is slower by a $\log_2(32m)$ factor,

digit        0    1    2    3    4    5    6    7    8    9
perceptron  0.2  0.2  0.4  0.4  0.4  0.4  0.4  0.5  0.6  0.7
  ‖α‖₀      740  643 1168 1512 1078 1277  823 1103 1856 1920
  mistakes  844  843 1345 1811 1222 1497  960 1323 2326 2367
  bound     6.7  6.0  9.8 12.0  9.2 10.5  7.4  9.4 14.3 14.6
SVM         0.2  0.1  0.4  0.4  0.4  0.5  0.3  0.4  0.5  0.6
  ‖α‖₀     1379  989 1958 1900 1224 2024 1527 2064 2332 2765
  bound    11.2  8.6 14.9 14.5 10.2 15.3 12.2 15.5 17.1 19.6

Table 1: Results of kernel perceptrons and SVMs on NIST (taken from [2, Table 3]). The kernel used was $k(x, x') = (\langle x, x' \rangle_{\mathcal{X}} + 1)^4$ and $m = 60\,000$. For both algorithms we give the measured generalisation error (in %), the attained sparsity and the bound value (in %, $\delta = 0.05$) of (7).
• for any $m$ and almost any $\delta$, the margin bound given in Theorem 4 guarantees a smaller generalisation error.
• For example, using the empirical value $\kappa^* \approx 600$ (see [14, p. 153]) in the NIST handwritten digit recognition task and inserting this value into the PAC margin bound, it would need the astronomically large number of $m > 410\,743\,386$ examples to obtain a bound value of 0.112 as obtained by (7) for the digit "0" (see Table 1).

With regard to the first reason, it has been confirmed experimentally that SVMs find solutions which are sparse in the expansion coefficients $\alpha$. However, there cannot exist any distribution-free guarantee that the number of support vectors will in fact be small². In contrast, Theorem 2 gives an explicit bound on the sparsity in terms of the achievable margin $\gamma_Z(\alpha^*)$. Furthermore, experimental results on the NIST datasets show that the sparsity of the solution found by the perceptron algorithm is consistently (and often by a factor of two) greater than that of the SVM solution (see [2, Table 3] and Table 1).

6 Conclusion

We have shown that the generalisation error of a very simple and efficient learning algorithm for linear classifiers, the perceptron algorithm, can be bounded by a quantity involving the margin of the classifier the SVM would have found on the same training data using the same kernel. This result implies that the SVM solution is not at all singled out as being superior in terms of provable generalisation error. Also, the result indicates that sparsity of the solution may be a more fundamental property than the size of the attained margin (since a large value of the latter implies a large value of the former). Our analysis raises an interesting question: having chosen a good kernel, corresponding to a metric in which inter-class distances are great and intra-class distances are short, to what extent does it matter which consistent classifier we use?
Experimental results seem to indicate that a vast variety of heuristics for finding consistent classifiers, e.g. kernel Fisher discriminant, linear programming machines, Bayes point machines, kernel PCA & linear SVM, and sparse greedy matrix approximation, perform comparably (see http://www.kernel-machines.org/).

²Consider a distribution $P_{XY}$ on two parallel lines with support in the unit ball. Suppose that their mutual distance is $\sqrt{2}$. Then the number of support vectors equals the training set size, whereas the perceptron algorithm never uses more than two points by Theorem 2. One could argue that it is the number of essential support vectors [13] that characterises the data compression of an SVM (which would also have been two in our example). Their determination, however, involves a combinatorial optimisation problem and can thus never be performed in practical applications.

Acknowledgements

This work was done while TG and RH were visiting the ANU Canberra. They would like to thank Peter Bartlett and Jon Baxter for many interesting discussions. Furthermore, we would like to thank the anonymous reviewer, Olivier Bousquet and Matthias Seeger for very useful remarks on the paper.

References

[1] M. Aizerman, E. Braverman, and L. Rozonoer. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25:821-837, 1964.
[2] Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 1999.
[3] T. Friess, N. Cristianini, and C. Campbell. The Kernel-Adatron: A fast and simple learning procedure for Support Vector Machines. In Proceedings of the 15th International Conference on Machine Learning, pages 188-196, 1998.
[4] T. Graepel, R. Herbrich, and J. Shawe-Taylor. Generalisation error bounds for sparse linear classifiers. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, pages 298-303, 2000. In press.
[5] R. Herbrich.
Learning Linear Classifiers: Theory and Algorithms. PhD thesis, Technische Universität Berlin, 2000. Accepted for publication by MIT Press.
[6] R. Herbrich and T. Graepel. A PAC-Bayesian margin bound for linear classifiers: Why SVMs work. In Advances in Neural Information Processing Systems 13, 2001.
[7] H. König. Eigenvalue Distribution of Compact Operators. Birkhäuser, Basel, 1986.
[8] N. Littlestone and M. Warmuth. Relating data compression and learnability. Technical report, University of California Santa Cruz, 1986.
[9] J. Mercer. Functions of positive and negative type and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London (A), 209:415-446, 1909.
[10] A. Novikoff. On convergence proofs for perceptrons. In Report at the Symposium on Mathematical Theory of Automata, pages 24-26, Polytechnical Institute Brooklyn, 1962.
[11] F. Rosenblatt. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan Books, Washington D.C., 1962.
[12] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44(5):1926-1940, 1998.
[13] V. Vapnik. Statistical Learning Theory. John Wiley and Sons, New York, 1998.
[14] V. Vapnik. The Nature of Statistical Learning Theory. Springer, second edition, 1999.
[15] G. Wahba. Support Vector Machines, Reproducing Kernel Hilbert Spaces and the randomized GACV. Technical report TR-NO-984, Department of Statistics, University of Wisconsin, Madison, 1997.
On a Connection between Kernel PCA and Metric Multidimensional Scaling

Christopher K. I. Williams
Division of Informatics, The University of Edinburgh
5 Forrest Hill, Edinburgh EH1 2QL, UK
c.k.i.williams@ed.ac.uk  http://anc.ed.ac.uk

Abstract

In this paper we show that the kernel PCA algorithm of Schölkopf et al (1998) can be interpreted as a form of metric multidimensional scaling (MDS) when the kernel function $k(x, y)$ is isotropic, i.e. it depends only on $\|x - y\|$. This leads to a metric MDS algorithm where the desired configuration of points is found via the solution of an eigenproblem rather than through the iterative optimization of the stress objective function. The question of kernel choice is also discussed.

1 Introduction

Suppose we are given $n$ objects, and for each pair $(i, j)$ we have a measurement of the "dissimilarity" $\delta_{ij}$ between the two objects. In multidimensional scaling (MDS) the aim is to place $n$ points in a low dimensional space (usually Euclidean) so that the interpoint distances $d_{ij}$ have a particular relationship to the original dissimilarities. In classical scaling we would like the interpoint distances to be equal to the dissimilarities. For example, classical scaling can be used to reconstruct a map of the locations of some cities given the distances between them. In metric MDS the relationship is of the form $d_{ij} \approx f(\delta_{ij})$ where $f$ is a specific function. In this paper we show that the kernel PCA algorithm of Schölkopf et al [7] can be interpreted as performing metric MDS if the kernel function is isotropic. This is achieved by performing classical scaling in the feature space defined by the kernel. The structure of the remainder of this paper is as follows: In section 2 classical and metric MDS are reviewed, and in section 3 the kernel PCA algorithm is described. The link between the two methods is made in section 4. Section 5 describes approaches to choosing the kernel function, and we finish with a brief discussion in section 6.
2 Classical and metric MDS

2.1 Classical scaling

Given $n$ objects and the corresponding dissimilarity matrix, classical scaling is an algebraic method for finding a set of points in space so that the dissimilarities are well-approximated by the interpoint distances. The classical scaling algorithm is introduced below by starting with the locations of $n$ points, constructing a dissimilarity matrix based on their Euclidean distances, and then showing how the configuration of the points can be reconstructed (as far as possible) from the dissimilarity matrix. Let the coordinates of $n$ points in $p$ dimensions be denoted by $x_i$, $i = 1, \ldots, n$. These can be collected together in an $n \times p$ matrix $X$. The dissimilarities are calculated by $\delta_{ij}^2 = (x_i - x_j)^T (x_i - x_j)$. Given these dissimilarities, we construct the matrix $A$ such that $a_{ij} = -\frac{1}{2} \delta_{ij}^2$, and then set $B = HAH$, where $H$ is the centering matrix $H = I_n - \frac{1}{n} \mathbf{1}\mathbf{1}^T$. With $\delta_{ij}^2 = (x_i - x_j)^T (x_i - x_j)$, the construction of $B$ leads to $b_{ij} = (x_i - \bar{x})^T (x_j - \bar{x})$, where $\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i$. In matrix form we have $B = (HX)(HX)^T$, and $B$ is real, symmetric and positive semi-definite. Let the eigendecomposition of $B$ be $B = V \Lambda V^T$, where $\Lambda$ is a diagonal matrix and $V$ is a matrix whose columns are the eigenvectors of $B$. If $p < n$, there will be $n - p$ zero eigenvalues¹. If the eigenvalues are ordered $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n \ge 0$, then $B = V_p \Lambda_p V_p^T$, where $\Lambda_p = \mathrm{diag}(\lambda_1, \ldots, \lambda_p)$ and $V_p$ is the $n \times p$ matrix whose columns correspond to the first $p$ eigenvectors of $B$, with the usual normalization so that the eigenvectors have unit length. The matrix $\hat{X}$ of the reconstructed coordinates of the points can be obtained as $\hat{X} = V_p \Lambda_p^{1/2}$, with $B = \hat{X} \hat{X}^T$. Clearly, from the information in the dissimilarities one can only recover the original coordinates up to a translation, a rotation and reflections of the axes; the solution obtained for $\hat{X}$ is such that the origin is at the mean of the $n$ points, and that the axes chosen by the procedure are the principal axes of the $\hat{X}$ configuration.
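The whole classical scaling recipe ($A$ from the squared dissimilarities, double centering, eigendecomposition, coordinates $V_k \Lambda_k^{1/2}$) fits in a few lines. As a concrete sketch on toy data of my own: for Euclidean dissimilarities it recovers the interpoint distances exactly.

```python
import numpy as np

def classical_scaling(D2, k):
    """Classical scaling from a matrix of squared dissimilarities D2.
    Returns the k-dimensional principal coordinates."""
    n = D2.shape[0]
    A = -0.5 * D2
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = H @ A @ H
    lam, V = np.linalg.eigh(B)                 # ascending eigenvalues
    idx = np.argsort(lam)[::-1][:k]            # take the k largest
    return V[:, idx] * np.sqrt(np.maximum(lam[idx], 0.0))

# Reconstruct a planar configuration from its Euclidean distances.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
Xhat = classical_scaling(D2, 2)
D2hat = ((Xhat[:, None, :] - Xhat[None, :, :]) ** 2).sum(-1)
```

Since the points truly lie in two dimensions, the reconstructed configuration reproduces every squared distance (up to the unavoidable translation, rotation and reflection).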
It may not be necessary to use all $p$ dimensions to obtain a reasonable approximation; a configuration $\hat{X}$ in $k$ dimensions can be obtained by using the largest $k$ eigenvalues, so that $\hat{X} = V_k \Lambda_k^{1/2}$. These are known as the principal coordinates of $X$ in $k$ dimensions. The fraction of the variance explained by the first $k$ eigenvalues is $\sum_{i=1}^k \lambda_i / \sum_{i=1}^p \lambda_i$. Classical scaling as explained above works on Euclidean distances as the dissimilarities. However, one can run the same algorithm with a non-Euclidean dissimilarity matrix, although in this case there is no guarantee that the eigenvalues will be non-negative. Classical scaling derives from the work of Schoenberg and of Young and Householder in the 1930's. Expositions of the theory can be found in [5] and [2].

2.1.1 Optimality properties of classical scaling

Mardia et al [5] (section 14.4) give the following optimality property of the classical scaling solution.

¹In fact, if the points are not in "general position" the number of zero eigenvalues will be greater than $n - p$. Below we assume that the points are in general position, although the arguments can easily be carried through with minor modifications if this is not the case.

Theorem 1. Let $X$ denote a configuration of points in $\mathbb{R}^p$, with interpoint distances $\delta_{ij}^2 = (x_i - x_j)^T (x_i - x_j)$. Let $L$ be a $p \times p$ rotation matrix and set $L = (L_1, L_2)$, where $L_1$ is $p \times k$ for $k < p$. Let $\hat{X} = X L_1$, the projection of $X$ onto a $k$-dimensional subspace of $\mathbb{R}^p$, and let $d_{ij}^2 = (\hat{x}_i - \hat{x}_j)^T (\hat{x}_i - \hat{x}_j)$. Amongst all projections $\hat{X} = X L_1$, the quantity $\phi = \sum_{i,j} (\delta_{ij}^2 - d_{ij}^2)$ is minimized when $X$ is projected onto its principal coordinates in $k$ dimensions. For all $i, j$ we have $d_{ij} \le \delta_{ij}$. The value of $\phi$ for the principal coordinate projection is $\phi = 2n(\lambda_{k+1} + \cdots + \lambda_p)$.

2.2 Relationships between classical scaling and PCA

There is a well-known relationship between PCA and classical scaling; see e.g. Cox and Cox (1994) section 2.2.7.
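Theorem 1's residual formula can be checked numerically. In this sketch (random 3-d points of my own), projecting onto the first $k = 2$ principal coordinates leaves exactly $\phi = \sum_{i,j}(\delta_{ij}^2 - d_{ij}^2) = 2n(\lambda_3 + \cdots + \lambda_p)$, and every fitted distance is dominated by the original one.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(7, 3))                      # 7 points in 3 dimensions
n = X.shape[0]
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
H = np.eye(n) - np.ones((n, n)) / n
B = H @ (-0.5 * D2) @ H
lam, V = np.linalg.eigh(B)
lam, V = lam[::-1], V[:, ::-1]                   # descending eigenvalues
k = 2
Xk = V[:, :k] * np.sqrt(lam[:k])                 # principal coordinates in k dims
d2 = ((Xk[:, None, :] - Xk[None, :, :]) ** 2).sum(-1)
phi = (D2 - d2).sum()                            # sum of squared-distance residuals
```

The test below confirms both claims of the theorem on this configuration.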
Principal components analysis (PCA) is concerned with the eigendecomposition of the sample covariance matrix $S = \frac{1}{n} X^T H X$. It is easy to show that the eigenvalues of $nS$ are the $p$ non-zero eigenvalues of $B$. To see this, note that $H^2 = H$ and thus that $nS = (HX)^T (HX)$. Let $v_i$ be a unit-length eigenvector of $B$ so that $B v_i = \lambda_i v_i$. Premultiplying by $(HX)^T$ yields
$$(HX)^T (HX) (HX)^T v_i = \lambda_i (HX)^T v_i, \qquad (1)$$
so we see that $\lambda_i$ is an eigenvalue of $nS$. $y_i = (HX)^T v_i$ is the corresponding eigenvector; note that $y_i^T y_i = \lambda_i$. Centering $X$ and projecting onto the unit vector $\hat{y}_i = \lambda_i^{-1/2} y_i$ we obtain
$$H X \hat{y}_i = \lambda_i^{-1/2} H X (HX)^T v_i = \lambda_i^{1/2} v_i. \qquad (2)$$
Thus we see that the projection of $X$ onto the eigenvectors of $nS$ returns the classical scaling solution.

2.3 Metric MDS

The aim of classical scaling is to find a configuration of points $\hat{X}$ so that the interpoint distances $d_{ij}$ well approximate the dissimilarities $\delta_{ij}$. In metric MDS this criterion is relaxed, so that instead we require
$$d_{ij} \approx f(\delta_{ij}), \qquad (3)$$
where $f$ is a specified (analytic) function. For this definition see, e.g., Kruskal and Wish [4] (page 22), where polynomial transformations are suggested. A straightforward way to carry out metric MDS is to define an error function (or stress)
$$S = \sum_{i<j} w_{ij} \left( d_{ij} - f(\delta_{ij}) \right)^2, \qquad (4)$$
where the $\{w_{ij}\}$ are appropriately chosen weights. One can then obtain derivatives of $S$ with respect to the coordinates of the points that define the $d_{ij}$'s and use gradient-based (or more sophisticated) methods to minimize the stress. This method is known as least-squares scaling. An early reference to this kind of method is Sammon (1969) [6], where $w_{ij} = 1/\delta_{ij}$ and $f$ is the identity function. Note that if $f(\delta_{ij})$ has some adjustable parameters $\theta$ and is linear with respect to $\theta$², then the function $f$ can also be adapted, and the optimal value for those parameters given the current $d_{ij}$'s can be obtained by (weighted) least-squares regression.

²$f$ can still be a non-linear function of its argument.
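A minimal least-squares scaling loop (my own sketch, using unit weights rather than Sammon's $1/\delta_{ij}$ and the identity for $f$) makes the contrast with the eigenproblem route explicit: here the stress is reduced by plain gradient descent on the point coordinates.

```python
import numpy as np

def pair_dists(X):
    return np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))

def stress(X, delta):
    """S = sum_{i<j} (d_ij - delta_ij)^2 with unit weights."""
    iu = np.triu_indices(len(X), k=1)
    return ((pair_dists(X) - delta)[iu] ** 2).sum()

def lsq_scaling(delta, k=2, iters=1000, lr=0.01, seed=0):
    """Gradient descent on the unit-weight least-squares stress."""
    rng = np.random.default_rng(seed)
    n = delta.shape[0]
    X = rng.normal(size=(n, k))
    for _ in range(iters):
        d = pair_dists(X) + np.eye(n)            # dummy diagonal avoids 0/0
        r = (d - delta) / d                       # residual ratios
        np.fill_diagonal(r, 0.0)
        X -= lr * 2 * (r.sum(1)[:, None] * X - r @ X)   # dS/dX
    return X

rng = np.random.default_rng(4)
delta = pair_dists(rng.normal(size=(6, 2)))      # target dissimilarities
X0 = lsq_scaling(delta, iters=0)                 # random start (same seed)
X1 = lsq_scaling(delta, iters=1000)
```

Unlike the classical scaling eigenproblem, this iteration can stop in a local minimum and its result depends on the random initialization, which is exactly the drawback the kernel MDS view avoids.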
Critchley (1978) [3] (also mentioned in section 2.4.2 of Cox and Cox) carried out metric MDS by running the classical scaling algorithm on the transformed dissimilarities. Critchley suggests the power transformation $f(\delta_{ij}) = \delta_{ij}^\mu$ (for $\mu > 0$). If the dissimilarities are derived from Euclidean distances, we note that the kernel $k(x, y) = -\|x - y\|^\beta$ is conditionally positive definite (CPD) if $\beta \le 2$ [1]. When the kernel is CPD, the centered matrix will be positive semi-definite. Critchley's use of the classical scaling algorithm is similar to the algorithm discussed below, but crucially the kernel PCA method ensures that the matrix $B$ derived from the transformed dissimilarities is non-negative definite, while this is not guaranteed by Critchley's transformation for arbitrary $\mu$. A further member of the MDS family is nonmetric MDS (NMDS), also known as ordinal scaling. Here it is only the relative rank ordering between the $d$'s and the $\delta$'s that is taken to be important; this constraint can be imposed by demanding that the function $f$ in equation 3 is monotonic. This constraint makes sense for some kinds of dissimilarity data (e.g. from psychology) where only the rank orderings have real meaning.

3 Kernel PCA

In recent years there has been an explosion of work on kernel methods. For supervised learning these include support vector machines [8], Gaussian process prediction (see, e.g., [10]) and spline methods [9]. The basic idea of these methods is to use the "kernel trick". A point $x$ in the original space is re-represented as a point $\phi(x)$ in an $N_F$-dimensional feature space³ $F$, where $\phi(x) = (\phi_1(x), \phi_2(x), \ldots, \phi_{N_F}(x))$. We can think of each function $\phi_j(\cdot)$ as a non-linear mapping. The key to the kernel trick is to realize that for many algorithms the only quantities required are of the form⁴ $\phi(x_i) \cdot \phi(x_j)$, and thus if these can be easily computed by a non-linear function $k(x_i, x_j) = \phi(x_i) \cdot \phi(x_j)$ we can save much time and effort.
Schölkopf, Smola and Müller [7] used this trick to define kernel PCA. One could compute the covariance matrix in the feature space and then calculate its eigenvectors/eigenvalues. However, using the relationship between $B$ and the sample covariance matrix $S$ described above, we can instead consider the $n \times n$ matrix $K$ with entries $K_{ij} = k(x_i, x_j)$ for $i, j = 1, \ldots, n$. If $N_F > n$, using $K$ will be more efficient than working with the covariance matrix in feature space, and anyway the latter would be singular. The data should be centered in the feature space so that $\sum_{i=1}^n \phi(x_i) = 0$. This is achieved by carrying out the eigendecomposition of $\tilde{K} = HKH$, which gives the coordinates of the approximating points as described in section 2.2. Thus we see that the visualization of data by projecting it onto the first $k$ eigenvectors is exactly classical scaling in feature space.

4 A relationship between kernel PCA and metric MDS

We consider two cases. In section 4.1 we deal with the case that the kernel is isotropic and obtain a close relationship between kernel PCA and metric MDS. If the kernel is non-stationary, a rather less close relationship is derived in section 4.2.

³For some kernels $N_F = \infty$.
⁴We denote the inner product of two vectors as either $a \cdot b$ or $a^T b$.

4.1 Isotropic kernels

A kernel function is stationary if $k(x_i, x_j)$ depends only on the vector $\tau = x_i - x_j$. A stationary covariance function is isotropic if $k(x_i, x_j)$ depends only on the distance $\delta_{ij}$, with $\delta_{ij}^2 = \tau^T \tau$, so that we write $k(x_i, x_j) = r(\delta_{ij})$. Assume that the kernel is scaled so that $r(0) = 1$. An example of an isotropic kernel is the squared exponential or RBF (radial basis function) kernel $k(x_i, x_j) = \exp\{-\theta (x_i - x_j)^T (x_i - x_j)\}$, for some parameter $\theta > 0$. Consider the Euclidean distance in feature space, $\tilde{\delta}_{ij}^2 = (\phi(x_i) - \phi(x_j))^T (\phi(x_i) - \phi(x_j))$. With an isotropic kernel this can be re-expressed as $\tilde{\delta}_{ij}^2 = 2(1 - r(\delta_{ij}))$. Thus the matrix $A$ has elements $a_{ij} = r(\delta_{ij}) - 1$, which can be written as $A = K - \mathbf{1}\mathbf{1}^T$.
It can be easily verified that the centering matrix $H$ annihilates $\mathbf{1}\mathbf{1}^T$, so that $HAH = HKH$. We see that the configuration of points derived from performing classical scaling on $\tilde{K} = HKH$ actually aims to approximate the feature-space distances computed as $\tilde{\delta}_{ij} = \sqrt{2(1 - r(\delta_{ij}))}$. As the $\tilde{\delta}_{ij}$'s are a non-linear function of the $\delta_{ij}$'s, this procedure (kernel MDS) is an example of metric MDS.

Remark 1. Kernel functions are usually chosen to be conditionally positive definite, so that the eigenvalues of the matrix $\tilde{K}$ will be non-negative. Choosing arbitrary functions to transform the dissimilarities will not give this guarantee.

Remark 2. In nonmetric MDS we require that $d_{ij} \approx f(\delta_{ij})$ for some monotonic function $f$. If the kernel function $r$ is monotonically decreasing then clearly $1 - r$ is monotonically increasing. However, there are valid isotropic kernel (covariance) functions which are non-monotonic (e.g. the exponentially damped cosine $r(\delta) = e^{-\alpha\delta} \cos(\omega\delta)$; see [11] for details), and thus we see that $f$ need not be monotonic in kernel MDS.

Remark 3. One advantage of PCA is that it defines a mapping from the original space to the principal coordinates, and hence that if a new point $x$ arrives, its projection onto the principal coordinates defined by the original $n$ data points can be computed⁵. The same property holds in kernel PCA, so that the computation of the projection of $\phi(x)$ onto the $r$th principal direction in feature space can be computed using the kernel trick as $\sum_{i=1}^n \alpha_i^r k(x, x_i)$, where $\alpha^r$ is the $r$th eigenvector of $\tilde{K}$ (see equation 4.1 in [7]). This projection property does not hold for algorithms that simply minimize the stress objective function; for example, the Sammon "mapping" algorithm [6] does not in fact define a mapping.

4.2 Non-stationary kernels

Sometimes non-stationary kernels (e.g. $k(x_i, x_j) = (1 + x_i \cdot x_j)^m$ for integer $m$) are used. For non-stationary kernels we proceed as before and construct $\tilde{\delta}_{ij}^2 = (\phi(x_i) - \phi(x_j))^T (\phi(x_i) - \phi(x_j))$.
We can again show that the kernel MDS procedure operates on the matrix $HKH$. However, the distance $\tilde{\delta}_{ij}$ in feature space is not a function of $\delta_{ij}$, and so the relationship of equation 3 does not hold. The situation can be saved somewhat if we follow Mardia et al (section 14.2.3) and relate similarities to dissimilarities through $\tilde{\delta}_{ij}^2 = c_{ii} + c_{jj} - 2c_{ij}$, where $c_{ij}$ denotes the similarity between items $i$ and $j$ in feature space. Then we see that the similarity in feature space is given by $c_{ij} = \phi(x_i) \cdot \phi(x_j) = k(x_i, x_j)$. For kernels (such as polynomial kernels) that are functions of $x_i \cdot x_j$ (the similarity in input space), we see then that the similarity in feature space is a non-linear function of the similarity measured in input space.

⁵Note that this will be, in general, different to the solution found by doing PCA on the full data set of $n + 1$ points.

Figure 1: The plot shows $\gamma$ as a function of $k$ for various values of $\beta = \theta/256$ ($\beta = 0, 4, 10, 20$) for the USPS test set.
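Both halves of the isotropic construction can be checked on toy data (a sketch of my own; the point set is invented). Classical scaling on $\tilde{K} = HKH$ reproduces the feature-space distances $\tilde{\delta}_{ij}^2 = 2(1 - r(\delta_{ij}))$ exactly when all components are kept, and the two limits discussed in section 5 (the $\theta \to \infty$ regular simplex with $\gamma = k/(n-1)$, and the $\theta \to 0$ classical scaling solution) emerge numerically.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 2))                 # toy 2-d inputs
n = X.shape[0]
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
H = np.eye(n) - np.ones((n, n)) / n

def kernel_mds(theta):
    """Classical scaling on H K H for the RBF kernel with parameter theta."""
    K = np.exp(-theta * sq)
    lam, V = np.linalg.eigh(H @ K @ H)
    lam, V = lam[::-1], V[:, ::-1]           # descending order
    Z = V * np.sqrt(np.maximum(lam, 0.0))    # full configuration
    return Z, lam, K

# 1. Keeping all components reproduces the feature-space distances exactly.
Z, lam, K = kernel_mds(0.5)
feat_d2 = 2.0 * (1.0 - K)                    # squared feature-space distances
fit_d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)

# 2. The two limits of the RBF parameter.
def gamma(k, theta):
    lam = kernel_mds(theta)[1]
    return lam[:k].sum() / lam.sum()

g_simplex = gamma(3, 1e6)    # theta -> inf: K -> I_n, gamma -> k/(n-1)
g_classic = gamma(2, 1e-6)   # theta -> 0: classical scaling of the 2-d data
```

For 2-d inputs the small-$\theta$ limit therefore explains essentially all of the variance with two components, which is the trend visible in Figure 1.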
(5) As () -+ 00 we have Jlj = 2(1- c5(i,j)) (where c5(i,j) is the Kronecker delta), which are the distances corresponding to a regular simplex. Thus K -+ In, H K H = H and'Y = k/(n -1). Letting () -+ 0 and using e-oz ~ 1- ()z for small (), we can show that Kij = 1 - ()c5lj as () -+ 0, and thus that the classical scaling solution is obtained in this limit. Experiments have been run on the US Postal Service database of handwritten digits, as used in [7]. The test set of 2007 images was used. The size of each image is 16 x 16 pixels, with the intensity of the pixels scaled so that the average variance over all 256 dimensions is 0.5. In Figure 1 'Y is plotted against k for various values of (3 = () /256. By choosing an index k one can observe from Figure 1 what fraction of the variance is explained by the first k eigenvalues. The trend is that as () decreases more and more variance is explained by fewer components, which fits in with the idea above that the () -t 00 limit gives rise to the regular simplex case. Thus there does not seem to be a non-trivial value of () which minimizes the residuals. 6 Discussion The results above show that kernel PCA using an isotropic kernel function can be interpreted as performing a kind of metric MDS. The main difference between the kernel MDS algorithm and other metric MDS algorithms is that kernel MDS uses the classical scaling solution in feature space. The advantage of the classical scaling solution is that it is computed from an eigenproblem, and avoids the iterative optimization of the stress objective function that is used for most other MDS solutions. The classical scaling solution is unique up to the unavoidable translation, rotation and reflection symmetries (assuming that there are no repeated eigenvalues). Critchley's work (1978) is somewhat similar to kernel MDS, but it lacks the notion of a projection into feature space and does not always ensure that the matrix B is non-negative definite. 
We have also looked at the question of adapting the kernel so as to minimize the sum of the residuals. However, for the case investigated this leads to a trivial solution.

Acknowledgements

I thank David Willshaw, Matthias Seeger and Amos Storkey for helpful conversations, and the anonymous referees whose comments have helped improve the paper.

References

[1] C. Berg, J. P. R. Christensen, and P. Ressel. Harmonic Analysis on Semigroups. Springer-Verlag, New York, 1984.
[2] T. F. Cox and M. A. A. Cox. Multidimensional Scaling. Chapman and Hall, London, 1994.
[3] F. Critchley. Multidimensional scaling: a short critique and a new method. In L. C. A. Corsten and J. Hermans, editors, COMPSTAT 1978. Physica-Verlag, Vienna, 1978.
[4] J. B. Kruskal and M. Wish. Multidimensional Scaling. Sage Publications, Beverly Hills, 1978.
[5] K. V. Mardia, J. T. Kent, and J. M. Bibby. Multivariate Analysis. Academic Press, 1979.
[6] J. W. Sammon. A nonlinear mapping for data structure analysis. IEEE Transactions on Computers, 18:401-409, 1969.
[7] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.
[8] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, New York, 1995.
[9] G. Wahba. Spline Models for Observational Data. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1990. CBMS-NSF Regional Conference Series in Applied Mathematics.
[10] C. K. I. Williams and D. Barber. Bayesian classification with Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342-1351, 1998.
[11] A. M. Yaglom. Correlation Theory of Stationary and Related Random Functions, Volume I: Basic Results. Springer-Verlag, 1987.
One Microphone Source Separation

Sam T. Roweis
Gatsby Unit, University College London
roweis@gatsby.ucl.ac.uk

Abstract

Source separation, or computational auditory scene analysis, attempts to extract individual acoustic objects from input which contains a mixture of sounds from different sources, altered by the acoustic environment. Unmixing algorithms such as ICA and its extensions recover sources by reweighting multiple observation sequences, and thus cannot operate when only a single observation signal is available. I present a technique called refiltering which recovers sources by a nonstationary reweighting ("masking") of frequency sub-bands from a single recording, and argue for the application of statistical algorithms to learning this masking function. I present results of a simple factorial HMM system which learns on recordings of single speakers and can then separate mixtures using only one observation signal by computing the masking function and then refiltering.

1 Learning from data in computational auditory scene analysis

Imagine listening to many pianos being played simultaneously. If each pianist were striking keys randomly it would be very difficult to tell which note came from which piano. But if each were playing a coherent song, separation would be much easier because of the structure of music. Now imagine teaching a computer to do the separation by showing it many musical scores as "training data". Typical auditory perceptual input contains a mixture of sounds from different sources, altered by the acoustic environment. Any biological or artificial hearing system must extract individual acoustic objects or streams in order to do successful localization, denoising and recognition. Bregman [1] called this process auditory scene analysis, in analogy to vision.
Source separation, or computational auditory scene analysis (CASA), is the practical realization of this problem via computer analysis of microphone recordings and is very similar to the musical task described above. It has been investigated by research groups with different emphases. The CASA community have focused on both multiple and single microphone source separation problems under highly realistic acoustic conditions, but have used almost exclusively hand designed systems which include substantial knowledge of the human auditory system and its psychophysical characteristics (e.g. [2,3]). Unfortunately, it is difficult to incorporate large amounts of detailed statistical knowledge about the problem into such an approach. On the other hand, machine learning researchers, especially those working on independent components analysis (ICA) and related algorithms, have focused on the case of multiple microphones in simplified mixing environments and have used powerful "blind" statistical techniques. These "unmixing" algorithms (even those which attempt to recover more sources than signals) cannot operate on single recordings. Furthermore, since they often depend only on the joint amplitude histogram of the observations, they can be very sensitive to the details of filtering and reverberation in the environment. The goal of this paper is to bring together the robust representations of CASA and methods which learn from data to solve a restricted version of the source separation problem: isolating acoustic objects from only a single microphone recording.
The unmixing coefficients α_i are constant over time and are chosen to optimize some property of the set of recovered sources, which often translates into a kurtosis measure on the joint amplitude histogram of the microphones. The intuition is that unmixing algorithms are finding spikes (or dents for low kurtosis sources) in the marginal amplitude histogram. The time ordering of the datapoints is often irrelevant. Unmixing depends on a fine timescale, sample-by-sample comparison of several observation signals. Humans, on the other hand, cannot hear histogram spikes¹ and perform well on many monaural separation tasks. We are doing structural analysis, or a kind of perceptual grouping on the incoming sound. But what is being grouped? There is substantial evidence that the energy across time in different frequency bands can carry relatively independent information. This suggests that the appropriate subparts of an audio signal may be narrow frequency bands over short times. To generate these parts, one can perform multiband analysis: break the original signal y(t) into many subband signals b_i(t), each filtered to contain only energy from a small portion of the spectrum. The results of such an analysis are often displayed as a spectrogram, which shows energy (using colour or grayscale) as a function of time (abscissa) and frequency (ordinate). (For example one is shown on the top left of figure 5.) In the musical analogy, a spectrogram is like a musical score in which the colour or grey level of each note tells you how hard to hit the piano key. The basic idea of refiltering is to construct new sources by selectively reweighting the multiband signals b_i(t). Crucially, however, the mixing coefficients are no longer constant over time; they are now called masking signals.
Given a set of masking signals, denoted α_i(t), a source s(t) can be recovered by modulating the corresponding subband signals from the original input and summing: s(t) = α_1(t) b_1(t) + α_2(t) b_2(t) + ... + α_K(t) b_K(t), (2) where s(t) is the estimated source, α_i(t) is the mask for sub-band i, and b_i(t) is sub-band i itself. The α_i(t) are gain knobs on each subband that we can twist over time to bring bands in and out of the source as needed. This performs masking on the original spectrogram. (An equivalent operation can be performed in the frequency domain.²) This approach, illustrated in figure 1, forms the basis of many CASA approaches (e.g. [2,3,4]). For any specific choice of masking signals α_i(t), refiltering attempts to isolate a single source from the input signal and suppress all other sources and background noises. Different sources can be isolated by choosing different masking signals. Henceforth, I will make a strong simplifying assumption that the α_i(t) are binary and constant over a timescale T of roughly 30ms. This is physically unrealistic, because the energy in each small region of time-frequency never comes entirely from a single source. However in practice, for small numbers of sources, this approximation works quite well (figure 3). (Think of ignoring collisions by assuming separate piano players do not often hit the same note at the same time.) ¹Try randomly permuting the time order of samples in a stereo mixture containing several sources and see if you still hear distinct streams when you play it back. ²Make a conventional spectrogram of the original signal y(t) and modulate the magnitude of each short-time DFT while preserving its phase: s_i^w(τ) = F⁻¹{ α_i^w |F{y^w(τ)}| ∠F{y^w(τ)} }, where s^w(τ) and y^w(τ) are the wth windows (blocks) of the recovered and original signals, α_i^w is the masking signal for subband i in window w, and F[·] is the DFT. Figure 1: The refiltering approach to one microphone source separation.
Multiband analysis of the original signal y(t) gives sub-band signals b_i(t) which are modulated by masking signals α_i(t) (binary or real valued between 0 and 1) and recombined to give the estimated source or object s(t). Refiltering can also be thought of as a highly nonstationary Wiener filter in which both the signal and noise spectra are re-estimated at a rate 1/T; the binary assumption is equivalent to assuming that over a timescale T the signal and noise spectra are nonoverlapping. It is a fortunate empirical fact that refiltering, even with binary masking signals, can cleanly separate sources from a single mixed recording. This can be demonstrated by taking several isolated sources or noises and mixing them in a controlled way. Since the original components are known, an "optimal" set of masking signals can be computed. For example, we might set α_i(t) equal to the ratio of energy from one source in band i around times t ± T to the sum of energies from all sources in the same band at that time (as recommended by the Wiener filter), or to a binary version which thresholds this ratio. Constructing masks in this way is also useful for generating labeled training data, as discussed below. 3 Multiband grouping as a statistical pattern recognition problem Since one-microphone source separation using refiltering is possible if the masking signals are well chosen, the essential problem becomes: how can the α_i(t) be computed automatically from a single mixed recording? The goal is to group or "tag" together regions of the spectrogram that belong to the same auditory object. Fortunately, in audition (as in vision), natural signals, especially speech, exhibit a lot of regularity in the way energy is distributed across the time-frequency plane. Grouping cues based on these regularities have been studied for many years by psychophysicists and are hand built into many CASA systems.
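As a concrete illustration, the refiltering operation of Eq. (2), in its frequency-domain form (footnote 2), can be sketched in a few lines of Python. This is a minimal sketch under simplifying assumptions of my own, not the paper's implementation: it uses non-overlapping rectangular windows rather than the overlapping Hamming windows used in the experiments, and the function name `refilter` is illustrative.

```python
import numpy as np

def refilter(y, masks, win=256):
    """Refilter a single recording y with per-window, per-band masks.

    masks: array of shape (n_windows, win // 2 + 1) with entries in
    [0, 1].  Each window's DFT magnitude is scaled by the mask while
    the phase is kept, then the window is inverted back to samples.
    Non-overlapping rectangular windows keep the sketch invertible.
    """
    n_win = len(y) // win
    out = np.zeros(n_win * win)
    for w in range(n_win):
        frame = y[w * win:(w + 1) * win]
        spec = np.fft.rfft(frame)
        # For real alpha >= 0, alpha * |F| * exp(j*angle(F)) == alpha * F,
        # so masking the magnitude while preserving phase is one multiply.
        out[w * win:(w + 1) * win] = np.fft.irfft(masks[w] * spec, n=win)
    return out
```

With an all-ones mask the input is reconstructed essentially exactly; a zero mask suppresses everything, and a binary mask in between selects the time-frequency cells assigned to one source.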
Cues are based on the idea of suspicious coincidences: roughly, "things that move together likely belong together". Thus, frequencies which exhibit common onsets, offsets, or upward/downward sweeps are more likely to be grouped into the same stream (figure 2). Also, many real world sounds have harmonic spectra; so frequencies which lie exactly on a harmonic "stack" are often perceptually grouped together. (Musically, piano players do not hit keys randomly, but instead use chords and repeated melodies.) Harmonic stacking. Common onset. Frequency co-modulation. Figure 2: Examples of three common grouping cues for energy which often comes from a single source. (left) Frequencies which lie exactly on harmonic multiples of a single base frequency. (middle) Frequencies which suddenly increase or decrease their energy together. (right) Energy which moves up or down in frequency at the same time. There are several ways that statistical pattern recognition might be applied to take advantage of these cues. Methods may be roughly grouped into unsupervised ones, which learn models of isolated sources and then try to explain mixed input as being caused by the interaction of individual source models; and supervised methods, which explicitly model grouping in mixed acoustic input but require labeled data consisting of mixed input as well as masking signals. Luckily it is very easy to generate such data by mixing isolated sources in a controlled way, although the subsequent supervised learning can be difficult.³ Figure 3: Each point represents the energy from one source versus another in a narrow frequency band over a 32ms window. The plot shows all frequencies over a 2 second period from a speech mixture. Typically when one source has large energy the other does not. The binary assumption on the masking signals α_i(t) is equivalent to projecting the points shown onto either the horizontal or vertical axis.
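Generating labeled training masks from a controlled mixture of known isolated sources, as described above, amounts to thresholding a per-cell energy ratio. A minimal sketch (the function name and the equal-energy threshold are illustrative choices, not the paper's exact recipe):

```python
import numpy as np

def oracle_binary_mask(mag_a, mag_b):
    """Ideal binary mask from two known isolated sources.

    mag_a, mag_b: magnitude spectrograms, shape (n_windows, n_bands).
    Each time-frequency cell is assigned to whichever source carries
    more energy there, i.e. the Wiener ratio E_a / (E_a + E_b)
    thresholded at one half.
    """
    return (mag_a ** 2 > mag_b ** 2).astype(float)
```

Applying the resulting mask with refiltering to the mixture (rather than to the isolated source) is what makes this a useful upper bound on what binary masking can achieve.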
4 Results using factorial-max HMMs Here, I will describe one (purely unsupervised) method I have pursued for automatically generating masking signals from a single microphone. The approach first trains speaker dependent hidden Markov models (HMMs) on isolated data from single talkers. These pre-trained models are then combined in a particular way to build a separation system. First, for each speaker, a simple HMM is fit using patches of narrowband spectrograms as the pattern vectors.⁴ The emission densities model the typical spectral patterns produced by each talker, while the transition probabilities encourage spectral continuity. HMM training was initialized by first training a mixture of Gaussians on each speaker's data (with a single shared covariance matrix) independent of time order. Each mixture had 8192 components of dimension 1026 = 513 x 2; thus each HMM had 8192 states. To avoid overfitting, the transition matrices were regularized after training so that each transition (even those unobserved in the training set) had a small finite probability. Next, to separate a new single recording which is a mixture of known speakers, these pretrained models are combined into a factorial hidden Markov model (FHMM) architecture [5]. A FHMM consists of two or more underlying Markov chains (the hidden states) which evolve independently. The observation y_t at any time depends on the states of all the chains. A simple way to model this dependence is to have each chain c independently propose an output y_t^c and then combine them to generate the observation according to some rule y_t = Q(y_t^1, y_t^2, ..., y_t^C). Below, I use a model with only two chains, whose states are denoted x_t and z_t. At each time, one chain proposes an output vector a_{x_t} and the other proposes b_{z_t}. The key part of the model is the function Q: observations are generated by taking the elementwise maximum of the proposals and adding noise.
This maximum operation reflects the observation that the log magnitude spectrogram of a mixture of sources is very nearly the elementwise maximum of the individual spectrograms. The full generative model for this "factorial-max HMM" can be written simply as: p(x_t = j | x_{t-1} = i) = T_ij, (3) p(z_t = j | z_{t-1} = i) = U_ij, (4) p(y_t | x_t, z_t) = N(max[a_{x_t}, b_{z_t}], R), (5) where N(μ, Σ) denotes a Gaussian distribution with mean μ and covariance Σ and max[·] is the elementwise maximum operation on two vectors. (There are also densities on the initial states x_1 and z_1.) This model is illustrated in figure 4. It ignores two aspects of the spectrogram data: first, Gaussian noise is used although the observations are nonnegative; second, the probability factor requiring the non-maximum output proposal to be less than the maximum proposal is missing. ³Recall that refiltering can only isolate one auditory stream at a time from the scene (we are always separating "a source" from "the background"). This makes learning the masking signals an unusual problem because for any input (spectrogram) there are as many correct answers as objects in the scene. Such a highly multimodal distribution on outputs given inputs means that the mapping from auditory input to masking signals cannot be learned using backprop or other single-valued function approximators which take the average of the possible maskings present in the training data. ⁴The observations are created by concatenating the values of 2 adjacent columns of the log magnitude periodogram into a single vector. The original waveforms were sampled at 16kHz. Periodogram windows of 32ms at a frame rate of 16ms were analyzed using a Hamming tapered DFT zero padded to length 1024. This gave 513 frequency samples from DC to Nyquist. Average signal energy was normalized across the most recent 8 frames before computing each DFT.
However, in practice these approximations are not too severe and making them allows an efficient inference procedure (see below). Figure 4: Factorial HMM with max output semantics. Two Markov chains x_t and z_t evolve independently. Observations y_t are the elementwise max of the individual emission vectors, max[a_{x_t}, b_{z_t}], plus Gaussian noise. In the experiment presented below, each chain represents a speaker dependent HMM (one male and one female). The emission and transition probabilities from each speaker's pretrained HMM were used as the parameters for the combined FHMM. (The output noise covariance R is shared between the two HMMs.) Given an input waveform, the observation sequence Y = y_1, ..., y_T is created from the spectrogram as before.⁴ Separation is done by first inferring a joint underlying state sequence {x_t, z_t} of the two Markov chains in the model and then using the difference of their individual output predictions to compute a binary masking signal: α_t(i) = 1 if a_{x_t}(i) > b_{z_t}(i), and 0 if a_{x_t}(i) ≤ b_{z_t}(i). (6) Ideally, the inferred state sequences {x_t, z_t} should be the mode of the posterior distribution p(x_t, z_t | Y). Since the hidden chains share a single visible output variable, naive inference in the FHMM graphical model yields an intractable amount of work exponential in the size of the state space of each submodel. However, because all of the observations are nonnegative and the max operation is used to combine output proposals, there is an efficient trick for computing the best joint state trajectory. At each time, we can upper bound the log-probability of generating the observation vector if one chain is in state i, no matter what state the other chain is in. Computing these bounds for each state setting of each chain requires only a linear amount of work in the size of the state spaces.
With these bounds in hand, each time we evaluate the probability of a specific pair of states we can eliminate from consideration all state settings of either chain whose bounds are worse than the achieved probability. If pairs of states are evaluated in a sensible heuristic order (for example by ranking the bounds) this results in practice in almost all possible configurations being quickly eliminated. (This trick turns out to be equivalent to αβ search in game trees.) The training data for the model consists only of spectrograms of isolated examples of each speaker, but inference can be done on test data which is a spectrogram of a single mixture of known speakers. The results of separating a simple two speaker mixture are shown below. The test utterance was formed by linearly mixing two out-of-sample utterances (one male and one female) from the same speakers as the models were trained on. Figure 5 shows the original mixed spectrogram (top left) as well as the sequence of outputs a_{x_t} (bottom left) and b_{z_t} (bottom right) from each chain. The chain with the maximum output in any sub-band at any time has α_i(t) = 1, otherwise α_i(t) = 0 (top right). The FHMM system achieves good separation from only a single microphone (see figure 6). Figure 5: (top left) Original spectrogram of mixed utterance. (bottom) Male and female spectrograms predicted by factorial HMM and used to compute refiltering masks. (top right) Masking signals α_i(t), computed by comparing the magnitudes of each model's predictions. 5 Conclusions In this paper I have argued for the marriage of learning algorithms with the refiltering approach to CASA. I have presented results from a simple factorial HMM system on a speaker dependent separation problem which indicate that automatically learned one-microphone separation systems may be possible.
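The bound-based pruning trick can be illustrated for a single frame. The sketch below is my reconstruction under stated assumptions: isotropic output noise (so maximizing the Gaussian likelihood amounts to minimizing squared error) and no transition terms, which keeps the search frame-local. With chain x fixed in state i, max[a_i, b_j] ≥ a_i elementwise, so the squared error in each band is at least max(a_i − y, 0)² whatever the other chain does; these per-chain bounds let most pairs be skipped.

```python
import numpy as np

def best_pair(y, A, B):
    """Joint state search for one frame of a factorial-max model.

    A[i] and B[j] are the two chains' output proposals; the best pair
    minimizes ||y - max(A[i], B[j])||^2.  bx[i] (resp. bz[j]) is a
    lower bound on that error over all states of the other chain.
    """
    bx = (np.maximum(A - y, 0.0) ** 2).sum(axis=1)
    bz = (np.maximum(B - y, 0.0) ** 2).sum(axis=1)
    order_x, order_z = np.argsort(bx), np.argsort(bz)
    best, best_ij, n_eval = np.inf, (0, 0), 0
    for i in order_x:
        if bx[i] >= best:
            break                     # all remaining x-states are bounded out
        for j in order_z:
            if bz[j] >= best or bx[i] >= best:
                break                 # remaining z-states bounded out for this i
            err = ((y - np.maximum(A[i], B[j])) ** 2).sum()
            n_eval += 1
            if err < best:
                best, best_ij = err, (int(i), int(j))
    return best_ij, best, n_eval
```

In practice only a handful of the S² pairs are ever evaluated, which is what makes decoding two 8192-state chains feasible.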
In the machine learning community, the one-microphone separation problem has received much less attention than unmixing problems, while CASA researchers have not employed automatic learning techniques to full effect. Scene analysis is an interesting and challenging learning problem with exciting and practical applications, and the refiltering setup has many nice properties. First, it can work if the masking signals are chosen properly. Second, it is easy to generate lots of training data, both supervised and unsupervised. Third, a good learning algorithm, when presented with enough data, should automatically discover the sorts of grouping cues which have been built into existing systems by hand. Furthermore, in the refiltering paradigm there is no need to make a hard decision about the number of sources present in an input. Each proposed masking has an associated score or probability; groupings with high scores can be considered "sources", while ones with low scores might be parts of the background or mixtures of other faint sources. CASA returns a collection of candidate maskings and their associated scores, and then it is up to the user to decide, based on the range of scores, the number of sources in the scene. Many existing approaches to speech and audio processing have the potential to be applied to the monaural source separation problem. The unsupervised factorial HMM system presented in this paper is very similar to the work in the speech recognition community on parallel model combination [6,7]; however, rather than using the combined models to evaluate the likelihood of speech in noise, the efficiently inferred states are being used to generate a masking signal for refiltering. Wan and Nelson have developed dual EKF methods [8] and applied them to speech denoising, but have also informally demonstrated their potential application to monaural source separation.
Attias and colleagues [9] developed a fully probabilistic model of speech in noise and used variational Bayesian techniques to perform inference and learning, allowing denoising and dereverberation; their approach clearly has the potential to be applied to the separation problem as well. Cauwenberghs [10] has a very promising approach to the problem for purely harmonic signals that takes advantage of powerful phase constraints which are ignored by other algorithms. Unsupervised and supervised approaches can be combined to various degrees. Learning models of isolated sounds may be useful for developing feature detectors; conjunctions of such feature detectors can then be trained in a supervised fashion using labeled data. Figure 6: Test separation results, using a 2-chain speaker dependent factorial-max HMM, followed by refiltering. (See figure 4 and text for details.) (A) Original waveform of mixed utterance. (B) Original isolated male & female waveforms. (C) Estimated male and female waveforms. The oscillatory correlation algorithm of Brown and Wang [4] has a low level module to detect features in the correlogram and a high level module to do grouping. Related ideas in machine vision, such as Markov networks [11] and minimum normalized cut [12], use low level operations to define weights between pixels and then higher level computations to group pixels together. Acknowledgements Thanks to Hagai Attias, Guy Brown, Geoff Hinton and Lawrence Saul for many insightful discussions about the CASA problem, and to three anonymous referees and many visitors to my poster for helpful comments, criticisms and references to work I had overlooked. References [1] A.S. Bregman. (1994) Auditory Scene Analysis. MIT Press. [2] G. Brown & M. Cooke. (1994) Computational auditory scene analysis. Computer Speech and Language 8. [3] D. Ellis. (1994) A computer implementation of psychoacoustic grouping rules. Proc. 12th Intl. Conf. on Pattern Recognition, Jerusalem. [4] G. Brown & D.L. Wang.
(2000) An oscillatory correlation framework for computational auditory scene analysis. NIPS 12. [5] Z. Ghahramani & M.I. Jordan (1997) Factorial hidden Markov models. Machine Learning 29. [6] A.P. Varga & R.K. Moore (1990) Hidden Markov model decomposition of speech and noise. IEEE Conf. Acoustics, Speech & Signal Processing (ICASSP'90). [7] M.J.F. Gales & S.J. Young (1996) Robust continuous speech recognition using parallel model combination. IEEE Trans. Speech & Audio Processing 4. [8] E.A. Wan & A.T. Nelson (1998) Removal of noise from speech using the dual EKF algorithm. IEEE Conf. Acoustics, Speech & Signal Processing (ICASSP'98). [9] H. Attias, J.C. Platt & A. Acero (2001) Speech denoising and dereverberation using probabilistic models, this volume. [10] G. Cauwenberghs (1999) Monaural separation of independent acoustical components. IEEE Symp. Circuits & Systems (ISCAS'99). [11] W. Freeman & E. Pasztor. (1999) Markov networks for low-level vision. Mitsubishi Electric Research Laboratory Technical Report TR99-08. [12] J. Shi & J. Malik. (1997) Normalized cuts and image segmentation. IEEE Conf. Computer Vision and Pattern Recognition (CVPR'97), Puerto Rico.
|
2000
|
116
|
1,772
|
Interactive Parts Model: an Application to Recognition of On-line Cursive Script Predrag Neskovic, Philip C. Davis* and Leon N. Cooper Physics Department and Institute for Brain and Neural Systems Brown University, Providence, RI 02912 Abstract In this work, we introduce an Interactive Parts (IP) model as an alternative to Hidden Markov Models (HMMs). We tested both models on a database of on-line cursive script. We show that implementations of HMMs and the IP model, in which all letters are assumed to have the same average width, give comparable results. However, in contrast to HMMs, the IP model can handle duration modeling without an increase in computational complexity. 1 Introduction Hidden Markov models [9] have been a dominant paradigm in speech and handwriting recognition over the past several decades. The success of HMMs is primarily due to their ability to model the statistical and sequential nature of speech and handwriting data. However, HMMs have a number of weaknesses [2]. First, the discriminative powers of HMMs are weak since the training algorithm is based on a Maximum Likelihood Estimate (MLE) criterion, whereas optimal training should be based on a Maximum a Posteriori (MAP) criterion [2]. Second, in most HMMs, only first or second order dependencies are assumed. Although explicit duration HMMs model data more accurately, the computational cost of such modeling is high [5]. To overcome the first problem, it has been suggested [1, 11, 2] that Neural Networks (NNs) should be used for estimating emission probabilities. Since NNs cannot deal well with sequential data, they are often used in combination with HMMs as hybrid NN/HMM systems [2, 11]. In this work, we introduce a new model that provides a possible solution to the second problem. In addition, this new objective function can be cast into a NN-based framework [7, 8] and can easily deal with the sequential nature of handwriting.
In our approach, we model an object as a set of local parts arranged at specific spatial locations. *Now at MIT Lincoln Laboratory, Lexington, MA 02420-9108. Figure 1: Effect of shape distortion and spatial distortions applied on the word "act". Figure 2: Some of the non-zero elements of the detection matrix associated with the word "act". Parts-based representation has been used in face detection systems [3] and has recently been applied to spotting keywords in cursive handwriting data [4]. Although the model proposed in [4] presents a rigorous probabilistic approach, it only models the positions of key-points and, in order to learn the appropriate statistics, it requires many ground-truthed training examples. In this work, we focus on modeling one dimensional objects. In our application, an object is a handwritten word and its parts are the letters. However, the method we propose is quite general and can easily be extended to two dimensional problems. 2 The Objective Function In our approach, we assume that a handwritten pattern is a distorted version of one of the dictionary words. Furthermore, we assume that any distortion of a word can be expressed as a combination of two types of local distortions [6]: a) shape distortions of one or more letters, and b) spatial distortions, also called domain warping, as illustrated in Figure 1. In the latter case, the shape of each letter is unchanged but the location of one or more letters is perturbed. Shape distortions can be captured using "letter detectors". A number of different techniques can be used to construct letter detectors. In our implementation, we use a neural network-based approach. The output of a letter detector is in the range [0, 1], where 1 corresponds to the undistorted shape of the corresponding letter.
Since it is not known, a priori, where the letters are located in the pattern, letter detectors, for each letter of the alphabet, are arranged over the pattern so that the pattern is completely covered by their (overlapping) receptive fields. The outputs of the letter detectors form a detection matrix, Figure 2. Each row of the detection matrix represents one letter and each column corresponds to the position of the letter within the pattern. An element of the detection matrix is labeled as d^k(x), where k denotes the class of the letter, k ∈ [1, ..., 26], and x represents the column number. In general, the detection matrix contains a large number of "false alarms" due to the fact that local segments are often ambiguous. The recognition system segments a pattern by selecting one detection matrix element for each letter of a given dictionary word.¹ To measure spatial distortions, one must first choose a reference point from which distortions are measured. It is clear that for any choice of reference point, the location estimates for letters that are not close to the reference point might be very poor. For this reason, we chose a representation in which each letter serves as a reference point to estimate the position of every other letter. This representation allows translation invariant recognition, is very robust (since it does not depend on any single reference point) and very accurate (since it includes nearest neighbor reference points). To evaluate the level of distortion of a handwritten pattern from a given dictionary word, we introduce an objective function. The value of this function represents the amount of distortion of the pattern from the dictionary word. We require that the objective function reach a minimal value if all the letters that constitute the dictionary word are detected with the highest confidence and are located at the locations with highest expectation values.
Furthermore, we require that the dependence of the function on one of its letters be smaller for longer words. One function with similar properties to these is the energy function of a system of interacting particles, Σ_{i,j} q_i U_{i,j}(x_i, x_j) q_j. If we assume that all the letters are of the same size, we can 1) map letter detection estimates into "charge" and 2) choose interaction terms (potentials) to reflect the expected relative positioning of the letters (detection matrix elements). The energy function of the n-th dictionary word is then E^n(x) = Σ_{i,j=1, i≠j}^{L_n} d_i(x_i) U^n_{i,j}(x_i, x_j) d_j(x_j), (1) where L_n is the number of letters in the word, x_i is the location of the i-th letter of the n-th dictionary word, and x = (x_1, ..., x_{L_n}) is a particular configuration of detection matrix elements. Although this equation has a similar form as, say, the Coulomb energy, it is much more complicated. The interaction terms U_{i,j} are more complex than 1/r, and each "charge", d_i(x_i), does not have a fixed value, but depends on its location. Note that this energy is a function of a specific choice of elements from the detection matrix, x, a specific segmentation of the word. Interaction terms can be calculated from training data in a number of different ways. One possibility is to use the EM algorithm [9] and do the training for each dictionary word. Another possibility is to propagate nearest neighbor estimates. Let us denote with the symbol p_ij(x_i, x_j) the (pairwise) probability of finding the j-th letter of the n-th dictionary word at distance x = x_j - x_i from the location of the i-th letter. A simple way to approximate pairwise probabilities is to find the probability distribution of letter widths for each letter and then from single letter distributions calculate nearest neighbor pairwise probabilities. Knowing the nearest neighbor probabilities, it is then easy to propagate them and find the pairwise probabilities between any two letters of any dictionary word [7].
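Evaluating Eq. (1) for one candidate configuration of detection-matrix elements is direct; a minimal sketch with illustrative names (the interaction term is passed in as a callable, since the paper leaves its exact form open at this point):

```python
def word_energy(d, x, U):
    """Energy of one segmentation (Eq. 1): the sum over ordered pairs
    i != j of d_i(x_i) * U_ij(x_i, x_j) * d_j(x_j).

    d[i]: detection value of the chosen matrix element for letter i.
    x[i]: chosen column (location) for letter i.
    U:    callable U(i, j, xi, xj) returning the interaction potential.
    """
    E = 0.0
    for i in range(len(x)):
        for j in range(len(x)):
            if i != j:
                E += d[i] * U(i, j, x[i], x[j]) * d[j]
    return E
```

Because every ordered pair contributes, each letter's share of the energy shrinks relative to the whole as the word grows, matching the requirement stated above for longer words.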
Interaction potentials are related to pairwise probabilities (using the Boltzmann distribution and setting β = 1/kT = 1) as U_{i,j}(x_i, x_j) = -ln p_ij(x_i, x_j) + C. Since the interaction potentials are defined up to a constant, we can selectively change the value of their minima by choosing different values for C, Fig. 3. ¹Note that this segmentation corresponds to finding the centers of the letters, as opposed to segmenting a word into letters by finding their boundaries. Figure 3: Solid line: an example of a pairwise probability distribution for neighboring letters. Dashed lines: a family of corresponding interaction potentials. Figure 4: Modified interaction potential. Regions x ≤ a and x ≥ b are the "forbidden" regions for letter locations. In the regions a < x < a' and b' < x < b the interaction term is zero. It is important to stress that the only valid domain for the interaction terms is the region for which U_{i,j} < 0, since for each pair of letters (i, j) we want to simultaneously minimize the interaction term U_{i,j} and to maximize the term d_i · d_j.² We will assume that there is a value, p_min, for the pairwise probability below which the estimate of the letter location is not reliable. So, for every p_ij such that 0 < p_ij < p_min, we set p_ij = p_min. We choose the value of the constant such that U_{i,j} = -ln(p_min) + C = 0, Fig. 4. In practice, this means that there is an effective range of influence for each letter, and beyond that range the influence of the letter is zero. In the limiting case, one can get a nearest neighbor approximation by appropriately setting p_min. It is clear that the interaction terms put constraints on the possible locations of the letters of a given dictionary word. They define "allowed" regions, where the letters can be found, unimportant regions, where the influence of a letter on other letters is zero, and not-allowed regions (U = ∞), which have zero probability of finding a letter in that region, Fig. 4.
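The construction just described, U = -ln p + C with the constant chosen as C = ln p_min so that clipped probabilities give exactly zero potential, is a one-liner; a sketch (the function name is illustrative):

```python
import numpy as np

def interaction(p, p_min):
    """Interaction potential U = -ln(p) + C from pairwise location
    probabilities p, with C = ln(p_min): probabilities clipped at
    p_min give U = 0 (no influence), reliable ones give U < 0.
    """
    return -np.log(np.maximum(p, p_min)) + np.log(p_min)
```

This keeps the potential in its valid domain U ≤ 0 everywhere, with more probable relative placements producing deeper (more negative) wells.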
The task of recognition can now be formulated as follows. For a given dictionary word, find the configuration of elements from the detection matrix (a specific segmentation of the pattern) such that the energy is minimal. Then, in order to find the best dictionary word, repeat the previous procedure for every dictionary word and associate with the pattern the dictionary word with lowest energy. If we denote by X the space of all possible segmentations of the pattern, then the final segmentation of the pattern, x*, is given as x* = argmin_{x ∈ X, n ∈ N} E^n(x), (2) where the index n runs through the dictionary words. 3 Implementation and an Overview of the System An overview of the system is illustrated in Fig. 5. A raw data file, representing a handwritten word, contains x and y positions of the pen recorded every 10 milliseconds. This input signal is first transformed into strokes, which are defined as lines between points with zero velocity in the y direction. ²For U_{i,j} > 0, increasing d_i · d_j would increase, rather than decrease, the energy function. Figure 5: An overview of the system. Figure 6: Comparison of recognition results on 10 writers using the IP model and HMMs. Each stroke is characterized by
The output of the NN, the detection matrix, is then supplied to the HMM-based and IP model-based post-processors, Fig. 5. For both models, we assume that every letter has the same average width. Interaction Terms. The first approximation for interaction terms is to assume a "square well" shape. Each interaction term is then defined with only three parameters, the left boundary a, the right boundary b and the depth of the well, en, which are the same for all the nearest neighbor letters, Fig. 7. The lower and upper limits for the i - th and j - th non-adjacent interaction terms can then be approximated as aij = Ij - il . a and bij = Ij - il . b, respectively. Nearest Neighbor Approximation. Since the exact solution of the energy function given by Eq. (2) is often computationally infeasible (the detection matrices can exceed 40 columns in width for long words), one has to use some approximation technique. One possible solution is suggested in [7], where contextual information is used to constrain the search space. Another possibility is to revise the energy function by considering only nearest neighbor terms and then solve it exactly using a Dynamic Programming (DP) algorithm. We have used DP to find the optimal segmentation for each word . We then use this "optimal" configuration of letters to calculate the energy given by Eq. (1). It is important to mention that we have introduced beginning (B) and end (E) "letters" to mark the beginning and end of the pattern, and their detection probabilities are set to some constant value 3 . Hidden Markov Models. The goal of the recognition system is to find the dictionary word with the maximum posterior probability, p(w IO) = p(Olw)p(w)/p(O), 3This is necessary in order to define interaction potentials for single letter words. u x en a b -f- '------' Figure 7: Square well approximation of the interaction potential. Allowed region is defined as a < x < b, and forbidden regions are x < a, and x > b. 
given the handwritten pattern, O.

Figure 8: The probability of remaining in the same state for exactly d time steps: HMMs (dashed line) vs. expected probability (solid line).

Since p(O) and p(w) are the same for all dictionary words, maximizing p(w|O) is equivalent to maximizing p(O|w). To find p(O|w), we constructed a left-right (or Bakis) HMM [9] for each dictionary word, λⁿ, where each letter was represented by one state. Given a dictionary word (a model λⁿ), we calculated the maximum likelihood, p(O|λⁿ) = Σ_{all Q} P(O, Q|λⁿ) = Σ_{all Q} P(O|Q, λⁿ)P(Q|λⁿ), where the summation is done over all possible state sequences. We used the forward-backward procedure [9] for calculating the previous sum. Emission probabilities were calculated from the detection probabilities using Bayes' rule, P(O_x|q_k) = d_k(x)P(O_x)/P(q_k), where P(q_k) denotes the frequency of the k-th letter in the dictionary and the term P(O_x) is the same for all words and can therefore be omitted. Transition probabilities were adjusted until the best recognition results were obtained. Recall that we assumed that all letter widths are the same and therefore the transition probabilities are independent of letter pairs.

4 Results and Discussion

Our dataset (obtained from David Rumelhart [10]) consists of words written by 100 different writers, where each writer wrote 1000 words. The size of the dictionary is 1000 words. The neural network was trained on 70 writers (70,000 words) and an independent group of writers was used as a cross-validation set. We have tested both the IP model and HMMs on a group of 10 writers (different from the training and cross-validation groups). The results for each model are depicted in Fig. 6. The IP model chose the correct word 79.89% of the time, while HMMs selected the correct word 79.44% of the time. Although the overall performance of the two models was almost identical, the results differ by several percent on individual writers.
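The forward pass over such a left-right HMM can be sketched as follows. This is an illustrative assumption-laden sketch, not the paper's code: the function name `forward_likelihood`, the single stay/advance transition parameter `trans_stay`, and the `emit` array (which in the paper would be derived from the detection matrix via Bayes' rule) are all made up for the example.

```python
def forward_likelihood(trans_stay, emit):
    """Forward-pass sketch for a left-right (Bakis) HMM with one state
    per letter: at each time step the chain either stays in its state
    (probability trans_stay) or advances to the next letter state.
    emit[t][k] is the emission probability of observation t in state k.
    Returns p(O | word), requiring the path to start in the first state
    and end in the last."""
    T, K = len(emit), len(emit[0])
    alpha = [0.0] * K
    alpha[0] = emit[0][0]           # must start in the first letter state
    for t in range(1, T):
        new = [0.0] * K
        for k in range(K):
            stay = alpha[k] * trans_stay
            advance = alpha[k - 1] * (1.0 - trans_stay) if k > 0 else 0.0
            new[k] = (stay + advance) * emit[t][k]
        alpha = new
    return alpha[-1]                # must end in the last letter state
```

Because the stay probability is the same for every state, the state-duration distribution is geometric, which is exactly the dashed curve of Fig. 8 that the IP model's interaction terms are meant to improve upon.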
This suggests that our model could be used in combination with HMMs (e.g. with some averaging technique) to improve overall recognition. It is important to mention that new dictionary words can easily be added to the dictionary, and the IP model does not require retraining on the new words (using the method of calculating interaction terms suggested in this paper). The only information about the new word that has to be supplied to the system is the ordering of the letters. Knowing the nearest-neighbor pairwise probabilities, p_ij(x_i, x_j), it is easy to calculate the location estimates between any two letters of the new word. Furthermore, the IP model can easily recognize words where many of the letters are highly distorted or missing. In standard first-order HMMs with time-independent transition probabilities, the probability of remaining in the i-th state for exactly d time steps is illustrated in Fig. 8. The real probability distribution on letter widths is actually similar to a Poisson distribution [11], Fig. 8. It has been shown that explicit-duration HMMs can significantly improve recognition accuracy, but at the expense of a significant increase in computational complexity [5]. Our model, on the other hand, can easily model arbitrarily complex pairwise probabilities without increasing the computational complexity (using DP in a nearest-neighbor approximation). We think that this is one of the biggest advantages of our approach over HMMs. We believe that including more precise interaction terms will yield significantly better results (as in HMMs), and this work is currently in progress.

Acknowledgments

Supported in part by the Office of Naval Research. The authors thank the members of the Institute for Brain and Neural Systems for helpful conversations.

References

[1] Y. Bengio, Y. LeCun, C. Nohl, and C. Burges. Lerec: A NN/HMM hybrid for on-line handwriting recognition. Neural Computation, 7:1289-1303, 1995.
[2] H. Bourlard and C. Wellekens.
Links between hidden Markov models and multilayer perceptrons. IEEE Transactions on PAMI, 12:1167-1178, 1990.
[3] M. Burl, T. Leung, and P. Perona. Recognition of planar object classes. In Proc. IEEE Comput. Soc. Conf. Comput. Vision and Pattern Recogn., 1996.
[4] M. Burl and P. Perona. Using hierarchical shape models to spot keywords in cursive handwriting data. In Proc. CVPR 98, 1998.
[5] C. Mitchell and L. Jamieson. Modeling duration in a hidden Markov model with the exponential family. In Proc. ICASSP, pages 331-334, 1993.
[6] D. Mumford. Neuronal architectures for pattern-theoretic problems. In C. Koch and J. L. Davis, editors, Large-Scale Neuronal Theories of the Brain, pages 125-152. MIT Press, Cambridge, MA, 1994.
[7] P. Neskovic. Feedforward, Feedback Neural Networks With Context Driven Segmentation And Recognition. PhD thesis, Brown University, Physics Dept., May 1999.
[8] P. Neskovic and L. Cooper. Neural network-based context driven recognition of on-line cursive script. In 7th IWFHR, 2000.
[9] L. Rabiner and B. Juang. An introduction to hidden Markov models. IEEE ASSP Magazine, 3(1):4-16, 1986.
[10] D. E. Rumelhart. Theory to practice: A case study - recognizing cursive handwriting. In E. B. Baum, editor, Computational Learning and Cognition: Proceedings of the Third NEC Research Symposium. SIAM, Philadelphia, 1993.
[11] M. Schenkel, I. Guyon, and D. Henderson. On-line cursive script recognition using time delay neural networks and hidden Markov models. Machine Vision and Applications, 8:215-223, 1995.
A new model of spatial representations in multimodal brain areas

Sophie Deneve
Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14620. sdeneve@bcs.rochester.edu

Jean-Rene Duhamel
Institut des Sciences Cognitives, C.N.R.S., Bron, France 69675. jrd@isc.cnrs.fr

Alexandre Pouget
Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14620. alex@bcs.rochester.edu

Abstract

Most models of spatial representations in the cortex assume cells with limited receptive fields that are defined in a particular egocentric frame of reference. However, cells outside of primary sensory cortex are either gain modulated by postural input or partially shifting. We show that solving classical spatial tasks, like sensory prediction, multi-sensory integration, sensory-motor transformation and motor control, requires more complicated intermediate representations that are not invariant in one frame of reference. We present an iterative basis function map that performs these spatial tasks optimally with gain-modulated and partially shifting units, and test it against neurophysiological and neuropsychological data.

In order to perform an action directed toward an object, it is necessary to have a representation of its spatial location. The brain must be able to use spatial cues coming from different modalities (e.g. vision, audition, touch, proprioception), combine them to infer the position of the object, and compute the appropriate movement. These cues are in different frames of reference corresponding to different sensory or motor modalities. Visual inputs are primarily encoded in retinotopic maps, auditory inputs are encoded in head-centered maps and tactile cues are encoded in skin-centered maps. Going from one frame of reference to the other might seem easy. For example, the head-centered position of an object can be approximated by the sum of its retinotopic position and the eye position.
However, positions are represented by population codes in the brain, and computing a head-centered map from a retinotopic map is a more complex computation than the underlying sum. Moreover, as we get closer to sensory-motor areas it seems reasonable to assume that the representations should be useful for sensory-motor transformations, rather than encode an "invariant" representation.

Figure 1: Response of a VIP cell to visual stimuli appearing in different parts of the screen, for three different eye positions. The level of grey represents the frequency of discharge (in spikes per second). The white cross is the fixation point (the head is fixed). The cell's receptive field is moving with the eyes, but only partially; here the receptive field shift is 60% of the total gaze shift. Moreover, this cell is gain modulated by eye position (adapted from Duhamel et al).

According to the linear model, space is always represented in the sensory and sensory-motor cortex in one particular egocentric frame of reference. This process is mediated by cells whose receptive fields are anchored to a particular body part. In this view, spatial cues coming from different modalities should all be remapped into a common frame of reference at some point, which can be used in turn to compute motor maps (for reaching, grasping, etc.). The linear model was challenged when cells truly invariant in one modality failed to be found in parietal areas. Andersen et al., for example, found retinotopic cells that were gain modulated by eye position in LIP [1], but none of these cells had a head-centered receptive field. Subsequent studies confirmed that gain modulation by eye position is a very general phenomenon in the cortex, whereas truly head-centered or arm-centered cells have rarely been reported. More recently, in VIP, Duhamel et al. found cells that were neither eye- nor head-centered, but whose receptive fields were partially moving with the eyes [2].
As a consequence, the receptive fields appeared to be moving both in the retinotopic and head-centered frames of reference (see figure 1). The amount of shift with gaze varied from cell to cell, and was continuously distributed between 0% (head-centered) and 100% (retinotopic). Partially shifting cells were also found for auditory targets in LIP [5] and in the superior colliculus [3]. We will show in this paper that the nature of the problem of integrating postural and sensory inputs from different modalities, and providing motor outputs with distributed population codes, leads us to postulate the existence of these gain-modulated and/or partially moving receptive fields in the associative brain areas, instead of invariant representations. We present an interconnected network that can perform multi-directional coordinate and sensory-motor transforms by using intermediate basis function units. These intermediate units are gain modulated by eye position, have partially shifting receptive fields and, as a result, represent space in a mixture of frames of reference. They provide a new model of spatial representations in multimodal areas according to which cell responses are not determined solely by the position of the stimulus in a particular egocentric frame of reference, but by the interactions between the dominant input modalities.

1 Sensory predictions and sensory-motor transformations with distributed population codes

We will focus on the eye/head system, which deals with two frames of reference (retinotopic and head-centered) and one postural input (the eye position). Sensory predictions consist of anticipating a stimulus in one sensory modality from a stimulus originating from the same location, but in another sensory modality. Prediction of auditory stimuli from visual stimuli, for example, requires the computation of a head-centered map from a retinotopic map.
1.1 Coordinate transforms and sensory predictions

We assume that the tuned response of a retinotopic cell can be modeled by a Gaussian B_r(R − R_i) of the distance between the stimulus position R and the receptive field center R_i, and that the response of a postural cell to eye position can be modeled by a Gaussian B_e(E − E_j) of the difference between the eye position E and the preferred angle E_j. In addition, we suppose that cells are organized topographically in each layer, so that a stimulus at position r and eye position g will give rise to a hill of activity peaking at position r on the retinotopic map and g on the eye position map. We wish to compute a head-centered map where cell responses are described by head-centered Gaussian tuning curves B_h(H − H_k), where H is the head-centered position and H_k the preferred position. Given the geometry of the eye/head system, we have approximately H = R + E, but this does not simplify the computation of coordinate transforms with population codes. We certainly cannot have B_h(H − H_k) = B_e(E − E_j) + B_r(R − R_k).

1.2 Basis function map

To solve this problem we could use an intermediate neural layer that implements a product between visual and postural tuning curves [4]. Products of Gaussians are basis functions, and thus a population of retinotopic cells gain modulated by eye position, whose responses are described by B_r(R − R_i)B_e(E − E_j), implements a basis function map of R and E. Any function f(R, E) can be approximated by a linear combination of these cell responses:

f(R, E) = Σ_ij w_ij B_r(R − R_i) B_e(E − E_j).    (1)

In particular, a head-centered map is a function of retinotopic position and eye position and can be computed very easily from the basis function map (by a simple linear combination). Even more importantly, any sensory-motor transform can be implemented by feedforward weights coming from the basis function layer.
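The readout of Eq. (1) can be sketched numerically. This is a minimal sketch under assumed parameters (map size `N = 40`, tuning width `sigma`, wrap-around preferred positions), not the paper's simulation: the weights are chosen by hand so that the basis unit preferring (R_i, E_j) contributes to the head-centered output unit preferring H_k = R_i + E_j.

```python
import numpy as np

def gauss(x, centers, sigma):
    """Gaussian tuning curves B(x - center) over a 1D array of preferred values."""
    return np.exp(-(x - centers) ** 2 / (2 * sigma ** 2))

# Illustrative sizes and tuning width (not the paper's parameters).
N, sigma = 40, 3.0
pref = np.arange(N, dtype=float)          # preferred positions R_i, E_j, H_k

def head_centered_map(r, e):
    """Eq. (1) sketch: a basis function layer of products
    B_r(R - R_i) B_e(E - E_j), read out with weights sending unit (i, j)
    to output unit k = i + j, yields a head-centered hill peaked near
    H = R + E."""
    basis = np.outer(gauss(r, pref, sigma), gauss(e, pref, sigma))  # (i, j)
    out = np.zeros(N)
    for i in range(N):
        for j in range(N):
            k = (i + j) % N               # circular map: H_k = R_i + E_j
            out[k] += basis[i, j]
    return out

h = head_centered_map(r=10.0, e=15.0)
peak = int(np.argmax(h))                  # hill peaks near H = 25
```

The same basis layer, read out with different weights, would implement any other function f(R, E), which is the point of the argument above.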
The basis function map itself can be readily implemented from a retinotopic map and an eye position map, by connecting each unit with one visual cell and one eye position cell, and computing a product between these two inputs [4]. Similarly, another basis function map could be implemented by taking the product between auditory and postural tuning curves, B_h(H − H_k)B_e(E − E_j), in order to predict the position of a visual cue from the sound it makes, or to compute reaching toward auditory cues. However, it would be better to combine these two basis function maps in a common architecture, especially if we want to integrate visual and auditory inputs or implement motor feedback to sensory representations, both of which require a multi-directional computation.

2 Multi-directional coordinate transforms with distributed population codes

If we want to combine these two basis function maps without giving priority to one modality, we can intuitively use basis functions that are a product of the three tuning curves:

B_r(R − R_i) B_e(E − E_j) B_h(H − H_{i+j}).    (2)

From this intermediate representation, the three sensory maps B_r(R − R_i), B_e(E − E_j) and B_h(H − H_{i+j}) can be computed by simple projections. This ensures that these basis function units can use the two sensory maps as both input and output. We implemented this idea in an interconnected neural network that non-linearly combines visual, auditory and postural inputs in an intermediate layer (the basis function map), which in turn is used to reconstruct the activities on the auditory, visual, and postural layers. This network is completely symmetric, processing visual, postural and auditory inputs in the same way. It converges to stable hills of activity on the three neural maps that simultaneously give the retinotopic position, head-centered position, and the eye position in the input (see figure 2A), performing multi-directional sensory prediction. For this reason, we called this model an iterative basis function network.
3 The iterative basis function network

The network is represented in figure 2A. It has four layers: three visible, one-dimensional layers (visual, auditory and postural) and a two-dimensional hidden layer. The three input layers are not directly connected to one another, but they are all interconnected with the hidden layer. These interconnections are symmetric, i.e. the connection between neurons A and B has the same strength as the connection between neurons B and A. This ensures that the network will converge towards a stable state. We note W_r, W_h, W_e the respective weights of the retinotopic, head-centered and eye position layers with the hidden layer. All three weight matrices are circular symmetric Gaussian filters. The connections between the i-th unit in each input layer and the hidden unit l, m are w_r(i, l, m) = B(l − i), w_e(i, l, m) = B(m − i), w_h(i, l, m) = B((l + m) − i), where B is a circular Gaussian (Eq. (3)); σ_w governs the width of the weights, Z is a constant that controls the dominance of the corresponding sensory or postural modality on the intermediate layer, and N is the number of units in the input layers. Note that with these weights, the hidden unit l, m is maximally connected to the unit l in the retinotopic layer, m in the eye position layer, and l + m in the head-centered layer.

Figure 2: A- Architecture of the iterative BF map. The intermediate cells look like partially shifting cells in VIP.

This connectivity is responsible
for the fact that the network will compute H = R + E. This approach can generalize to an arbitrary mapping M = f(R, E) if we replace w_h(i, l, m) = B((l + m) − i) by w_h(i, l, m) = B(f(l, m) − i). Activities on the input layers are pooled linearly on the intermediate layer, according to the connection matrices. Then these pooled inputs are squared and normalized with a divisive inhibition. The resulting activities on the intermediate layer are then sent back to the input layers, through the symmetric connections, and in turn squared and normalized. The inputs are modeled by bell-shaped distributions of activity clamped on the input layers at time 0. The amplitude of these initial hills of activity represents the contrast of the stimuli. A purely visual stimulus, for example, would have an auditory contrast of 0 on the head-centered layer. Except for very low contrasts in all modalities, the network converges toward non-zero stable states when provided with visual, auditory, or bimodal input. These stable states are stable hills of activity on the visual, auditory and postural layers, such that the position of the hill on the head-centered layer is the sum of the positions of the hills on the visual and postural layers. When provided with visual and postural input, the network predicts the auditory position of the stimulus. When provided with auditory and postural input, the retinotopic position can be read from the position of the stable hill on the visual layer. Thus, the network automatically performs coordinate transforms in both directions. The whole process takes no more than 2 iterations.

Figure 2B: An intermediate cell's response properties when one varies the ratio Z_r/Z_h of modality dominance (strength of the weights). The gain of the shift varies from 0 to 1 depending on the relative strength of W_h (the auditory weights) and W_r (the visual weights).
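The update cycle just described (linear pooling, squaring, divisive normalization, back-projection) can be sketched as follows. This is a deliberately simplified sketch, not the paper's network: the function name, the map size, the delta-function connectivity (each hidden unit (l, m) pools exactly units l, m and l+m rather than Gaussian-weighted neighborhoods), and the step count are all illustrative assumptions.

```python
import numpy as np

def iterate_bf_network(vis, aud, eye, n_steps=3):
    """Sketch of the iterative basis-function update: hidden unit (l, m)
    pools unit l of the visual layer, m of the eye-position layer and
    l+m (mod N) of the auditory (head-centered) layer; pooled activities
    are squared and divisively normalized, then projected back through
    the same (symmetric) connections and normalized again."""
    N = len(vis)
    l = np.arange(N)[:, None]
    m = np.arange(N)[None, :]
    for _ in range(n_steps):
        hidden = vis[l] + eye[m] + aud[(l + m) % N]   # linear pooling
        hidden = hidden ** 2                          # squaring
        hidden /= hidden.sum() + 1e-12                # divisive normalization
        vis, eye = hidden.sum(axis=1), hidden.sum(axis=0)   # back-projection
        aud = np.bincount((l + m).ravel() % N,
                          weights=hidden.ravel(), minlength=N)
        for v in (vis, eye, aud):                     # square + normalize maps
            np.power(v, 2, out=v)
            v /= v.sum() + 1e-12
    return vis, aud, eye

# A visual hill at 10 and a postural hill at 15 fill in the auditory hill near 25.
g = lambda c: np.exp(-(np.arange(40) - c) ** 2 / 18.0)
vis, aud, eye = iterate_bf_network(g(10), np.zeros(40), g(15))
```

Squaring the summed inputs produces the cross terms between modalities, which is what makes the additive pooling behave like the product of tuning curves in Eq. (2).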
4 Spatial representation in the intermediate layer

The cells in the intermediate layer provide a multimodal representation of space that we can characterize and compare to neurophysiological data. We will focus on the units' responses after the network has reached its stable state. The final state depends only on the position encoded in the input, which implies that the units' responses are identical regardless of the input modality (visual, auditory or bimodal). The receptive fields in different modalities are spatially congruent, like the receptive fields of most multimodal cells in the brain. In figure 2B, we plotted for different eye positions the activity of an intermediate cell as a function of the retinotopic position of the stimulus. Note that because of the symmetries in the network, all the other intermediate cell responses are translated versions of this one. The critical parameter that governs the intermediate representation is the ratio Z_r/Z_h that defines the relative strength of the visual and auditory weights. This is the only parameter we manipulated in this study. When neither the visual nor the auditory representation dominates (that is, when Z_r/Z_h = 1, see figure 2B, top panel), the intermediate cell's receptive field on the retina shifts with the eyes, but it does not shift as much as the eyes do. This is a partially shifting cell, gain modulated by eye position. The amount of receptive field shift with the gaze is 50%. In fact, we found that this cell's response was very close to a product of a Gaussian of retinotopic position, head-centered position and eye position, thus implementing the basis function we already proposed as a solution to the multi-directional computation problem. This cell looks very much like a one-dimensional version of the particular VIP cell plotted in figure 1.
Varying the ratio x = Z_r/Z_h does not affect the performance of the network for coordinate transforms (the only change occurring on the input layers is a change in the amplitude of the stable hills), but it changes the intermediate representation, particularly the amount of receptive field shift with gaze. There is a continuum between a gain-modulated retinotopic cell for a high value of x (0% shift, figure 2B, middle panel) and a gain-modulated head-centered cell for a low value of x (100% shift, figure 2B, bottom panel). This behavior is easy to understand: an intermediate cell receives tuned retinotopic, head-centered and eye position inputs. These three tuned inputs will more or less influence the unit's response, depending on their strength. Thus, the whole distribution of shifts found in VIP could belong to an iterative basis function map with a varying ratio between visual and "head-centered" weights. In the case of VIP, "head-centered" would correspond to tactile, as VIP is a visuo-tactile area. On the other hand, if one modality dominates in all cells (e.g. vision in LIP), we can predict that the distribution of responses will be displaced toward the frame of reference of this modality.

5 Lesion of the iterative basis function map

In order to link the intermediate representation with spatial representations in the human parietal cortex, we studied the consequences of a lesion to this network. Unilateral right parietal lesions result in a syndrome called hemineglect: the patient is slower to react to, and has difficulty detecting, stimuli in the contralesional space. This is usually coupled with extinction of leftward stimuli by rightward stimuli. One striking characteristic of hemineglect is that it is usually expressed in a mixture of frames of reference, challenging the view that parietal cortex is a mosaic of areas devoted to spatial processing in different frames of reference. Additionally, extinction is frequently cross-modal.
For example, tactile stimuli can be extinguished by visual stimuli, suggesting that the lesioned spatial representations are themselves multimodal. We modeled a right parietal lesion by implementing a gradient of units in the intermediate layer, so that there are more cells tuned to contralateral retinotopic (visual) and contralateral head-centered (auditory) positions. This corresponds to the observed hemispheric asymmetries in the monkey's brain. This modification did not strongly affect the final estimates of position by the network, but processing was slower (taking more time to reach the stable state) and the contrast threshold (the minimal visual and auditory contrast that drives the network) was higher for leftward retinal and head-centered locations. Thus the network "neglected" stimuli in a mixture of frames of reference: the severity of neglect gradually increased from right to left both in retinotopic and head-centered coordinates. Furthermore, when we presented two simultaneous inputs to the network, we observed that the leftward stimulus was always extinguished by the rightward stimulus (the final stable state reflected only the rightward stimulus), regardless of the modality. Thus we obtained extinction of auditory stimuli by visual stimuli, and vice versa. In our model, these two aspects of neglect (mixture of frames of reference and cross-modal extinction) can be explained by a lesion in only one multimodal brain area.

6 Conclusion

Our approach can be easily generalized to sensory-motor transformations. In this case, the implementation of motor control (the feedback from the motor representations to the sensory representations) will lead to intermediate cells that partially shift in the sensory as well as the motor frame of reference. This model has other (related) interesting properties that we develop elsewhere.
In the presence of noisy input, it can perform optimal multi-sensory cue integration, and allows an adaptive Bayesian approach to cue integration, in a biologically realistic way. Iterative basis function maps provide a new model of spatial representations and processing that can be applied to neurophysiological and neuropsychological data.

References

[1] R. Andersen, R. Bracewell, S. Barash, J. Gnadt, and L. Fogassi. Eye position effect on visual memory and saccade-related activity in areas LIP and 7a of macaque. Journal of Neuroscience, 10:1176-1196, 1990.
[2] J. Duhamel, F. Bremmer, S. BenHamed, and W. Graf. Spatial invariance of visual receptive fields in parietal cortex. Nature, 389(6653):845-848, 1997.
[3] M. Jay and D. Sparks. Sensorimotor integration in the primate superior colliculus: I. Motor convergence. Journal of Neurophysiology, 57:22-34, 1987.
[4] A. Pouget and T. Sejnowski. Spatial transformations in the parietal cortex using basis functions. Journal of Cognitive Neuroscience, 9(2), 1997.
[5] B. Stricanne, P. Mazzoni, and R. Andersen. Modulation by eye position of auditory responses of macaque area LIP in an auditory memory saccade task. In Society for Neuroscience Abstracts, page 26, Washington, D.C., 1993.
Feature Correspondence: A Markov Chain Monte Carlo Approach

Frank Dellaert, Steven M. Seitz, Sebastian Thrun, and Charles Thorpe
Department of Computer Science & Robotics Institute
Carnegie Mellon University, Pittsburgh, PA 15213
{dellaert,seitz,thrun,cet}@cs.cmu.edu

Abstract

When trying to recover 3D structure from a set of images, the most difficult problem is establishing the correspondence between the measurements. Most existing approaches assume that features can be tracked across frames, whereas methods that exploit rigidity constraints to facilitate matching do so only under restricted camera motion. In this paper we propose a Bayesian approach that avoids the brittleness associated with singling out one "best" correspondence, and instead considers the distribution over all possible correspondences. We treat both a fully Bayesian approach that yields a posterior distribution, and a MAP approach that makes use of EM to maximize this posterior. We show how Markov chain Monte Carlo methods can be used to implement these techniques in practice, and present experimental results on real data.

1 Introduction

Structure from motion (SFM) addresses the problem of simultaneously recovering camera pose and a three-dimensional model from a collection of images. This problem has received considerable attention in the computer vision community [1, 2, 3]. Methods that can robustly reconstruct the 3D structure of environments have a potentially large impact in many areas of societal importance, such as architecture, entertainment, space exploration and mobile robotics. A fundamental problem in SFM is data association, i.e., the question of determining correspondence between features observed in different images. This problem has been referred to as the most difficult part of structure recovery [4], and is particularly challenging if the images have been taken from widely separated viewpoints.
Virtually all existing approaches assume that either the correspondence is known a priori, or that features can be tracked from frame to frame [1, 2]. Methods based on the robust recovery of epipolar geometry [3, 4] can cope with larger inter-frame displacements, but still depend on the ability to identify a set of initial correspondences to seed the robust matching process. In this paper, we are interested in cases where individual camera images are recorded from vastly different viewpoints, which renders existing SFM approaches inapplicable. Traditional approaches for establishing correspondence between sets of 2D features [5, 6, 7] are of limited use in this domain, as the projected 3D structure can look very different in each image. This paper proposes a Bayesian approach to data association. Instead of considering only a single correspondence (which we conjecture to be brittle), our approach considers whole distributions over correspondences. As a result, our approach is more robust, and from a Bayesian perspective it is also sound. Unfortunately, no closed-form solution exists for calculating these distributions conditioned on the camera images. Therefore, we propose to use the Metropolis-Hastings algorithm, a popular Markov chain Monte Carlo (MCMC) method, to sample from the posterior. In particular, we propose two different algorithms. The first method, discussed in Section 2, is mathematically more powerful but computationally expensive. It uses MCMC to sample from the joint distribution over both correspondences and three-dimensional scene structure. While this approach is mathematically elegant from a Bayesian point of view, we have so far only been able to obtain results for simple, artificial domains. Thus, to cope with large-scale data sets, we propose in Section 3 a maximum a posteriori (MAP) approach using the Expectation-Maximization (EM) algorithm to maximize the posterior. Here we use MCMC sampling only for the data association problem.
Simulated annealing is used to reduce the danger of getting stuck in local minima. Experimental results obtained in realistic domains and presented in Section 4 suggest that this approach works well in the general SFM case, and that it scales favorably to complex computer vision problems. The idea of using MCMC for data association has been used before by [8] in the context of a traffic surveillance application. However, their approach is not directly applicable to SFM, as the computer vision domain is characterized by a large number of local minima. Our paper goes beyond theirs in two important aspects: first, we develop a framework for MCMC sampling over both the data association and the model, and second, we apply annealing to smooth the posterior so as to reduce the chance of getting stuck in local minima. In a previous paper [9] we discussed the idea of using EM for SFM, but without the unifying framework presented below.

2 A Fully Bayesian Approach using MCMC

Below we derive the general approach for MCMC sampling from the joint posterior over data association and models. We only show results for a simple example from pose estimation, as this approach is computationally very demanding. An EM approach based on the general principles described here, but applicable to larger-scale problems, will be described in the next section.

2.1 Structure from Motion

The structure from motion problem is this: given a set of images of a scene, taken from different viewpoints, recover the 3D structure of the scene along with the camera parameters. In the feature-based approach to SFM, we consider the situation in which a set of N 3D features x_j is viewed by a set of m cameras with parameters m_i. As input data we are given the set of 2D measurements u_ik in the images, where k ∈ {1..K_i} and K_i is the number of measurements in the i-th image.
To model correspondence information, we introduce for each measurement u_ik the indicator variable j_ik, indicating that u_ik is a measurement of the j_ik-th feature x_{j_ik}. The choice of feature type and camera model determines the measurement function h(m_i, x_j), predicting the measurement u_ik given m_i and x_j (with j = j_ik): u_ik = h(m_i, x_j) + n, where n is the measurement noise. Without loss of generality, let us consider the case in which the features x_j are 3D points and the measurements u_ik are points in the 2D image. In this case the measurement function can be written as a 3D rigid displacement followed by a projection:

h(m_i, x_j) = Φ(R_i x_j + t_i)    (1)

where R_i and t_i are the rotation matrix and translation of the i-th camera, respectively, and Φ : ℝ³ → ℝ² is the camera projection model.

2.2 Deriving the Posterior

Whereas previous methods single out a single "best" correspondence across images, in a Bayesian framework we are interested in characterizing our knowledge about the unknowns conditioned on the data only, averaging over all possible correspondences. Thus, we are interested in the posterior distribution P(Θ|U), where Θ collects the unknown model parameters m_i and x_j. In the case of unknown correspondence, we need to sum over all possible assignments J = {j_ik} to obtain

P(Θ|U) = Σ_J P(J, Θ|U) ∝ P(Θ) Σ_J P(U|J, Θ) P(J|Θ)    (2)

where we have applied Bayes law and the chain rule. Let us assume for now that there are no occlusions or spurious measurements, so that K_i = N and J is a set of m permutations J_i of the indices 1..N. Then, assuming i.i.d. normally distributed noise on the measurements, each term in (2) can be calculated using

P(U|J, Θ) P(J|Θ) = (1/N!)^m P(U|J, Θ) = (1/N!)^m ∏_{i=1}^m ∏_{k=1}^{K_i} N(u_ik; h(m_i, x_{j_ik}), σ)    (3)

if each J_i is a permutation, and 0 otherwise. Here N(·; μ, σ) denotes the normal distribution with mean μ and standard deviation σ. The first identity in (3) holds if we assume each of the N! possible permutations to be equally likely a priori.
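The measurement function of Eq. (1) can be sketched concretely. The paper leaves the projection model Φ generic; a simple pinhole camera with focal length `f` is assumed here purely for illustration, and the function name `project` is made up.

```python
import numpy as np

def project(R, t, x, f=1.0):
    """Measurement function h(m_i, x_j) of Eq. (1): a 3D rigid
    displacement into the camera frame followed by a projection.
    A pinhole model with focal length f is assumed for Phi."""
    X = R @ x + t                 # rigid displacement: R_i x_j + t_i
    return f * X[:2] / X[2]       # pinhole projection Phi: R^3 -> R^2
```

For example, a point one unit to the right of the optical axis, two units in front of an identity-pose camera, projects to (0.5, 0).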
2.3 Sampling from the Posterior using MCMC Unfortunately, direct computation of the total posterior distribution P(Θ|U) in (2) is intractable in general, because the number of correspondence assignments J is combinatorial in the number of features and images. As a solution to this computational challenge we propose to instead sample from P(Θ|U). Sampling directly from P(Θ|U) is equally difficult, but if we can obtain a sample {(Θ^(r), J^(r))} from the joint distribution P(Θ, J|U), we can simply discard the correspondence part J^(r) to obtain a sample {Θ^(r)} from the marginal distribution P(Θ|U). To sample from the joint distribution P(Θ, J|U) we propose to use MCMC sampling, in particular the Metropolis-Hastings algorithm [10]. This method involves simulating a Markov chain whose equilibrium distribution is the desired posterior distribution P(Θ, J|U). Defining X ≜ (J, Θ), the algorithm is: 1. Start with a random initial state X^(0). 2. Propose a new state X' using a chosen proposal density Q(X'; X^(r)). 3. Compute the ratio a = [P(X'|U) Q(X^(r); X')] / [P(X^(r)|U) Q(X'; X^(r))]. (4) 4. Accept X' as X^(r+1) with probability min(a, 1); otherwise X^(r+1) = X^(r). Figure 1: Left: A 2D model shape, defined by the 6 feature points x_j. Right: Transformed shape (by a simple rotation) and 6 noisy measurements u_k of the transformed features. The true rotation is 70 degrees, the noise is zero-mean Gaussian. The sequence of tuples (Θ^(r), J^(r)) thus generated will be a sample from P(Θ, J|U), if the sampler is run sufficiently long. To calculate the acceptance ratio a, we assume that the noise on the feature measurements is normally distributed and isotropic. Using Bayes law and Eq. (3), we can then rewrite a from (4) as
a = [∏_{i=1}^{m} ∏_{k=1}^{K_i} N(u_ik; h(m'_i, x'_{j'_ik}), σ) / ∏_{i=1}^{m} ∏_{k=1}^{K_i} N(u_ik; h(m_i^(r), x_{j_ik^(r)}^(r)), σ)] × [Q(X^(r); X') / Q(X'; X^(r))]. Simplifying the notation by defining h_ik^(r) ≜ h(m_i^(r), x_{j_ik^(r)}^(r)), we obtain a = [Q(X^(r); X') / Q(X'; X^(r))] exp[ (1/2σ²) Σ_{i,k} ( ‖u_ik − h_ik^(r)‖² − ‖u_ik − h'_ik‖² ) ]. (5) The proposal density Q(·;·) is application dependent, and an example is given below. 2.4 Example: A 2D Pose Estimation Problem To illustrate this method, we present a simple example from pose estimation. Assume we have a 2D model shape, given in the form of a set of 2D points x_j, as shown in Figure 1. We observe an image of this shape which has undergone a rotation θ to be estimated. This rotated shape is shown at right in the figure, along with 6 noisy measurements u_k on the feature points. In Figure 2 at left we show the posterior distribution over the rotation parameter, given the measurements from Figure 1 and with known correspondence. In this case, the posterior is unimodal. In the case of unknown correspondence, the posterior conditioned on the data alone is shown at right in Figure 2 and is a mixture of 6! = 720 functions of the form (3), with 6 equally likely modes induced by the symmetry of the model shape. In order to perform MCMC sampling, we implement the proposal step by choosing randomly between two strategies. (a) In a "small perturbation" we keep the correspondence assignment J but add a small amount of noise to θ. This serves to explore the values of θ within a mode of the posterior probability. (b) In a "long jump", we completely randomize both θ and J. This provides a way to jump between probability modes. Note that Q(X^(r); X') / Q(X'; X^(r)) = 1 for this proposal density. The result of the sampling procedure is shown as a histogram of the rotation parameter θ in Figure 3. The histogram is a non-parametric approximation to the analytic posterior shown in Figure 2. The figure shows the results of running a sampler for 100,000 steps, the first 1000 of which were discarded as a transient.
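The Metropolis-Hastings loop with the two proposal strategies can be sketched on a toy rotation-estimation problem. This is a minimal illustration under simplifying assumptions (an asymmetric 3-point shape so that θ is identifiable, noise-free measurements, exact orthographic geometry in 2D); all names and parameter values are illustrative, not the paper's implementation:

```python
import math, random

def log_post(theta, J, pts, meas, sigma=0.05):
    """Log posterior (up to a constant) of rotation theta and assignment J,
    assuming isotropic Gaussian measurement noise, as in Eq. (5)."""
    c, s = math.cos(theta), math.sin(theta)
    total = 0.0
    for k, j in enumerate(J):
        px, py = pts[j]
        hx, hy = c * px - s * py, s * px + c * py   # rotated model point
        total -= ((meas[k][0] - hx) ** 2 + (meas[k][1] - hy) ** 2) / (2 * sigma ** 2)
    return total

def sample(pts, meas, n_steps=20000, seed=0):
    rng = random.Random(seed)
    theta, J = rng.uniform(0, 2 * math.pi), list(range(len(pts)))
    lp = log_post(theta, J, pts, meas)
    thetas = []
    for _ in range(n_steps):
        if rng.random() < 0.3:                       # "long jump": randomize both
            t2, J2 = rng.uniform(0, 2 * math.pi), rng.sample(J, len(J))
        else:                                        # "small perturbation" of theta
            t2, J2 = theta + rng.gauss(0, 0.02), J
        lp2 = log_post(t2, J2, pts, meas)
        # both proposals are symmetric, so the Q-ratio in Eq. (4) is 1
        if math.log(rng.random() + 1e-300) < lp2 - lp:
            theta, J, lp = t2 % (2 * math.pi), J2, lp2
        thetas.append(theta)
    return thetas

# asymmetric 3-point shape; true rotation 70 degrees, correspondence unknown
pts = [(1.0, 0.0), (0.0, 0.6), (-0.5, -0.2)]
t_true = math.radians(70)
c, s = math.cos(t_true), math.sin(t_true)
meas = [(c * x - s * y, s * x + c * y) for (x, y) in pts]
thetas = sample(pts, meas)
```

After the transient, the chain concentrates near the true rotation; with a symmetric shape like the hexagon in the text, the long jumps would instead visit all equivalent modes.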
Note that even for this simple example, there is still considerable correlation in the sample of 100,000 states, as evidenced by the uneven mass in each of the 6 analytically predicted modes. Figure 2: (Left) The posterior distribution over rotation θ with known correspondence, and (Right) with unknown correspondence, a mixture with 720 components. Figure 3: Histogram for the values of θ obtained in one MCMC run, for the situation in Figure 1. The MCMC sampler was run for 100,000 steps. 3 Maximum a Posteriori Estimation using MCEM As illustrated above, sampling from the joint probability over assignments J and parameters Θ using MCMC can be very expensive. However, if only a maximum a posteriori (MAP) estimate is needed, sampling over the joint space can be avoided by means of the EM algorithm. To obtain the MAP estimate, we need to maximize P(Θ|U) as given by (2). This is intractable in general because of the combinatorial number of terms. The EM algorithm provides a tractable alternative to maximizing P(Θ|U), using the correspondence J as a hidden variable [11]. It iterates over: E-step: Calculate the expected log-posterior Q^t(Θ): Q^t(Θ) ≜ E_{J}{log P(Θ|U, J) | U} = Σ_J P(J|U, Θ^t) log P(Θ|U, J), (6) where the expectation is taken with respect to the posterior distribution P(J|U, Θ^t) over all possible correspondence assignments J given the measurement data U and a current guess Θ^t for the parameters. M-step: Re-estimate Θ^{t+1} by maximizing Q^t(Θ), i.e., Θ^{t+1} = argmax_Θ Q^t(Θ). Instead of calculating Q^t(Θ) exactly using (6), which again involves summing over a combinatorial number of terms, we can replace it by a Monte Carlo approximation: Q^t(Θ) ≈ (1/R) Σ_{r=1}^{R} log P(Θ|U, J^(r)), (7) where {J^(r)} is a sample from P(J|U, Θ^t) obtained by MCMC sampling. Formally this can be justified in the context of a Monte Carlo EM or MCEM, a version Figure 4: Three out of 11 cube images.
Although the images were originally taken as a sequence in time, the ordering of the images is irrelevant to our method. Figure 5: Starting from random structure (t=0) we recover gross 3D structure in the very first iteration (t=1). As the annealing parameter σ is gradually decreased, successively finer details are resolved (iterations 1, 10, 20, and 100 are shown, with σ = 25.1, 18.7, 13.5, and 1.0 respectively). of the EM algorithm where the E-step is executed by a Monte-Carlo process [11]. The sampling proceeds as in the previous section, using the Metropolis-Hastings algorithm, but now with a fixed parameter Θ = Θ^t. Note that at each iteration the estimate Θ^t changes and we sample from a different posterior distribution P(J|U, Θ^t). In practice it is important to add annealing to this basic EM scheme, to avoid getting stuck in local minima. In simulated annealing we artificially increase the noise parameter σ for the early iterations, gradually decreasing it to its correct value. This has two beneficial consequences. First, the posterior distribution P(J|U, Θ^t) is less peaked when σ is high, allowing the MCMC sampler to explore the space of assignments J more easily. Second, the expected log-posterior Q^t(Θ) is smoother and has fewer local maxima for higher values of σ. 4 Results To validate our approach we have conducted a number of experiments, one of which is presented here. The input data in this experiment consisted of 55 manually selected measurements in each of 11 input images, three of which are shown in Figure 4. Note that features are not tracked from frame to frame and the images can be presented in arbitrary order. To initialize, the 11 cameras m_i are all placed at the origin, looking towards the 55 model points x_j, which are themselves normally distributed at unit distance from the cameras. We used an orthographic projection model. The EM algorithm was run for 100 iterations, and the sampler for 10000 steps per image.
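The MCEM iteration with annealing can be illustrated on a toy problem. Here each datum has a hidden sign (a one-bit stand-in for the correspondence J), the E-step samples the hidden variables at the current parameter estimate, the M-step re-estimates the parameter from the sampled completions, and the noise scale is annealed down to its true value. Everything in this sketch (the model, the function names, the schedule) is illustrative, not the paper's SFM implementation:

```python
import math, random

def mcem_toy(data, theta0, n_iters=40, n_samples=200, sigma=0.5,
             sigma0=1.0, decay=0.9, seed=0):
    """Toy Monte Carlo EM with annealing. Each datum u ~ N(j*theta, sigma^2)
    with a hidden sign j in {+1, -1}.
    E-step: sample the hidden signs from their posterior at the current
    theta (exact sampling here stands in for the MCMC sampler of Eq. (7)).
    M-step: theta <- average of the sampled completions j*u.
    Annealing: the noise scale starts at sigma0 and decays to sigma."""
    rng = random.Random(seed)
    theta = theta0
    for t in range(n_iters):
        sig = max(sigma, sigma0 * decay ** t)       # annealed noise level
        est = 0.0
        for _ in range(n_samples):
            s = 0.0
            for u in data:
                # posterior log-odds of j=+1 vs j=-1 is 2*u*theta/sig^2
                p_plus = 1.0 / (1.0 + math.exp(-2.0 * u * theta / sig ** 2))
                s += (1 if rng.random() < p_plus else -1) * u
            est += s / len(data)
        theta = est / n_samples                     # M-step
    return theta

data = [1.0, 1.1, -0.9, -1.05, 0.95]   # |u| near 1, so |theta| should approach 1
theta_hat = mcem_toy(data, theta0=0.3)
```

The early high-noise iterations keep the sign posteriors soft (the analog of the smoother Q^t(Θ) described above); as sigma decreases, the assignments harden and the estimate sharpens. The model is symmetric under theta -> -theta, so only |theta| is identified.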
For this data set the algorithm took about a minute to complete on a standard PC. The algorithm converges consistently and fast to an estimate for the structure and motion where the correct correspondence is the most probable one, and where all assignments in the different images agree with each other. A typical run of the algorithm is shown in Figure 5, where we have shown a wireframe model of the recovered structure at several points during the run. There are two important points to note: (a) the gross structure is recovered in the very first iteration, starting from random initial structure, and (b) finer details of the structure are gradually resolved as the annealing parameter σ is decreased. The estimate for the structure after convergence is almost identical to the one found by the factorization method [1] when this is provided with the correct correspondence. 5 Conclusions and Future Directions In this paper we presented a theoretically sound method to deal with ambiguous feature correspondence, and have shown how Markov chain Monte Carlo sampling can be used to obtain practical algorithms. We have detailed this for two cases: (1) obtaining a posterior distribution over the parameters Θ, and (2) obtaining a MAP estimate by means of EM. In future work, we would like to apply these methods in other domains where data association plays a central role. In particular, in the highly active area of mobile robot mapping, the data association problem is currently a major obstacle to building large-scale maps [12, 13]. We conjecture that our approach is equally applicable to the robotic mapping problem, and can lead to qualitatively new solutions in that domain. References [1] C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: a factorization method. Int. J. of Computer Vision, 9(2):137-154, Nov. 1992. [2] R.I. Hartley. Euclidean reconstruction from uncalibrated views.
In Applications of Invariance in Computer Vision, pages 237-256, 1994. [3] P.A. Beardsley, P.H.S. Torr, and A. Zisserman. 3D model acquisition from extended image sequences. In Eur. Conf. on Computer Vision (ECCV), pages II:683-695, 1996. [4] P. Torr, A. Fitzgibbon, and A. Zisserman. Maintaining multiple motion model hypotheses over many views to recover matching and structure. In Int. Conf. on Computer Vision (ICCV), pages 485-491, 1998. [5] G.L. Scott and H.C. Longuet-Higgins. An algorithm for associating the features of two images. Proceedings of the Royal Society of London, B-244:21-26, 1991. [6] L.S. Shapiro and J.M. Brady. Feature-based correspondence: An eigenvector approach. Image and Vision Computing, 10(5):283-288, June 1992. [7] S. Gold, A. Rangarajan, C. Lu, S. Pappu, and E. Mjolsness. New algorithms for 2D and 3D point matching. Pattern Recognition, 31(8):1019-1031, 1998. [8] H. Pasula, S. Russell, M. Ostland, and Y. Ritov. Tracking many objects with many sensors. In Int. Joint Conf. on Artificial Intelligence (IJCAI), Stockholm, 1999. [9] F. Dellaert, S.M. Seitz, C.E. Thorpe, and S. Thrun. Structure from motion without correspondence. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June 2000. [10] W.R. Gilks, S. Richardson, and D.J. Spiegelhalter, editors. Markov Chain Monte Carlo in Practice. Chapman and Hall, 1996. [11] M.A. Tanner. Tools for Statistical Inference. Springer, 1996. [12] J.J. Leonard and H.J.S. Feder. A computationally efficient method for large-scale concurrent mapping and localization. In Proceedings of the Ninth International Symposium on Robotics Research, Salt Lake City, Utah, 1999. [13] J.A. Castellanos and J.D. Tardós. Mobile Robot Localization and Map Building: A Multisensor Fusion Approach. Kluwer Academic Publishers, Boston, MA, 2000.
An Information Maximization Approach to Overcomplete and Recurrent Representations Oren Shriki and Haim Sompolinsky Racah Institute of Physics and Center for Neural Computation Hebrew University Jerusalem, 91904, Israel Abstract Daniel D. Lee Bell Laboratories Lucent Technologies Murray Hill, NJ 07974 The principle of maximizing mutual information is applied to learning overcomplete and recurrent representations. The underlying model consists of a network of input units driving a larger number of output units with recurrent interactions. In the limit of zero noise, the network is deterministic and the mutual information can be related to the entropy of the output units. Maximizing this entropy with respect to both the feedforward connections as well as the recurrent interactions results in simple learning rules for both sets of parameters. The conventional independent components (ICA) learning algorithm can be recovered as a special case where there is an equal number of output units and no recurrent connections. The application of these new learning rules is illustrated on a simple two-dimensional input example. 1 Introduction Many unsupervised learning algorithms such as principal component analysis, vector quantization, self-organizing feature maps, and others use the principle of minimizing reconstruction error to learn appropriate features from multivariate data [1, 2]. Independent components analysis (ICA) can similarly be understood as maximizing the likelihood of the data under a non-Gaussian generative model, and thus is related to minimizing a reconstruction cost [3, 4, 5]. On the other hand, the same ICA algorithm can also be derived without regard to a particular generative model by maximizing the mutual information between the data and a nonlinearly transformed version of the data [6]. 
This principle of information maximization has also been previously applied to explain optimal properties for single units, linear networks, and symplectic transformations [7, 8, 9]. In these proceedings, we show how the principle of maximizing mutual information can be generalized to overcomplete as well as recurrent representations. In the limit of zero noise, we derive gradient descent learning rules for both the feedforward and recurrent weights. Finally, we show the application of these learning rules to some simple illustrative examples. Figure 1: Network diagram of an overcomplete, recurrent representation, with M output variables and N input variables. x are input data which influence the output signals s through feedforward connections W. The signals s also interact with each other through the recurrent interactions K. 2 Information Maximization The "Infomax" formulation of ICA considers the problem of maximizing the mutual information between N-dimensional data observations {x} which are input to a network resulting in N-dimensional output signals {s} [6]. Here, we consider the general problem where the signals s are M-dimensional with M ≥ N. Thus, the representation is overcomplete because there are more signal components than data components. We also consider the situation where a signal component s_i can influence another component s_j through a recurrent interaction K_ji. As a network, this is diagrammed in Fig. 1, with the feedforward connections described by the M x N matrix W and the recurrent connections by the M x M matrix K. The network response s is a deterministic function of the input x: s = g(Wx + Ks), (1) where g is some nonlinear squashing function. In this case, the mutual information between the inputs x and outputs s is functionally only dependent on the entropy of the outputs: I(s, x) = H(s) - H(s|x) ~ H(s). (2) The distribution of s is an N-dimensional manifold embedded in an M-dimensional vector space and nominally has a negatively divergent entropy.
However, as shown in Appendix 1, the probability density of s can be related to the input distribution via the relation: P(s) ∝ P(x) / √det(χᵀχ), (3) where the susceptibility (or Jacobian) matrix χ is defined as: χ_ij = ∂s_i/∂x_j. (4) This result can be understood in terms of the singular value decomposition (SVD) of the matrix χ. The transformation performed by χ can be decomposed into a series of three transformations: an orthogonal transformation that rotates the axes, a diagonal transformation that scales each axis, followed by another orthogonal transformation. A volume element in the input space is mapped onto a volume element in the output space, and its volume change is described by the diagonal scaling operation. This scale change is given by the product of the square roots of the eigenvalues of χᵀχ. Thus, the relationship between the probability distribution in the input and output spaces includes the proportionality factor √det(χᵀχ), as formally derived in Appendix 1. We now get the following expression for the entropy of the outputs: H(s) ~ -∫ dx P(x) log( P(x) / √det(χᵀχ) ) = (1/2) ⟨log det(χᵀχ)⟩ + H(x), (5) where the brackets indicate averaging over the input distribution.
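The quantities in Eqs. (3)-(5) can be checked numerically on a small network: solve the fixed point s = g(Wx + Ks) by iteration (with g = tanh; convergence is assumed, which holds here because K is small), compute the Jacobian χ_ij = ∂s_i/∂x_j by central finite differences, and evaluate the factor ½ log det(χᵀχ) that enters H(s). This is a minimal sketch with illustrative sizes and values:

```python
import numpy as np

def response(W, K, x, n_iter=300):
    """Fixed point of s = tanh(Wx + Ks), found by iteration."""
    s = np.zeros(W.shape[0])
    for _ in range(n_iter):
        s = np.tanh(W @ x + K @ s)
    return s

def susceptibility(W, K, x, eps=1e-6):
    """chi_ij = ds_i/dx_j by central finite differences (Eq. (4))."""
    n = len(x)
    cols = []
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        cols.append((response(W, K, x + dx) - response(W, K, x - dx)) / (2 * eps))
    return np.stack(cols, axis=1)                  # M x N Jacobian

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))                        # M=3 outputs, N=2 inputs
K = 0.1 * rng.normal(size=(3, 3))
np.fill_diagonal(K, 0.0)                           # no self-interaction
x = np.array([0.3, -0.4])
chi = susceptibility(W, K, x)
half_logdet = 0.5 * np.log(np.linalg.det(chi.T @ chi))   # enters H(s), Eq. (5)
```

The numerical Jacobian agrees with the closed form χ = (G⁻¹ − K)⁻¹W derived in the next section, which is a useful sanity check when implementing the learning rules.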
The resulting expression for the learning rule for the feedforward weights is: 8E ~W = -'f/- = 'f/ (rT + <]>T 'YxT) 8W where'f/ is the learning rate, the matrix r is defined as r = (XT X)-1 XT <]> and the vector 'Y is given by (9) (0) l' 'Yi = (Xr)ii (g~t)3 . (11) Multiplying the gradient in Eq. (9) by the matrix (WWT) yields an expression analogous to the "natural" gradient learning rule [10]: ~W = 'f/W (I + (XT 'YxT)) . (2) Similarly, the learning rule for the recurrent interactions is 8E ~K = -'f/ 8K = 'f/ ((xrf + <]>T 'YsT) . (13) In the case when there are equal numbers of input and output units, M = N, and there are no recurrent interactions, K = 0, most of the previous expressions simplify. The susceptibility matrix X is diagonal, <]> = G, and r = W- 1 . Substituting back into Eq. (9) for the learning rule for W results in the update rule: ~W = 'f/ [(WT )-1 + (zxT)] , (14) where Zi = gr / g~. Thus, the well-known Infomax leA learning rule is recovered as a special case ofEq. (9) [6]. (a) (b) (c) Figure 2: Results of fitting 3 filters to a 2-dimensional hexagon distribution with 10000 sample points. 4 Examples We now apply the preceding learning algorithms to a simple two-dimensional (N = 2) input example. Each input point is generated by a linear combination of three (twodimensional) unit vectors with angles of 00 , 1200 and 2400 • The coefficients are taken from a uniform distribution on the unit interval. The resulting distribution has the shape of a unit hexagon, which is slightly more dense close to the origin than at the boundaries. Samples of the input distribution are shown in Fig. 2. The second order cross correlations vanish, so that all the structure in the data is described only by higher order correlations. We fix the sigmoidal nonlinearity to be g(x} = tanh(x}. 4.1 Feedforward weights A set of M = 3 overcomplete filters for W are learned by applying the update rule in Eq. 
(9) to random normalized initial conditions while keeping the recurrent interactions fixed at K = 0. The lengths of the rows of W were constrained to be identical so that the filters are projections along certain directions in the two-dimensional space. The algorithm converged after about 20 iterations. Examples of the resulting learned filters are shown by plotting the rows of W as vectors in Fig. 2. As shown in the figure, there are several different local minimum solutions. If the lengths of the rows of W are left unconstrained, slight deviations from these solutions occur, but relative orientation differences of 60° or 120° between the various filters are preserved. 4.2 Recurrent interactions To investigate the effect of recurrent interactions on the representation, we fixed the feedforward weights in W to point in the directions shown in Fig. 2(a), and learned the optimal recurrent interactions K using Eq. (13). Depending upon the length of the rows of W which scaled the input patterns, different optimal values are seen for the recurrent connections. This is shown in Fig. 3 by plotting the value of the cost function against the strength of the uniform recurrent interaction. For small scaled inputs, the optimal recurrent strength is negative, which effectively amplifies the output signals since the 3 signals are negatively correlated. With large scaled inputs, the optimal recurrent strength is positive, which tends to decrease the outputs. Thus, in this example, optimizing the recurrent connections performs gain control on the inputs. Figure 3: Effect of adding recurrent interactions to the representation. The cost function is plotted as a function of the recurrent interaction strength k, for two different input scaling parameters (|W| = 1 and |W| = 5). 5 Discussion The learned feedforward weights are similar to the results of another ICA model that can learn overcomplete representations [11].
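The hexagon input distribution used in the examples above is easy to generate: each point is a linear combination of the three unit vectors at 0°, 120° and 240°, with i.i.d. coefficients uniform on [0, 1]. A sketch (the function name and seed are illustrative):

```python
import numpy as np

def hexagon_samples(n, seed=0):
    """Points c1*v1 + c2*v2 + c3*v3 with unit vectors v_k at 0, 120 and
    240 degrees and coefficients c_k ~ Uniform[0, 1]."""
    rng = np.random.default_rng(seed)
    angles = np.deg2rad([0.0, 120.0, 240.0])
    basis = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # 3 x 2
    coeffs = rng.uniform(0.0, 1.0, size=(n, 3))
    return coeffs @ basis

X = hexagon_samples(10000)
# second-order cross-correlations vanish: the covariance is isotropic,
# (1/12) * sum_k v_k v_k^T = (1/8) * I, so the structure is higher order
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / len(X)
```

Because the covariance carries no directional information, any filters learned from this data (as in Section 4.1) must rely on higher-order statistics, which is exactly what the information-maximization objective exploits.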
Our algorithm, however, does not need to perform approximate inference on a generative model. Instead, it directly maximizes the mutual information between the outputs and inputs of a nonlinear network. Our method also has the advantage of being able to learn recurrent connections that can enhance the representational power of the network. We also note that this approach can be easily generalized to undercomplete representations by simply changing the order of the matrix product in the cost function. However, more work still needs to be done in order to understand technical issues regarding speed of convergence and local minima in larger applications. Possible extensions of this work would be to optimize the nonlinearity that is used, or to adaptively change the number of output units to best match the input distribution. We acknowledge the financial support of Bell Laboratories, Lucent Technologies, and the US-Israel Binational Science Foundation. 6 Appendix 1: Relationship between input and output distributions In general, the relation between the input and output distributions is given by P(s) = ∫ dx P(x) P(s|x). (15) Since we use a deterministic mapping, the conditional distribution of the response given the input is given by P(s|x) = δ(s - g(Wx + Ks)). By adding independent Gaussian noise to the responses of the output units and considering the limit where the variance of the noise goes to zero, we can write this term as P(s|x) = lim_{Δ→0} (2πΔ²)^{-M/2} exp( -‖s - g(Wx + Ks)‖² / 2Δ² ). (16) The output space can be partitioned into those points which belong to the image of the input space, and those which do not. For points outside the image of the input space, P(s) = 0. Consider a point s inside the image. This means that there exists x₀ such that s = g(Wx₀ + Ks). For small Δ, we can expand g(Wx + Ks) - s ≈ χ δx, where δx = x - x₀, so that P(s|x) = lim_{Δ→0} (2πΔ²)^{-M/2} exp( -δxᵀ(χᵀχ)δx / 2Δ² ). (17) In the limit, this expression acts as a delta function in x around x₀, with weight 1/√det(χᵀχ). Using Eq.
(15) we finally get P(s) = P(x) Ω(s) / √det(χᵀχ), (18) where the characteristic function Ω(s) is 1 if s belongs to the image of the input space and is zero otherwise. Note that for the case when χ is a square matrix (M = N), this expression reduces to the relation P(s) = P(x) / |det(χ)|. 7 Appendix 2: Derivation of the learning rules To derive the appropriate learning rules, we need to calculate the derivatives of E with respect to some set of parameters λ. In general, these derivatives are obtained from the expression: ∂E/∂λ = -⟨Tr[ (χᵀχ)⁻¹ χᵀ ∂χ/∂λ ]⟩. (19) 7.1 Feedforward weights In order to derive the learning rule for the weights W, we first calculate ∂χ_ab/∂W_lm = Σ_c ( Φ_ac ∂W_cb/∂W_lm + (∂Φ_ac/∂W_lm) W_cb ) = Φ_al δ_bm + Σ_c (∂Φ_ac/∂W_lm) W_cb. (20) From the definition of Φ, we see that: ∂Φ_ac/∂W_lm = -Σ_{i,j} Φ_ai (∂(G⁻¹)_ij/∂W_lm) Φ_jc, (21) and ∂(G⁻¹)_ij/∂W_lm = -δ_ij (1/(g'_i)²) ∂g'_i/∂W_lm = -δ_ij (g''_i/(g'_i)³) ∂s_i/∂W_lm, (22) where g''_i ≡ g''(Σ_j W_ij x_j + Σ_k K_ik s_k). The derivatives of s also satisfy a recursion relation similar to Eq. (7): ∂s_i/∂W_lm = g'_i ( δ_il x_m + Σ_j K_ij ∂s_j/∂W_lm ), (23) which has the solution: ∂s_i/∂W_lm = Φ_il x_m. (24) Putting all these results together in Eq. (19) and taking the trace, we get the gradient descent rule in Eq. (9). 7.2 Recurrent interactions To derive the learning rules for the recurrent weights K, we first calculate the derivatives of χ_ab with respect to K_lm: ∂χ_ab/∂K_lm = Σ_c (∂Φ_ac/∂K_lm) W_cb = -Σ_{c,i,j} Φ_ai (∂(Φ⁻¹)_ij/∂K_lm) Φ_jc W_cb. (25) From the definition of Φ, we obtain: ∂(Φ⁻¹)_ij/∂K_lm = -δ_ij (1/(g'_i)²) ∂g'_i/∂K_lm - δ_il δ_jm. (26) The derivatives of g' are obtained from the relations ∂g'_i/∂K_lm = g''_i ( δ_il s_m + Σ_j K_ij ∂s_j/∂K_lm ) (27) and ∂s_i/∂K_lm = Φ_il s_m, (28) which results from a recursion relation similar to Eq. (23). Finally, after combining these results and calculating the trace, we get the gradient descent learning rule in Eq. (13). References [1] Jolliffe, I.T. (1986). Principal Component Analysis. New York: Springer-Verlag. [2] Haykin, S (1999). Neural Networks: A Comprehensive Foundation. 2nd ed., Prentice-Hall, Upper Saddle River, NJ.
[3] Jutten, C & Herault, J (1991). Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture. Signal Processing 24, 1-10. [4] Hinton, G & Ghahramani, Z (1997). Generative models for discovering sparse distributed representations. Philosophical Transactions Royal Society B 352, 1177-1190. [5] Pearlmutter, B & Parra, L (1996). A context-sensitive generalization of ICA. In ICONIP'96, 151-157. [6] Bell, AJ & Sejnowski, TJ (1995). An information maximization approach to blind separation and blind deconvolution. Neural Comput. 7, 1129-1159. [7] Barlow, HB (1989). Unsupervised learning. Neural Comput. 1, 295-311. [8] Linsker, R (1992). Local synaptic learning rules suffice to maximize mutual information in a linear network. Neural Comput. 4, 691-702. [9] Parra, L, Deco, G, & Miesbach, S (1996). Statistical independence and novelty detection with information preserving nonlinear maps. Neural Comput. 8, 260-269. [10] Amari, S, Cichocki, A & Yang, H (1996). A new learning algorithm for blind signal separation. Advances in Neural Information Processing Systems 8, 757-763. [11] Lewicki, MS & Sejnowski, TJ (2000). Learning overcomplete representations. Neural Computation 12, 337-365.
Stagewise processing in error-correcting codes and image restoration K. Y. Michael Wong Department of Physics, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong phkywong@ust.hk Hidetoshi Nishimori Department of Physics, Tokyo Institute of Technology, Oh-Okayama, Meguro-ku, Tokyo 152-8551, Japan nishi@stat.phys.titech.ac.jp Abstract We introduce stagewise processing in error-correcting codes and image restoration, by extracting information from the former stage and using it selectively to improve the performance of the latter one. Both mean-field analysis using the cavity method and simulations show that it has the advantage of being robust against uncertainties in hyperparameter estimation. 1 Introduction In error-correcting codes [1] and image restoration [2], the choice of the so-called hyperparameters is an important factor in determining their performances. Hyperparameters refer to the coefficients weighing the biases and variances of the tasks. In error correction, they determine the statistical significance given to the parity-checking terms and the received bits. Similarly in image restoration, they determine the statistical weights given to the prior knowledge and the received data. It was shown, by the use of inequalities, that the choice of the hyperparameters is optimal when there is a match between the source and model priors [3]. Furthermore, from the analytic solution of the infinite-range model and the Monte Carlo simulation of finite-dimensional models, it was shown that an inappropriate choice of the hyperparameters can lead to a rapid degradation of the tasks. Hyperparameter estimation is the subject of many studies such as the "evidence framework" [4]. However, if the prior models the source poorly, no hyperparameters can be reliable [5]. Even if they can be estimated accurately through steady-state statistical measurements, they may fluctuate when interfered by bursty noise sources in communication channels.
Hence it is equally important to devise decoding or restoration procedures which are robust against the uncertainties in hyperparameter estimation. Here we introduce selective freezing to increase the tolerance to uncertainties in hyperparameter estimation. The technique has been studied for pattern reconstruction in neural networks, where it led to an improvement in the retrieval precision, a widening of the basin of attraction, and a boost in the storage capacity [6]. The idea is best illustrated for bits or pixels with binary states ±1, though it can be easily generalized to other cases. In a finite temperature thermodynamic process, the binary variables keep moving under thermal agitation. Some of them have smaller thermal fluctuations than the others, implying that they are more certain to stay in one state than the other. This stability implies that they have a higher probability to stay in the correct state for error-correction or image restoration tasks, even when the hyperparameters are not optimally tuned. It may thus be interesting to separate the thermodynamic process into two stages. In the first stage we select those relatively stable bits or pixels whose time-averaged states have a magnitude exceeding a certain threshold. In the second stage we subsequently fix (or freeze) them in the most probable thermodynamic states. Thus these selectively frozen bits or pixels are able to provide a more robust assistance to the less stable bits or pixels in their search for the most probable states. The two-stage thermodynamic process can be studied analytically in the mean-field model using the cavity method. For the more realistic cases of finite dimensions in image restoration, simulation results illustrate the relevance of the infinite-range model in providing qualitative guidance. Detailed theory of selective freezing is presented in [7]. 
2 Formulation Consider an information source which generates data represented by a set of Ising spins {ξ_i}, where ξ_i = ±1 and i = 1, ..., N. The data is generated according to the source prior P_s({ξ_i}). For error-correcting codes transmitting unbiased messages, all sequences are equally probable and P_s({ξ}) = 2^{-N}. For images with smooth structures, the prior consists of ferromagnetic Boltzmann factors, which increase the tendencies of the neighboring spins to stay at the same spin states, that is, P_s({ξ}) ∝ exp( (β_s/z) Σ_(ij) ξ_i ξ_j ). (1) Here (ij) represents pairs of neighboring spins, and z is the valency of each site. The data is coded by constructing the codewords, which are the products of p spins J⁰_{i1...ip} = ξ_{i1}...ξ_{ip} for appropriately chosen sets of indices {i1, ..., ip}. Each spin may appear in a number of p-spin codewords; the number of times of appearance is called the valency z_p. For conventional image restoration, codewords with only p = 1 are transmitted, corresponding to the pixels in the image. When the signal is transmitted through a noisy channel, the output consists of the sets {J_{i1...ip}} and {τ_i}, which are the corrupted versions of {J⁰_{i1...ip}} and {ξ_i} respectively, and described by the output probability P_out({J}, {τ}|{ξ}) ∝ exp( β_J Σ J_{i1...ip} ξ_{i1}...ξ_{ip} + β_τ Σ_i τ_i ξ_i ). (2) According to Bayesian statistics, the posterior probability that the source sequence is {σ}, given the outputs {J} and {τ}, takes the form P({σ}|{J}, {τ}) ∝ exp( β_J Σ J_{i1...ip} σ_{i1}...σ_{ip} + β_τ Σ_i τ_i σ_i + (β_s/z) Σ_(ij) σ_i σ_j ). (3)
If the receiver at the end of the noisy channel does not have precise information on β_J, β_τ or β_s, and estimates them as β, h and β_m respectively, then the i-th bit of the decoded/restored information is given by sgn⟨σ_i⟩, where ⟨σ_i⟩ = Tr σ_i e^{-H{σ}} / Tr e^{-H{σ}}, (4) and the Hamiltonian is given by H{σ} = -β Σ J_{i1...ip} σ_{i1}...σ_{ip} - h Σ_i τ_i σ_i - (β_m/z) Σ_(ij) σ_i σ_j. (5) For the two-stage process of selective freezing, the spins evolve thermodynamically as prescribed in Eq. (4) during the first stage, and the thermal averages ⟨σ_i⟩ of the spins are monitored. Then we select those spins with |⟨σ_i⟩| exceeding a given threshold θ, and freeze them in the second stage of the thermodynamics. The average of the spin σ_i in the second stage is then given by ⟨σ̃_i⟩ = Tr σ_i Π_j [ Θ(⟨σ_j⟩² - θ²) δ_{σ_j, sgn⟨σ_j⟩} + Θ(θ² - ⟨σ_j⟩²) ] e^{-H̃{σ}} / Tr Π_j [ Θ(⟨σ_j⟩² - θ²) δ_{σ_j, sgn⟨σ_j⟩} + Θ(θ² - ⟨σ_j⟩²) ] e^{-H̃{σ}}, (6) where Θ is the step function, and H̃{σ} is the Hamiltonian for the second stage, which has the same form as Eq. (5) in the first stage. One then regards sgn⟨σ̃_i⟩ as the i-th spin of the decoding/restoration process. The most important quantity in selective freezing is the overlap of the decoded/restored bit sgn⟨σ̃_i⟩ and the original bit ξ_i averaged over the output probability and the spin distribution. This is given by M_sf = Σ_{{ξ}} ∫ Π dJ ∫ Π dτ P_s({ξ}) P_out({J}, {τ}|{ξ}) ξ_i sgn⟨σ̃_i⟩. (7) Following [3], we can prove that selective freezing cannot outperform the single-stage process if the hyperparameters can be estimated precisely. However, the purpose of selective freezing is rather to provide a relatively stable performance when the hyperparameters cannot be estimated precisely. 3 Modeling error-correcting codes Let us now suppose that the output of the transmission channel consists of only the set of p-spin interactions {J_{i1...ip}}. Then h = 0 in the Hamiltonian (5), and we set β_m = 0 for the case that all messages are equally probable.
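As a concrete illustration of the two-stage procedure just defined (separate from the mean-field analysis in the next section), here is a minimal sketch for the p = 1 restoration case on a 1D chain: stage one estimates the thermal averages ⟨σ_i⟩ by Gibbs sampling from the posterior with estimated hyperparameters h and β_m, spins with |⟨σ_i⟩| > θ are then clamped at their most probable state, and stage two reruns the sampling for the remaining spins. All function names and parameter values are illustrative, not the authors' implementation; the valency factor of Eq. (5) is absorbed into beta_m:

```python
import math, random

def gibbs_averages(tau, h, beta_m, frozen=None, n_sweeps=3000, burn=1000, seed=0):
    """Thermal averages <sigma_i> for the 1D restoration Hamiltonian
    H = -h * sum_i tau_i s_i - beta_m * sum_<ij> s_i s_j, with an
    optional dict {site: fixed_value} of frozen spins."""
    rng = random.Random(seed)
    n = len(tau)
    frozen = frozen or {}
    s = [frozen.get(i, 1) for i in range(n)]
    avg = [0.0] * n
    for sweep in range(n_sweeps):
        for i in range(n):
            if i in frozen:
                continue
            # local field: received pixel plus smoothness prior from neighbors
            field = h * tau[i]
            if i > 0:
                field += beta_m * s[i - 1]
            if i < n - 1:
                field += beta_m * s[i + 1]
            s[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-2.0 * field)) else -1
        if sweep >= burn:
            for i in range(n):
                avg[i] += s[i]
    return [a / (n_sweeps - burn) for a in avg]

def selective_freezing(tau, h, beta_m, theta=0.8):
    # stage 1: measure thermal averages with all spins free
    m1 = gibbs_averages(tau, h, beta_m)
    # freeze the stable spins (|<sigma_i>| > theta) at their most probable state
    frozen = {i: (1 if m > 0 else -1) for i, m in enumerate(m1) if abs(m) > theta}
    # stage 2: rerun with the stable spins clamped
    m2 = gibbs_averages(tau, h, beta_m, frozen=frozen, seed=1)
    return [frozen[i] if i in frozen else (1 if m > 0 else -1)
            for i, m in enumerate(m2)]

# a smooth source with one flipped (noisy) pixel in the middle
tau = [1, 1, 1, -1, 1, 1, 1]
restored = selective_freezing(tau, h=0.8, beta_m=1.0)
```

With these illustrative values the stable neighbors, once frozen, help the corrupted middle pixel settle into the state favored by the smoothness prior.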
Analytical solutions are available for the infinite-range model in which the exchange interactions are present for all possible pairs of sites. Consider the noise model in which $J_{i_1\cdots i_p}$ is Gaussian with mean $p!\,j_0\,\xi_{i_1}\cdots\xi_{i_p}/N^{p-1}$ and variance $p!\,J^2/2N^{p-1}$. We can apply a gauge transformation $\sigma_i \to \sigma_i \xi_i$ and $J_{i_1\cdots i_p} \to J_{i_1\cdots i_p}\,\xi_{i_1}\cdots\xi_{i_p}$, and arrive at an equivalent $p$-spin model with a ferromagnetic bias, where the gauged couplings are Gaussian with mean $p!\,j_0/N^{p-1}$ and variance $p!\,J^2/2N^{p-1}$. (8)

The infinite-range model is exactly solvable using the cavity method [8]. The method uses a self-consistency argument to consider what happens when a spin is added or removed from the system. The central quantity in this method is the cavity field, which is the local field of a spin when it is added to the system, assuming that the exchange couplings act only one-way from the system to the new spin (but not from the spin back to the system). Since the exchange couplings feeding the new spin have no correlations with the system, the cavity field becomes a Gaussian variable in the limit of large valency. The thermal average of a spin, say spin 1, is given by

$$\langle\sigma_1\rangle = \tanh \beta h_1, \quad (9)$$

where $h_1$ is the cavity field obeying a Gaussian distribution, whose mean and variance are $p\,j_0\,m^{p-1}$ and $p\,J^2 q^{p-1}/2$ respectively, where $m$ and $q$ are the magnetization and Edwards-Anderson order parameter respectively, given by

$$m = \frac{1}{N}\sum_i \langle\sigma_i\rangle \quad \text{and} \quad q = \frac{1}{N}\sum_i \langle\sigma_i\rangle^2. \quad (10)$$

Applying self-consistently the cavity argument to all terms in Eq. (10), we can obtain self-consistent equations for $m$ and $q$. Now we consider selective freezing.
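The self-consistent equations for $m$ and $q$ can be iterated numerically. The sketch below assumes the standard replica-symmetric form $m = \int Dz\,\tanh\beta(p j_0 m^{p-1} + z\sqrt{p J^2 q^{p-1}/2})$ and $q = \int Dz\,\tanh^2(\cdot)$, with a crude midpoint quadrature for the Gaussian average; the parameter values are illustrative, matching those of Fig. 1(a):

```python
import math

def solve_m_q(p, j0, J, beta, iters=200, n_nodes=41):
    """Fixed-point iteration for the magnetisation m and Edwards-Anderson
    parameter q of the biased p-spin model; the Gaussian cavity-field
    average uses a simple midpoint rule on [-6, 6] (an illustrative
    sketch, not the paper's code)."""
    zs = [-6.0 + 12.0 * (k + 0.5) / n_nodes for k in range(n_nodes)]
    ws = [12.0 / n_nodes * math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
          for z in zs]
    m, q = 0.5, 0.5
    for _ in range(iters):
        mean = p * j0 * m ** (p - 1)                 # cavity-field mean
        std = math.sqrt(p * J * J * q ** (p - 1) / 2)  # cavity-field std
        t = [math.tanh(beta * (mean + std * z)) for z in zs]
        m = sum(w * ti for w, ti in zip(ws, t))
        q = sum(w * ti * ti for w, ti in zip(ws, t))
    return m, q

m, q = solve_m_q(p=3, j0=0.8, J=1.0, beta=2.0)
```

At this low temperature the iteration settles on the ferromagnetic (good-decoding) branch rather than the paramagnetic one.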
If we introduce a freezing threshold $\theta$ so that all spins with $\langle\sigma_i\rangle^2 > \theta^2$ are frozen, then the freezing fraction $f$ is given by

$$f = \frac{1}{N}\sum_i \Theta\big(\langle\sigma_i\rangle^2 - \theta^2\big). \quad (11)$$

The thermal average of a dynamic spin in the second stage is related to the cavity fields in both stages, say, for spin 1,

$$\langle\tilde\sigma_1\rangle = \tanh\beta\Big\{\tilde h_1 + \frac{p(p-1)}{2} J^2 r^{p-2} \chi_{\mathrm{tr}} \tanh\beta h_1\Big\}, \quad (12)$$

where $\tilde h_1$ is the cavity field in the second stage, and $r$ is the order parameter describing the spin correlations of the two thermodynamic stages:

$$r \equiv \frac{1}{N}\sum_i \langle\sigma_i\rangle \Big\{\langle\tilde\sigma_i\rangle\,\Theta\big[\theta^2 - \langle\sigma_i\rangle^2\big] + \mathrm{sgn}\langle\sigma_i\rangle\,\Theta\big[\langle\sigma_i\rangle^2 - \theta^2\big]\Big\}. \quad (13)$$

$\chi_{\mathrm{tr}}$ is the trans-susceptibility which describes the response of a spin in the second stage to variations of the cavity field in the first stage, namely

$$\chi_{\mathrm{tr}} = \frac{1}{N}\sum_i \frac{\partial\langle\tilde\sigma_i\rangle}{\partial h_i}. \quad (14)$$

The cavity field $\tilde h_1$ is a Gaussian variable. Its mean and variance are $p\,j_0\,\tilde m^{p-1}$ and $p\,J^2\tilde q^{p-1}/2$ respectively, where $\tilde m$ and $\tilde q$ are the magnetization and Edwards-Anderson order parameter respectively, given by

$$\tilde m = \frac{1}{N}\sum_i \Big[\Theta(\theta^2 - \langle\sigma_i\rangle^2)\langle\tilde\sigma_i\rangle + \Theta(\langle\sigma_i\rangle^2 - \theta^2)\,\mathrm{sgn}\langle\sigma_i\rangle\Big], \quad (15)$$

$$\tilde q = \frac{1}{N}\sum_i \Big[\Theta(\theta^2 - \langle\sigma_i\rangle^2)\langle\tilde\sigma_i\rangle^2 + \Theta(\langle\sigma_i\rangle^2 - \theta^2)\Big]. \quad (16)$$

Furthermore, the covariance between $h_1$ and $\tilde h_1$ is $p\,J^2 r^{p-1}/2$, where $r$ is given in Eq. (13). Applying self-consistently the same cavity argument to all terms in Eqs. (15), (16), (13) and (14), we arrive at the self-consistent equations for $\tilde m$, $\tilde q$, $r$ and $\chi_{\mathrm{tr}}$. The performance of selective freezing is measured by

$$M_{\mathrm{sf}} = \frac{1}{N}\sum_i \Big[\Theta(\theta^2 - \langle\sigma_i\rangle^2)\,\mathrm{sgn}\langle\tilde\sigma_i\rangle + \Theta(\langle\sigma_i\rangle^2 - \theta^2)\,\mathrm{sgn}\langle\sigma_i\rangle\Big]. \quad (17)$$

Figure 1: The overlap $M_{\mathrm{sf}}$ as a function of the decoding temperature $T$ for various given values of freezing fraction $f$. In this and the following figure, $f = 0$ corresponds to one-stage decoding/restoration. (a) Theoretical results for $p = 3$, $j_0 = 0.8$ and $J = 1$; (b) results of Monte Carlo simulations for $p = 2$ and $j_0 = J = 1$.

In the example in Fig.
1(a), the overlap of the single-stage dynamics reaches its maximum at the Nishimori point $T_N = J^2/2j_0$ as expected. We observe that the tolerance against variations in $T$ is enhanced by selective freezing both above and below the optimal temperature (see especially $f = 0.8$). This shows that the region of advantage for selective freezing is even broader than that discussed in [7], where improvement is only observed above the optimal temperature. The advantages of selective freezing are confirmed by the Monte Carlo simulations shown in Fig. 1(b). For one-stage dynamics, the overlap is maximum at the Nishimori point ($T_N = 0.5$) as expected. However, it deteriorates rather rapidly when the decoding temperature increases. In contrast, selective freezing maintains a more steady performance, especially when $f = 0.9$.

4 Modeling image restoration

In conventional image restoration problems, a given degraded image consists of the set of pixels $\{\tau_i\}$, but not the set of exchange interactions $\{J_{i_1,\ldots,i_p}\}$. In this case, $\beta = 0$ in the Hamiltonian (5). The pixels $\tau_i$ are the degraded versions of the source pixels $\xi_i$, corrupted by noise which, for convenience, is assumed to be Gaussian with mean $a\xi_i$ and variance $\tau^2$. In turn, the source pixels satisfy the prior distribution in Eq. (1) for smooth images. Analysis of the mean-field model with extensive valency shows that selective freezing performs as well as one-stage dynamics, but cannot outperform it. Nevertheless, selective freezing provides a rather stable performance when the hyperparameters cannot be estimated precisely. Hence we model a situation common in modern communication channels carrying multimedia traffic, which are often bursty in nature.
Since burstiness results in intermittent interferences, we consider a distribution of the degraded pixels with two Gaussian components, each with its own characteristics.

Figure 2: (a) The performance of selective freezing with 2 components of Gaussian noise at $\beta_s = 1.05$, $f_1 = 4f_2 = 0.8$, $a_1 = 5a_2 = 1$ and $\tau_1 = \tau_2 = 0.3$. The restoration agent operates at the optimal ratio $\beta_m/h$ which assumes a single noise component with the overall mean 0.84 and variance 0.4024. (b) Results of Monte Carlo simulations for the overlaps of selective freezing compared with that of the one-stage dynamics for two-dimensional images generated at the source prior temperature $T_s = 2.15$.

Suppose the restoration agent operates at the optimal ratio of $\beta_m/h$ which assumes a single noise component. Then there will be a degradation of the quality of the restored images. In the example in Fig. 2(a), the reduction of the overlap $M_{\mathrm{sf}}$ for selective freezing is much more modest than for the one-stage process ($f = 0$). Other cases of interest, in which the restoration agent operates on other imprecise estimations, are discussed in [7]. All confirm the robustness of selective freezing. It is interesting to study the more realistic case of two-dimensional images, since we have so far presented analytical results for the mean-field model only. As confirmed by the results of the Monte Carlo simulations in Fig. 2(b), the overlaps of selective freezing are much steadier than that of the one-stage dynamics when the decoding temperature changes. This steadiness is most remarkable for a freezing fraction of $f = 0.9$.
5 Discussions

We have introduced a multistage technique for error-correcting codes and image restoration, in which the information extracted from the former stage can be used selectively to improve the performance of the latter one. While the overlap $M_{\mathrm{sf}}$ of selective freezing is bounded by the optimal performance of the one-stage dynamics derived in [3], it has the advantage of being tolerant to uncertainties in hyperparameter estimation. This is confirmed by both analytical and simulational results for mean-field and finite-dimensional models. Improvement is observed both above and below the optimal decoding temperature, superseding the observations in [7]. As an example, we have illustrated its advantage of robustness when the noise distribution is composed of more than one Gaussian component, such as in the case of modern communication channels supporting multimedia applications. Selective freezing can be generalized to more than two stages, in which spins that remain relatively stable in one stage are progressively frozen in the following one. It is expected that the performance can be even more robust. On the other hand, we have a remark about the basic assumption of the cavity method, namely that the addition or removal of a spin causes a small change in the system describable by a perturbative approach. In fact, adding or removing a spin may cause the thermal averages of other spins to change from below to above the thresholds $\pm\theta$ (or vice versa). This change, though often small, induces a non-negligible change of the thermal averages from fractional values to the frozen values of $\pm 1$ (or vice versa) in the second stage. The perturbative analysis of these changes is only approximate. The situation is reminiscent of similar instabilities in other disordered systems such as the perceptron, and is equivalent to Almeida-Thouless instabilities in the replica method [9].
A full treatment of the problem would require the introduction of a rough energy landscape [9], or the replica symmetry breaking ansatz in the replica method [8]. Nevertheless, previous experience on disordered systems has shown that the corrections made by a more complete treatment may not be too large in the ordered phase. For example, the simulational results in Fig. 1(b) are close to the corresponding analytical results in [7]. In practical implementations of error-correcting codes, algorithms based on belief-propagation methods are often employed [10]. It has recently been shown that such decoded messages converge to the solutions of the TAP equations in the corresponding thermodynamic system [11]. Again, the performance of these algorithms is sensitive to the estimation of hyperparameters. We propose that the selective freezing procedure has the potential to make these algorithms more robust.

Acknowledgments This work was partially supported by the Research Grant Council of Hong Kong (HKUST6157/99P).

References
[1] R. J. McEliece, The Theory of Information and Coding, Encyclopedia of Mathematics and its Applications (Addison-Wesley, Reading, MA 1977).
[2] S. Geman and D. Geman, IEEE Trans. PAMI 6, 721 (1984).
[3] H. Nishimori and K. Y. M. Wong, Phys. Rev. E 60, 132 (1999).
[4] D. J. C. MacKay, Neural Computation 4, 415 (1992).
[5] J. M. Pryce and A. D. Bruce, J. Phys. A 28, 511 (1995).
[6] K. Y. M. Wong, Europhys. Lett. 36, 631 (1996).
[7] K. Y. M. Wong and H. Nishimori, submitted to Phys. Rev. E (2000).
[8] M. Mezard, G. Parisi, and M. A. Virasoro, Spin Glass Theory and Beyond (World Scientific, Singapore 1987).
[9] K. Y. M. Wong, Advances in Neural Information Processing Systems 9, 302 (1997).
[10] B. J. Frey, Graphical Models for Machine Learning and Digital Communication (MIT Press, 1998).
[11] Y. Kabashima and D. Saad, Europhys. Lett. 44, 668 (1998).
|
2000
|
120
|
1,777
|
The Kernel Gibbs Sampler

Thore Graepel, Statistics Research Group, Computer Science Department, Technical University of Berlin, Berlin, Germany, guru@cs.tu-berlin.de
Ralf Herbrich, Statistics Research Group, Computer Science Department, Technical University of Berlin, Berlin, Germany, ralfh@cs.tu-berlin.de

Abstract We present an algorithm that samples the hypothesis space of kernel classifiers. A uniform prior over normalised weight vectors, combined with a likelihood based on a model of label noise, leads to a piecewise constant posterior that can be sampled by the kernel Gibbs sampler (KGS). The KGS is a Markov Chain Monte Carlo method that chooses a random direction in parameter space and samples from the resulting piecewise constant density along the line chosen. The KGS can be used as an analytical tool for the exploration of Bayesian transduction, Bayes point machines, active learning, and evidence-based model selection on small data sets that are contaminated with label noise. For a simple toy example we demonstrate experimentally how a Bayes point machine based on the KGS outperforms an SVM that is incapable of taking into account label noise.

1 Introduction

Two great ideas have dominated recent developments in machine learning: the application of kernel methods and the popularisation of Bayesian inference. Focusing on the task of classification, various connections between the two areas exist: kernels have long been a part of Bayesian inference in the disguise of covariance functions that characterise priors over functions [9]. Also, attempts have been made to re-derive the support vector machine (SVM) [1], possibly the most prominent representative of kernel methods, as a maximum a-posteriori estimator (MAP) in a Bayesian framework [8]. While this work suggests good strategies for evidence-based model selection, the MAP estimator is not truly Bayesian in spirit because it is not based on the concept of model averaging, which is crucial to Bayesian reasoning.
As a consequence, the MAP estimator is generally not as robust as a real Bayesian estimator. While this drawback is inconsequential in a noise-free setting or in a situation dominated by feature noise, it may have severe consequences when the data is contaminated by label noise that may lead to a multi-modal posterior distribution. In order to make use of the full Bayesian posterior distribution it is necessary to generate samples from this distribution. This contribution is concerned with the generation of samples from the Bayesian posterior over the hypothesis space of linear classifiers in arbitrary kernel spaces in the case of label noise. In contrast to [8] we consider normalised weight vectors, $\|\mathbf{w}\|_{\mathcal{K}} = 1$, because the classification given by a linear classifier only depends on the spatial direction of the weight vector $\mathbf{w}$ and not on its length. This point of view leads to a hypothesis space isomorphic to the surface of an $n$-dimensional sphere which, in the absence of prior information, is naturally equipped with a uniform prior over directions. Incorporating the label noise model into the likelihood then leads to a piecewise constant posterior on the surface of the sphere. The kernel Gibbs sampler (KGS) is designed to sample from this type of posterior by iteratively choosing a random direction and sampling on the resulting piecewise constant one-dimensional density in the fashion of a hit-and-run algorithm [7]. The resulting samples can be used in various ways: i) In Bayesian transduction [3] the decision about the labels of new test points can be inferred by a majority decision of the sampled classifiers. ii) The posterior mean, the Bayes point machine (BPM) solution [4], can be calculated as an approximation to transduction. iii) The binary entropy of candidate training points can be calculated to determine their information content for active learning [2]. iv) The model evidence [5] can be evaluated for the purpose of model selection.
We would like to point out, however, that the KGS is limited in practice to a sample size of $m \approx 100$ and should thus be thought of as an analytical tool to advance our understanding of the interaction of kernel methods and Bayesian reasoning. The paper is structured as follows: in Section 2 we introduce the learning scenario and explain our Bayesian approach to linear classifiers in kernel spaces. The kernel Gibbs sampler is explained in detail in Section 3. Different applications of the KGS are discussed in Section 4, followed by an experimental demonstration of the BPM solution based on using the KGS under label noise conditions. We denote $n$-tuples by italic bold letters (e.g. $\boldsymbol{x}$), vectors by roman bold letters (e.g. $\mathbf{x}$), random variables by sans serif font (e.g. $\mathsf{X}$), and vector spaces by calligraphic capitalised letters (e.g. $\mathcal{X}$). The symbols $\mathbf{P}$, $\mathbf{E}$ and $\mathbf{I}$ denote a probability measure, the expectation of a random variable and the indicator function, respectively.

2 Bayesian Learning in Kernel Spaces

We consider learning given a sequence $\boldsymbol{x} = (x_1,\ldots,x_m) \in \mathcal{X}^m$ and $\boldsymbol{y} = (y_1,\ldots,y_m) \in \{-1,+1\}^m$ drawn iid from a fixed distribution $\mathbf{P}_{XY} = \mathbf{P}_Z$ over the space $\mathcal{X} \times \{-1,+1\} = \mathcal{Z}$ of input-output pairs. The hypotheses are linear classifiers $x \mapsto \langle \mathbf{w}, \phi(x)\rangle_{\mathcal{K}} =: \langle \mathbf{w}, \mathbf{x}\rangle_{\mathcal{K}}$ in some fixed feature space $\mathcal{K} \subseteq \ell_2^n$, where we assume that a mapping $\phi: \mathcal{X} \to \mathcal{K}$ is chosen a priori.¹ Since all we need for learning is the real-valued output $\langle \mathbf{w}, \mathbf{x}_i\rangle_{\mathcal{K}}$ of the classifier $\mathbf{w}$ at the $m$ training points $\mathbf{x}_1,\ldots,\mathbf{x}_m$, we can assume that $\mathbf{w}$ can be expressed as (see [9])

$$\mathbf{w} = \sum_{i=1}^m \alpha_i \mathbf{x}_i. \quad (1)$$

Thus, it suffices to learn the $m$ expansion coefficients $\boldsymbol{\alpha} \in \mathbb{R}^m$ rather than the $n$ components of $\mathbf{w} \in \mathcal{K}$. This is particularly useful if the dimensionality $\dim(\mathcal{K}) = n$ of the feature space $\mathcal{K}$ is much greater (or possibly infinite) than the number $m$ of training points.
From (1) we see that all that is needed is the inner product function $k(x,x') = \langle\phi(x),\phi(x')\rangle_{\mathcal{K}}$, also known as the kernel (see [9] for a detailed introduction to the theory of kernels).

¹For the sake of convenience, we sometimes abbreviate $\phi(x)$ by $\mathbf{x}$. This, however, should not be confused with the $n$-tuple $\boldsymbol{x}$ denoting the training objects.

Figure 1: Illustration of the (log) posterior distribution on the surface of a 3-dimensional sphere $\{\mathbf{w} \in \mathbb{R}^3 \mid \|\mathbf{w}\|_{\mathcal{K}} = 1\}$ resulting from a label noise model with a label flip rate of $q = 0.20$: (a) $m = 10$, (b) $m = 1000$. The log posterior is plotted over the longitude and latitude, and for small sample size it is multi-modal due to the label noise. The classifier $\mathbf{w}^*$ labelling the data (before label noise) was at $(\pi/2, \pi)$.

In a Bayesian spirit we consider a prior $\mathbf{P}_W$ over possible weight vectors $\mathbf{w} \in \mathcal{W}$ of unit length, i.e. $\mathcal{W} = \{\mathbf{v} \in \mathcal{K} \mid \|\mathbf{v}\|_{\mathcal{K}} = 1\}$. Given an iid training set $z = (\boldsymbol{x},\boldsymbol{y})$ and a likelihood model $\mathbf{P}_{Y|X=x,W=\mathbf{w}}$ we obtain the posterior $\mathbf{P}_{W|Z^m=z}$ using Bayes' formula

$$\mathbf{P}_{W|Z^m=z}(\mathbf{w}) = \frac{\mathbf{P}_{Y^m=\boldsymbol{y}|X^m=\boldsymbol{x},W=\mathbf{w}}(\boldsymbol{y})\,\mathbf{P}_W(\mathbf{w})}{\mathbf{E}_W\big[\mathbf{P}_{Y^m=\boldsymbol{y}|X^m=\boldsymbol{x},W=\mathbf{w}}(\boldsymbol{y})\big]}. \quad (2)$$

By the iid assumption and the independence of the denominator from $\mathbf{w}$ we obtain

$$\mathbf{P}_{W|Z^m=z}(\mathbf{w}) \propto \mathbf{P}_W(\mathbf{w}) \prod_{i=1}^m \mathbf{P}_{Y|X=x_i,W=\mathbf{w}}(y_i) =: \mathbf{P}_W(\mathbf{w})\,\mathcal{L}[\mathbf{w},z].$$

In the absence of specific prior knowledge, symmetry suggests to take $\mathbf{P}_W$ uniform on $\mathcal{W}$. Furthermore, we choose the likelihood model

$$\mathbf{P}_{Y|X=x,W=\mathbf{w}}(y) = \begin{cases} q & \text{if } y\,\langle\mathbf{w},\mathbf{x}\rangle_{\mathcal{K}} \le 0, \\ 1-q & \text{otherwise,} \end{cases}$$

where $q$ specifies the assumed level of label noise. Please note the difference to the commonly assumed model of feature noise, which essentially assumes noise in the (mapped) input vectors $\mathbf{x}$ instead of the labels $y$ and constitutes the basis of the soft-margin SVM [1]. Thus the likelihood $\mathcal{L}[\mathbf{w},z]$ of the weight vector $\mathbf{w}$ is given by

$$\mathcal{L}[\mathbf{w},z] = q^{m\,R_{\mathrm{emp}}[\mathbf{w},z]}\,(1-q)^{m(1-R_{\mathrm{emp}}[\mathbf{w},z])},$$

where the training error $R_{\mathrm{emp}}[\mathbf{w},z]$ is defined as

$$R_{\mathrm{emp}}[\mathbf{w},z] = \frac{1}{m}\sum_{i=1}^m \mathbf{I}_{y_i\langle\mathbf{w},\mathbf{x}_i\rangle_{\mathcal{K}} \le 0}. \quad (3)$$
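Since $\langle\mathbf{w},\mathbf{x}_i\rangle_{\mathcal{K}} = (\mathbf{G}\boldsymbol{\alpha})_i$ for $\mathbf{w} = \sum_j \alpha_j \mathbf{x}_j$, both the training error and the likelihood can be computed from the Gram matrix alone; a small sketch (function names are mine, not the paper's):

```python
def emp_risk(alpha, G, y):
    """Training error R_emp[w,z] of w = sum_j alpha_j x_j, computed purely
    from the Gram matrix G via <w, x_i> = (G alpha)_i (the kernel trick)."""
    m = len(y)
    errs = 0
    for i in range(m):
        out = sum(alpha[j] * G[i][j] for j in range(m))  # <w, x_i>_K
        if y[i] * out <= 0:
            errs += 1
    return errs / m

def likelihood(alpha, G, y, q):
    """Piecewise constant label-noise likelihood L[w,z] = q^{mR} (1-q)^{m(1-R)}."""
    R = emp_risk(alpha, G, y)
    m = len(y)
    return q ** (m * R) * (1 - q) ** (m * (1 - R))
```

For a correctly classifying weight vector this reduces to $(1-q)^m$, and each additional training error multiplies the likelihood by $q/(1-q)$.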
Two data points $y_1\mathbf{x}_1$ and $y_2\mathbf{x}_2$ divide the space of normalised weight vectors $\mathcal{W}$ into four equivalence classes with different posterior density, indicated by the gray shading. In each iteration, starting from $\mathbf{w}_{j-1}$, a random direction $\mathbf{v}$ with $\mathbf{v} \perp \mathbf{w}_{j-1}$ is generated. We sample from the piecewise constant density on the great circle determined by the plane defined by $\mathbf{w}_{j-1}$ and $\mathbf{v}$. In order to obtain $\zeta^*$, we calculate the $2m$ angles $\zeta_i$ where the training samples intersect with the circle and keep track of the number $m \cdot e_i$ of training errors for each region $i$.

Figure 2: Schematic view of the kernel Gibbs sampling procedure.

Clearly, the posterior $\mathbf{P}_{W|Z^m=z}$ is piecewise constant for all $\mathbf{w}$ with equal training error $R_{\mathrm{emp}}[\mathbf{w},z]$ (see Figure 1).

3 The Kernel Gibbs Sampler

In order to sample from $\mathbf{P}_{W|Z^m=z}$ on $\mathcal{W}$ we suggest a Markov Chain sampling method. For a given value of $q$, the sampling scheme can be decomposed into the following steps (see Figure 2):

1. Choose an arbitrary starting point $\mathbf{w}_0 \in \mathcal{W}$ and set $j = 0$.
2. Choose a direction $\mathbf{v} \in \mathcal{W}$ in the tangent space $\{\mathbf{v} \in \mathcal{W} \mid \langle\mathbf{v},\mathbf{w}_j\rangle_{\mathcal{K}} = 0\}$.
3. Calculate all $m$ hit points $\mathbf{b}_i \in \mathcal{W}$ from $\mathbf{w}_j$ in direction $\mathbf{v}$ with the hyperplane having normal $y_i\mathbf{x}_i$. Before normalisation, this is achieved by [4]
$$\mathbf{b}_i = \mathbf{w}_j - \frac{\langle\mathbf{w}_j,\mathbf{x}_i\rangle_{\mathcal{K}}}{\langle\mathbf{v},\mathbf{x}_i\rangle_{\mathcal{K}}}\,\mathbf{v}.$$
4. Calculate the $2m$ angular distances $\zeta_i$ from the current position $\mathbf{w}_j$:
$$\forall i \in \{1,\ldots,m\}: \quad \zeta_{2i-1} = -\mathrm{sign}\big(\langle\mathbf{v},\mathbf{b}_i\rangle_{\mathcal{K}}\big)\arccos\big(\langle\mathbf{w}_j,\mathbf{b}_i\rangle_{\mathcal{K}}\big),$$
$$\forall i \in \{1,\ldots,m\}: \quad \zeta_{2i} = (\zeta_{2i-1} + \pi) \bmod 2\pi.$$
5. Sort the $\zeta_i$ in ascending order, i.e. find $\Pi: \{1,\ldots,2m\} \to \{1,\ldots,2m\}$ such that $\forall i \in \{2,\ldots,2m\}: \zeta_{\Pi(i-1)} \le \zeta_{\Pi(i)}$.
6. Calculate the training errors $e_i$ of the $2m$ intervals $[\zeta_{\Pi(i)}, \zeta_{\Pi(i+1)}]$ by evaluating
$$e_i = R_{\mathrm{emp}}\Big[\cos\Big(\tfrac{\zeta_{\Pi(i+1)} + \zeta_{\Pi(i)}}{2}\Big)\mathbf{w}_j - \sin\Big(\tfrac{\zeta_{\Pi(i+1)} + \zeta_{\Pi(i)}}{2}\Big)\mathbf{v},\, z\Big].$$
Here, we used the shorthand notation $\zeta_{\Pi(2m+1)} = \zeta_{\Pi(1)}$.
7. Sample an angle $\zeta^*$ using the piecewise uniform distribution and (3).
8. Calculate a new sample $\mathbf{w}_{j+1}$ by $\mathbf{w}_{j+1} = \cos(\zeta^*)\,\mathbf{w}_j - \sin(\zeta^*)\,\mathbf{v}$.
9. Set $j \leftarrow j + 1$ and go back to step 2.
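The steps above can be condensed into code. The sketch below works in the primal in $\mathbb{R}^n$ for readability (the kernelised version would replace every inner product with Gram-matrix expressions); sampling within the chosen arc is uniform because the posterior is constant there. All names and the tiny dataset are illustrative:

```python
import math, random

def remp(w, X, y):
    """Training error of w on (X, y)."""
    m = len(X)
    errs = sum(1 for xi, yi in zip(X, y)
               if yi * sum(wj * xij for wj, xij in zip(w, xi)) <= 0)
    return errs / m

def kgs_step(w, X, y, q, rng):
    """One hit-and-run step of the kernel Gibbs sampler (steps 2-8 above)
    for a unit-norm w; a simplified primal sketch, not the paper's code."""
    n, m = len(w), len(X)
    # step 2: random unit direction v tangent to the sphere at w
    v = [rng.gauss(0, 1) for _ in range(n)]
    d = sum(vi * wi for vi, wi in zip(v, w))
    v = [vi - d * wi for vi, wi in zip(v, w)]
    nv = math.sqrt(sum(vi * vi for vi in v))
    v = [vi / nv for vi in v]
    # steps 3-5: angles where w(z) = cos(z) w - sin(z) v crosses <., x_i> = 0
    zetas = []
    for xi in X:
        a = sum(wj * xij for wj, xij in zip(w, xi))
        b = sum(vj * xij for vj, xij in zip(v, xi))
        z = math.atan2(a, b)              # cos(z) a - sin(z) b = 0 boundary
        zetas += [z % (2 * math.pi), (z + math.pi) % (2 * math.pi)]
    zetas.sort()
    # step 6: error is constant on each arc; evaluate at arc midpoints
    weights = []
    for i in range(2 * m):
        lo, hi = zetas[i], zetas[(i + 1) % (2 * m)]
        length = (hi - lo) % (2 * math.pi)
        mid = lo + length / 2
        wz = [math.cos(mid) * wj - math.sin(mid) * vj for wj, vj in zip(w, v)]
        R = remp(wz, X, y)
        weights.append(length * q ** (m * R) * (1 - q) ** (m * (1 - R)))
    # step 7: pick an arc by posterior mass, then uniformly within it
    u = rng.random() * sum(weights)
    idx = 2 * m - 1
    for i, wt in enumerate(weights):
        if u <= wt:
            idx = i
            break
        u -= wt
    lo = zetas[idx]
    length = (zetas[(idx + 1) % (2 * m)] - lo) % (2 * math.pi)
    zstar = lo + rng.random() * length
    # step 8: new unit-norm sample on the great circle
    return [math.cos(zstar) * wj - math.sin(zstar) * vj for wj, vj in zip(w, v)]
```

Iterating `kgs_step` yields a Markov chain whose states stay on the unit sphere and whose stationary distribution is the piecewise constant posterior.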
Since the algorithm is carried out in feature space $\mathcal{K}$ we can use

$$\mathbf{w} = \sum_{i=1}^m \alpha_i\mathbf{x}_i, \qquad \mathbf{v} = \sum_{i=1}^m \nu_i\mathbf{x}_i, \qquad \mathbf{b} = \sum_{i=1}^m \beta_i\mathbf{x}_i.$$

For the inner products and norms it follows that, e.g.,

$$\langle\mathbf{w},\mathbf{v}\rangle_{\mathcal{K}} = \boldsymbol{\alpha}'\mathbf{G}\boldsymbol{\nu}, \qquad \|\mathbf{w}\|_{\mathcal{K}}^2 = \boldsymbol{\alpha}'\mathbf{G}\boldsymbol{\alpha},$$

where the $m \times m$ matrix $\mathbf{G}$ is known as the Gram matrix and is given by $G_{ij} = \langle\mathbf{x}_i,\mathbf{x}_j\rangle_{\mathcal{K}} = k(x_i,x_j)$. As a consequence, the above algorithm can be implemented in arbitrary kernel spaces, only making use of $k$.

4 Applications of the Kernel Gibbs Sampler

The kernel Gibbs sampler provides samples from the full posterior distribution over the hypothesis space of linear classifiers in kernel space for the case of label noise. These samples can be used for various tasks related to learning. In the following we will present a selection of these tasks.

Bayesian Transduction Given a sample from the posterior distribution over hypotheses, a good strategy for prediction is to let the sampled classifiers vote on each new test data point. This mode of prediction is closest to the Bayesian spirit and has been shown for the zero-noise case to yield excellent generalisation performance [3]. Also the fraction of votes for the majority decision is an excellent indicator for the reliability of the final estimate: rejection of those test points with the closest decision results in a great reduction of the generalisation error on the remaining test points. Given the posterior $\mathbf{P}_{W|Z^m=z}$, the transductive decision is

$$BT_z(x) = \mathrm{sign}\Big(\mathbf{E}_{W|Z^m=z}\big[\mathrm{sign}\big(\langle\mathbf{W},\mathbf{x}\rangle_{\mathcal{K}}\big)\big]\Big). \quad (4)$$

In practice, this estimator is approximated by replacing the expectation $\mathbf{E}_{W|Z^m=z}$ by a sum over the sampled weight vectors $\mathbf{w}_j$.

Bayes Point Machines For classification, Bayesian Transduction requires the whole collection of sampled weight vectors $\mathbf{w}_j$ in memory. Since this may be impractical for large data sets we would like to derive a single classifier $\mathbf{w}$ from the Bayesian posterior.
An excellent approximation of the transductive decision $BT_z(x)$ by a single classifier is obtained by exchanging the expectation with the inner sign-function in (4). Then the classifier $h_{\mathrm{bp}}$ is given by

$$h_{\mathrm{bp}}(x) = \mathrm{sign}\big(\langle\mathbf{w}_{\mathrm{bp}},\mathbf{x}\rangle_{\mathcal{K}}\big), \qquad \mathbf{w}_{\mathrm{bp}} = \mathbf{E}_{W|Z^m=z}\big[\mathbf{W}\big], \quad (5)$$

where the classifier $\mathbf{w}_{\mathrm{bp}}$ is referred to as the Bayes point and has been shown to yield generalisation performance superior to the well-known support vector solution $\mathbf{w}_{\mathrm{SVM}}$, which in turn can be looked upon as an approximation to $\mathbf{w}_{\mathrm{bp}}$ in the noise-free case [4]. Again, $\mathbf{w}_{\mathrm{bp}}$ is estimated by replacing the expectation by the mean over samples $\mathbf{w}_j$. Note that there exists no SVM equivalent $\mathbf{w}_{\mathrm{SVM}}$ to the Bayes point $\mathbf{w}_{\mathrm{bp}}$ in the case of label noise, a fact to be elaborated on in the experimental part in Section 5.

Figure 3: A set of 50 samples $\mathbf{w}_j$ of the posterior $\mathbf{P}_{W|Z^m=z}$ for various noise levels ($q = 0.0$, $q = 0.1$, $q = 0.2$). Shown are the resulting decision boundaries in data space $\mathcal{X}$.

Active Learning The Bayesian posterior can also be employed to determine the usefulness of candidate training points, a task that can be considered as a dual counterpart to Bayesian Transduction. This is particularly useful when the label $y$ of a training point $x$ is more expensive to obtain than the training point $x$ itself. It was shown in the context of "Query by Committee" [2] that the binary entropy

$$S(x,z) = -p^+ \log_2 p^+ - p^- \log_2 p^-, \qquad p^{\pm} = \mathbf{P}_{W|Z^m=z}\big(\pm\langle\mathbf{W},\mathbf{x}\rangle_{\mathcal{K}} > 0\big),$$

is an indicator of the information content of a data point $x$ with regard to the learning task. Samples $\mathbf{w}_j$ from the Bayesian posterior $\mathbf{P}_{W|Z^m=z}$ make it possible to estimate $S$ for a given candidate training point $x$ and the current training set $z$, to decide on the basis of $S$ if it is worthwhile to query the corresponding label $y$.

Evidence Estimation for Model Selection Bayesian model selection is often based on a quantity called the evidence [5] of the model (given by the denominator of (2)). In the PAC-Bayesian framework this quantity has been demonstrated to be responsible for the generalisation performance of a model [6].
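Given KGS samples, both the Bayes point of Eq. (5) and the active-learning entropy $S(x,z)$ reduce to simple averages over the sample set; a sketch (function names are mine, not the paper's):

```python
import math

def bayes_point(samples):
    """Posterior-mean classifier w_bp (Eq. (5)) estimated from samples w_j."""
    n = len(samples[0])
    return [sum(w[i] for w in samples) / len(samples) for i in range(n)]

def binary_entropy(samples, x):
    """Information content S(x, z) of a candidate point x, estimated from
    posterior samples; points with S near 1 are worth querying a label for."""
    pos = sum(1 for w in samples
              if sum(wi * xi for wi, xi in zip(w, x)) > 0)
    p = pos / len(samples)
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)
```

Points on which the sampled classifiers disagree maximally (votes split 50/50) get the maximal entropy of one bit.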
It turns out that in the zero-noise case the margin (the quantity maximised by the SVM) is a measure of the evidence of the model used [4]. In the case of label noise the KGS serves to estimate this quantity.

5 Experiments

In a first experiment we used a surrogate dataset of $m = 76$ data points $x$ in $\mathcal{X} = \mathbb{R}^2$ and the kernel $k(x,x') = \exp(-\tfrac{1}{2}\|x - x'\|_2^2)$. Using the KGS we sampled 50 different classifiers with weight vectors $\mathbf{w}_j$ for various noise levels $q$ and plotted the resulting decision boundaries $\{x \in \mathbb{R}^2 \mid \langle\mathbf{w}_j,\mathbf{x}\rangle_{\mathcal{K}} = 0\}$ in Figure 3 (circles and crosses depict different classes). As can be seen from these plots, increasing the noise level $q$ leads to more diverse classifiers on the training set $z$. In a second experiment we investigated the generalisation performance of the Bayes point machine (see (5)) in the case of label noise. In $\mathbb{R}^3$ we generated 100 random training and test sets of size $m_{\mathrm{train}} = 100$ and $m_{\mathrm{test}} = 1000$, respectively. For each normalised point $\mathbf{x} \in \mathbb{R}^3$ the longitude and latitude were sampled from a Beta(5,5) and Beta(0.1,0.1) distribution, respectively. The classes $y$ were obtained by randomly flipping the classes assigned by the classifier $\mathbf{w}^*$ at $(\pi/2,\pi)$ (see also Figure 1) with a true label flip rate of $q^* = 5\%$.

Figure 4: Comparison of BPMs and SVMs on data contaminated by label noise. Generalisation errors of BPMs (circled error-bars) and soft-margin SVMs (triangled error-bars) vs. assumed noise level $q$ and margin slack penalisation $\lambda$, respectively. The dataset consisted of $m = 100$ observations with a label noise of 5% (dotted line) and we used $k(x,x') = \langle x,x'\rangle_{\mathcal{X}} + \lambda\cdot\mathbf{I}_{x=x'}$. Note that the abscissa is jointly used for $q$ and $\lambda$.

In Figure 4 we plotted the estimated generalisation error for a BPM (trained using 100 samples $\mathbf{w}_j$ from the KGS) and a quadratic soft-margin SVM at different label noise levels $q$ and margin slack penalisation $\lambda$, respectively.
Clearly, the BPM with the correct noise model outperformed the SVM irrespective of the chosen level of regularisation. Interestingly, the BPM appears to be quite "robust" w.r.t. the choice of the label noise parameter $q$.

6 Conclusion and Future Research

The kernel Gibbs sampler provides an analytical tool for the exploration of various Bayesian aspects of learning in kernel spaces. It provides a well-founded way of dealing with label noise but suffers from its computational complexity, which so far makes it inapplicable to large scale applications. Therefore it will be an interesting topic for future research to invent new sampling schemes that may be able to trade accuracy for speed and would thus be applicable to large data sets.

Acknowledgements This work was partially done while RH and TG were visiting Robert C. Williamson at the ANU Canberra. Thanks, Bob, for your great hospitality!

References
[1] C. Cortes and V. Vapnik. Support Vector Networks. Machine Learning, 20:273-297, 1995.
[2] Y. Freund, H. S. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by committee algorithm. Machine Learning, 28:133-168, 1997.
[3] T. Graepel, R. Herbrich, and K. Obermayer. Bayesian Transduction. In Advances in Neural Information Processing Systems 12, pages 456-462, 2000.
[4] R. Herbrich, T. Graepel, and C. Campbell. Bayesian learning in reproducing kernel Hilbert spaces. Technical report, Technical University of Berlin, 1999. TR 99-11.
[5] D. MacKay. The evidence framework applied to classification networks. Neural Computation, 4(5):720-736, 1992.
[6] D. A. McAllester. Some PAC Bayesian theorems. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 230-234, Madison, Wisconsin, 1998.
[7] R. M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical report, Dept. of Computer Science, University of Toronto, 1993. CRG-TR-93-1.
[8] P. Sollich. Probabilistic methods for Support Vector Machines.
In Advances in Neural Information Processing Systems 12, pages 349-355, San Mateo, CA, 2000. Morgan Kaufmann.
[9] G. Wahba. Support Vector Machines, Reproducing Kernel Hilbert Spaces and the randomized GACV. Technical report, Department of Statistics, University of Wisconsin, Madison, 1997. TR-NO-984.
|
2000
|
121
|
1,778
|
Learning Sparse Image Codes using a Wavelet Pyramid Architecture

Bruno A. Olshausen, Department of Psychology and Center for Neuroscience, UC Davis, 1544 Newton Ct., Davis, CA 95616, baolshausen@ucdavis.edu
Phil Sallee, Department of Computer Science, UC Davis, Davis, CA 95616, sallee@cs.ucdavis.edu
Michael S. Lewicki, Department of Computer Science and Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213, lewicki@cnbc.cmu.edu

Abstract We show how a wavelet basis may be adapted to best represent natural images in terms of sparse coefficients. The wavelet basis, which may be either complete or overcomplete, is specified by a small number of spatial functions which are repeated across space and combined in a recursive fashion so as to be self-similar across scale. These functions are adapted to minimize the estimated code length under a model that assumes images are composed of a linear superposition of sparse, independent components. When adapted to natural images, the wavelet bases take on different orientations and they evenly tile the orientation domain, in stark contrast to the standard, non-oriented wavelet bases used in image compression. When the basis set is allowed to be overcomplete, it also yields higher coding efficiency than standard wavelet bases.

1 Introduction

The general problem we address here is that of learning efficient codes for representing natural images. Our previous work in this area has focussed on learning basis functions that represent images in terms of sparse, independent components [1, 2].
This is done within the context of a linear generative model for images, in which an image $I(x,y)$ is described in terms of a linear superposition of basis functions $b_i(x,y)$ with amplitudes $a_i$, plus noise $\nu(x,y)$:

$$I(x,y) = \sum_i a_i\, b_i(x,y) + \nu(x,y). \quad (1)$$

A sparse, factorial prior is imposed upon the coefficients $a_i$, and the basis functions are adapted so as to maximize the average log-probability of images under the model (which is equivalent to minimizing the model's estimate of the code length of the images). When the model is trained on an ensemble of whitened natural images, the basis functions converge to a set of spatially localized, oriented, and bandpass functions that tile the joint space of position and spatial-frequency in a manner similar to a wavelet basis. Similar results have been achieved using other forms of independent components analysis [3, 4]. One of the disadvantages of this approach, from an image coding perspective, is that it may only be applied to small sub-images (e.g., 12 x 12 pixels) extracted from a larger image. Thus, if an image were to be coded using this method, it would need to be blocked and would thus likely introduce blocking artifacts as the result of quantization or sparsification of the coefficients. In addition, the model is unable to capture spatial structure in the images that is larger than the image block, and scaling up the algorithm to significantly larger blocks is computationally intractable. The solution to these problems that we propose here is to assume translation- and scale-invariance among the basis functions, as in a wavelet pyramid architecture. That is, if a basis function is learned at one position and scale, then it is assumed to be repeated at all positions (spaced apart by two positions horizontally and vertically) and scales (in octave increments).
Thus, the entire set of basis functions for tiling a large image may be learned by adapting only a handful of parameters, i.e., the wavelet filters and the scaling function that is used to expand them across scale. We show here that when a wavelet image model is adapted to natural images to yield coefficients that are sparse and as statistically independent as possible, the wavelet functions converge to a set of oriented functions, and the scaling function converges to a circularly symmetric lowpass filter appropriate for generating self-similarity across scale. Moreover, the resulting coefficients achieve higher coding efficiency (higher SNR for a fixed bit-rate) than traditional wavelet bases, which are typically designed "by hand" according to certain mathematical desiderata [5].

2 Wavelet image model

The wavelet image model is specified by a relatively small number of parameters, consisting of a set of wavelet functions $\psi_i(x,y)$, $i = 1..M$, and a scaling function $\phi(x,y)$. An image is generated by upsampling and convolving the coefficients at a given band $i$ with $\psi_i$ (or with $\phi$ at the lowest-resolution level of the pyramid), followed by successive upsampling and convolution with $\phi$, depending on their level within the pyramid. The wavelet image model for an $L$ level pyramid is specified mathematically as

$$I(x,y) = g(x,y,0) + \nu(x,y) \quad (2)$$

$$g(x,y,l) = \begin{cases} a^{L-1}(x,y) & l = L-1 \\ I_l(x,y) & l < L-1 \end{cases} \quad (3)$$

$$I_l(x,y) = \big[g(x,y,l+1)\!\uparrow\!2\big] * \phi(x,y) + \sum_{i=1}^M \big[a_i^l(x,y)\!\uparrow\!2\big] * \psi_i(x,y) \quad (4)$$

where the coefficients $a$ are indexed by their position $x, y$, band $i$, and level of resolution $l$ within the pyramid ($l = 0$ is the highest resolution level).

Figure 1: Wavelet image model. Shown are the coefficients of the first three levels of a pyramid ($l = 0,1,2$), with each level split into a number of different bands ($i = 1 \ldots M$). The highest level ($l = 3$) is not shown and contains only one lowpass band.

The symbol
↑2 denotes upsampling by two and is defined as

f(x,y)↑2 = { f(x/2, y/2)   x even & y even
           { 0             otherwise    (5)

The wavelet pyramid model is schematically illustrated in figure 1. Traditional wavelet bases typically utilize three bands (M = 3), in which case the representation is critically sampled (same number of coefficients as image pixels). Here, we shall also examine the cases of M = 4 and 6, in which the representation is overcomplete (more coefficients than image pixels). Because the image model is linear, it may be expressed compactly in vector/matrix notation as

I = Ga + ν    (6)

where the vector a is the entire list of coefficient values at all positions, bands, and levels of the pyramid, and the columns of G are the basis functions corresponding to each coefficient, which are parameterized by ψ and φ. The probability of generating an image I given a specific state of the coefficients a, and assuming Gaussian i.i.d. noise ν, is then

P(I|a,θ) = (1/Z_{λ_N}) e^{-(λ_N/2) |I - Ga|²}    (7)

where θ denotes the parameters of the model and includes the wavelet pyramid functions ψ_i and φ, as well as the noise variance, σ_N² = 1/λ_N. The prior probability distribution over the coefficients is assumed to be factorial and sparse:

P(a) = Π_i P(a_i)    (8)

P(a_i) = (1/Z_S) e^{-S(a_i)}    (9)

where S is a non-convex function that shapes P(a_i) to have the requisite "sparse" form, i.e., peaked at zero with heavy tails, or positive kurtosis. We choose here S(x) = β log(1 + (x/σ)²), which corresponds to a Cauchy-like prior over the coefficients (an exact Cauchy distribution would be obtained for β = 1).¹

¹ A more optimal choice for the prior would be to use a mixture-of-Gaussians distribution, which better captures the sharp peak at zero characteristic of a sparse representation. But properly maximizing the posterior with such a prior presents formidable challenges [6].
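To make the generative direction of the model concrete, here is a numpy-only sketch of equations (2)-(5): zero-insertion upsampling followed by convolution at each level of the pyramid. The filter sizes, level count, and the 'same'-size cropping convention are our own simplifications, not the paper's implementation (which uses fast pyramid routines).

```python
import numpy as np

def upsample2(f):
    # Eq. (5): f(x,y) moves to (2x, 2y); all other samples are zero.
    out = np.zeros((2 * f.shape[0], 2 * f.shape[1]))
    out[::2, ::2] = f
    return out

def conv_same(f, h):
    # Plain 2-D convolution, cropped back to f's size (illustrative helper).
    H, W = f.shape
    m, n = h.shape
    full = np.zeros((H + m - 1, W + n - 1))
    for i in range(m):
        for j in range(n):
            full[i:i + H, j:j + W] += h[i, j] * f
    r0, c0 = (m - 1) // 2, (n - 1) // 2
    return full[r0:r0 + H, c0:c0 + W]

def synthesize(band_coeffs, lowpass, psis, phi):
    # Eqs. (2)-(4): band_coeffs[l] holds the M bands a_i^l at level l
    # (finest level first); lowpass is a_{L-1}. Recurse coarse-to-fine.
    g = lowpass
    for bands in reversed(band_coeffs):
        g = conv_same(upsample2(g), phi)
        for a_i, psi_i in zip(bands, psis):
            g = g + conv_same(upsample2(a_i), psi_i)
    return g                               # g(x,y,0); adding noise gives I
```

For example, with a 2x2 lowpass band and two finer levels of 2x2 and 4x4 band coefficients, the synthesized image comes out 8x8: each level doubles the resolution.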
3 Inferring the coefficients

The coefficients for a particular image are determined by finding the maximum of the posterior distribution (MAP estimate)

â = argmax_a P(a|I,θ) = argmax_a P(I|a,θ) P(a|θ)    (10)

  = argmin_a [ (λ_N/2) |I - Ga|² + Σ_i S(a_i) ]    (11)

A local minimum may be found via gradient descent, yielding the differential equation

ȧ ∝ λ_N G^T e - S'(a)    (12)

e = I - Ga    (13)

The computations involving G^T e and Ga in equations 12 and 13 may be performed quickly and efficiently using fast algorithms for building pyramids and reconstructing from pyramids [7].

4 Learning

Our goal in adapting the wavelet model to natural images is to find the functions ψ_i and φ that minimize the description length L of images under the model

L = -⟨log P(I|θ)⟩    (14)

P(I|θ) = ∫ P(I|a,θ) P(a|θ) da    (15)

A learning rule for the basis functions may be derived by gradient descent on L:

∂L/∂θ_i = -λ_N ⟨ ⟨ e^T (∂G/∂θ_i) a ⟩_{P(a|I,θ)} ⟩    (16)

Instead of sampling from the full posterior distribution, however, we utilize a simpler approximation in which a single sample is taken at the posterior maximum, and so we have

Δθ_i ∝ ⟨ ê^T (∂G/∂θ_i) â ⟩    (17)

where ê = I - Gâ. The price we pay for this approximation, though, is that the basis functions will grow without bound, since the greater their norm, |G_k|, the smaller each a_k will become, thus decreasing the sparseness penalty in (11). This trivial solution is avoided by adaptively rescaling the basis functions after each learning step so that a target variance on the coefficients is met, as described in an earlier paper [1]. The update rules for ψ_i and φ are then derived from (17), and may be expressed in terms of the following recursive formulas:

Δψ_i(m,n) = F_ψ(ê(x,y), m, n, 0)    (18)

F_ψ(f, m, n, l) = Σ_{x,y} f(2x+m, 2y+n) â_i^l(x,y) + F_ψ([f * φ]↓2, m, n, l+1)

Δφ(m,n) = F_φ(ê(x,y), m, n, 0)    (19)

F_φ(f, m, n, l) = Σ_{x,y} f(2x+m, 2y+n) ĝ(x,y,l+1) + F_φ([f * φ]↓2, m, n, l+1)

where * denotes cross-correlation and
↓2 denotes downsampling by two. These computations may also be performed efficiently using fast algorithms for building and reconstructing from pyramids [7].

5 Results

The image model was trained on a set of 10 pre-whitened 512 x 512 natural images that were used in previous studies [1]. The basis function parameters ψ_i and φ were represented as 5 x 5 pixel masks, and were initialized to random numbers. For each update, an 80 x 80 subimage was randomly extracted from one of the images, and the coefficients were computed iteratively via (12,13) until the decrease in the energy function was less than 0.1%. The resulting residual, ê, was then used for updating the functions ψ_i and φ according to (18) and (19). The noise parameter λ_N was set to 400, corresponding to a noise variance that is 2.5% of the image variance (σ_I² = 0.1). At this level of noise, the image reconstructions are visually indistinguishable from the original. The parameters of the prior used were β = 2.5, σ = 0.3. A stable solution began to emerge after about one hour of training for M = 3, and after several hours for M = 6 (Pentium II, 450 MHz). Shown in figure 2 are the basis functions learned for the cases M = 3, 4 and 6, along with a standard bi-orthogonal 9/7 wavelet (FBI fingerprint standard [8]) for comparison. The difference between the learned wavelets and the standard wavelet is striking, in that the learned wavelets tile the orientation domain more evenly. They also exhibit self-similarity in orientation, i.e., they appear to be rotated versions of one another. Increasing the number of bands M from three to four produces narrower orientation tuning, but increasing overcompleteness beyond that point does not, as shown in the tiling diagram of figure 3. All the learned basis function spectra lie well within the Nyquist bounding box in the 2D Fourier plane, matching the power spectrum of the images in the training set.
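The inner inference loop just described (iterating eqs. 12-13 until the energy stops decreasing) can be sketched with an explicit basis matrix G. This is a toy stand-in for the paper's fast pyramid-based implementation, and the step size and iteration count are arbitrary choices of ours.

```python
import numpy as np

def infer_coefficients(I, G, lam_N=400.0, beta=2.5, sigma=0.3,
                       step=1e-4, n_iter=5000):
    """Gradient descent on eq. (11): a <- a + step*(lam_N*G^T e - S'(a)).
    S'(a) is the derivative of the Cauchy-like S(a) = beta*log(1+(a/sigma)^2).
    The step size must be small relative to lam_N*||G^T G|| for stability."""
    a = np.zeros(G.shape[1])
    for _ in range(n_iter):
        e = I - G @ a                              # eq. (13): residual image
        dS = 2.0 * beta * a / (sigma**2 + a**2)    # S'(a): sparseness pressure
        a += step * (lam_N * (G.T @ e) - dS)       # eq. (12)
    return a
```

With the large λ_N used in the paper, the data term dominates at the fixed point, so the residual ends up small while the prior term keeps most coefficients near zero.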
Coding efficiency was evaluated by compressing the sparsified coefficients â using the embedded wavelet zerotree encoder [9] and measuring the signal-to-noise ratio for a fixed bit rate (SNR = 10 log_10(σ_I²/mse)). The results, shown in table 1, demonstrate that the overcomplete bases (M = 4) achieve higher SNR than either of two standard wavelet bases for the same bit rate. Note however that at these levels of SNR the reconstructions are visually identical to the original. At higher compression ratios the learned bases lose their advantage, most likely due to the fact that they are non-orthogonal and hence produce more errors in the reconstruction when the coefficients are quantized.

Table 1: Coding efficiency.

basis set        | SNR
M = 3 (learned)  | 11.2
M = 4 (learned)  | 11.9
Daubechies 6     | 11.2
FBI 9/7          | 11.4

Figure 2: Basis functions and corresponding power spectra for M = 3, 4 and 6, along with a standard 9/7 biorthogonal wavelet. Each column shows a different band, while each row shows a different level. The lone basis function in the last row is the scaling function (twice convolved with itself). The power spectra are plotted in the 2D-Fourier plane (vertical vs. horizontal spatial-frequency) with the maximum spatial-frequency at the Nyquist rate.

Figure 3: Frequency domain tiling properties. Shown are iso-power contours at 50% of the maximum for each band and level (M = 3 standard, and M = 3, 4, 6 learned).

6 Conclusion

We have shown in this work how a wavelet basis may be adapted so as to represent the structures in natural images in terms of sparse, independent components. Importantly, the algorithm has the capacity to learn overcomplete basis sets, which are capable of tiling the joint space of position, orientation, and spatial-frequency in a more continuous fashion than traditional, critically sampled basis sets [10].
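The figure of merit used in Table 1 is straightforward to compute; the helper below is our own sketch of it.

```python
import numpy as np

def snr_db(image, reconstruction):
    # SNR = 10 log10(sigma_I^2 / mse), as used for Table 1.
    mse = np.mean((np.asarray(image) - np.asarray(reconstruction)) ** 2)
    return 10.0 * np.log10(np.var(image) / mse)
```

A gain of 0.7 dB, as between the M = 4 learned basis and the FBI 9/7 wavelet, corresponds to roughly a 15% reduction in mean squared error at the same bit rate.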
The overcomplete bases exhibit superior coding efficiency, in the sense of achieving higher SNR for a fixed bit rate. Although the improvements in coding efficiency are modest, we believe the method described here has the potential to yield even greater improvements when adapted to more specific image ensembles such as textures.

Acknowledgments

This work benefited from extensive use of Eero Simoncelli's Matlab pyramid toolbox. Supported by NIMH R29-MH057921.

References

[1] Olshausen BA, Field DJ (1997) Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37: 3311-3325.
[2] Lewicki MS, Olshausen BA (1999) A probabilistic framework for the adaptation and comparison of image codes. J. Opt. Soc. of Am. A, 16(7): 1587-1601.
[3] Bell AJ, Sejnowski TJ (1997) The independent components of natural images are edge filters. Vision Research, 37: 3327-3338.
[4] van Hateren JH, van der Schaaf A (1997) Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. Royal Soc. Lond. B, 265: 359-366.
[5] Mallat S (1999) A wavelet tour of signal processing. Academic Press.
[6] Olshausen BA, Millman KJ (2000) Learning sparse codes with a mixture-of-Gaussians prior. In: Advances in Neural Information Processing Systems, 12, S.A. Solla, T.K. Leen, K.R. Muller, eds. MIT Press, pp. 841-847.
[7] Simoncelli EP, Matlab pyramid toolbox. ftp://ftp.cis.upenn.edu/pub/eero/matlabPyrTools.tar.gz
[8] The Bath Wavelet Warehouse. http://dmsun4.bath.ac.uk/wavelets/warehouse.html
[9] Shapiro JM (1993) Embedded image coding using zerotrees of wavelet coefficients. IEEE Transactions on Signal Processing, 41(12): 3445-3462.
[10] Simoncelli EP, Freeman WT, Adelson EH, Heeger DJ (1992) Shiftable multi-scale transforms. IEEE Transactions on Information Theory, 38(2): 587-607.
|
2000
|
122
|
1,779
|
Weak Learners and Improved Rates of Convergence in Boosting

Shie Mannor and Ron Meir
Department of Electrical Engineering
Technion, Haifa 32000, Israel
{shie,rmeir}@{techunix,ee}.technion.ac.il

Abstract

The problem of constructing weak classifiers for boosting algorithms is studied. We present an algorithm that produces a linear classifier that is guaranteed to achieve an error better than random guessing for any distribution on the data. While this weak learner is not useful for learning in general, we show that under reasonable conditions on the distribution it yields an effective weak learner for one-dimensional problems. Preliminary simulations suggest that similar behavior can be expected in higher dimensions, a result which is corroborated by some recent theoretical bounds. Additionally, we provide improved convergence rate bounds for the generalization error in situations where the empirical error can be made small, which is exactly the situation that occurs if weak learners with guaranteed performance that is better than random guessing can be established.

1 Introduction

The recently introduced boosting approach to classification (e.g., [10]) has been shown to be a highly effective procedure for constructing complex classifiers. Boosting type algorithms have recently been shown [9] to be strongly related to other incremental greedy algorithms (e.g., [6]). Although a great deal of numerical evidence suggests that boosting works very well across a wide spectrum of tasks, it is not a panacea for solving classification problems. In fact, many versions of boosting algorithms currently exist (e.g., [4],[9]), each possessing advantages and disadvantages in terms of classification accuracy, interpretability and ease of implementation. The field of boosting provides two major theoretical results. First, it is shown that in certain situations the training error of the classifier formed converges to zero (see (2)).
Moreover, under certain conditions, a positive margin can be guaranteed. Second, bounds are provided for the generalization error of the classifier (see (1)). The main contribution of this paper is twofold. First, we present a simple and efficient algorithm which is shown, for every distribution on the data, to yield a linear classifier with guaranteed error which is smaller than 1/2 - γ, where γ is strictly positive. This establishes that a weak linear classifier exists. From the theory of boosting [10] it is known that such a condition suffices to guarantee that the training error converges to zero as the number of boosting iterations increases. In fact, the empirical error with a finite margin is shown to converge to zero if γ is sufficiently large. However, the existence of a weak learner with error 1/2 - γ is not always useful in terms of generalization error, since it applies even to the extreme case where the binary labels are drawn independently at random with equal probability at each point, in which case we cannot expect any generalization. It is then clear that in order to construct useful weak learners, some assumptions need to be made about the data. In this work we show that under certain natural conditions, a useful weak learner can be constructed for one-dimensional problems, in which case the linear hyper-plane degenerates to a point. We speculate that similar results hold for higher dimensional problems, and present some supporting numerical evidence for this. In fact, some very recent results [7] show that this expectation is indeed borne out. The second contribution of our work consists of establishing faster convergence rates for the generalized error bounds introduced recently by Mason et al. [8]. These improved bounds show that faster convergence can be achieved if we allow for convergence to a slightly larger value than in previous bounds.
Given the guaranteed convergence of the empirical loss to zero (in the limited situations in which we have proved such a bound), such a result may yield a better trade-off between the terms appearing in the bound, offering a better model selection criterion (see Chapter 15 in [1]).

2 Construction of a Linear Weak Learner

We recall the basic generalization bound for convex combinations of classifiers. Let H be a class of binary classifiers of VC-dimension d_v, and denote by co(H) the convex hull of H. Given a sample S = {(x_1, y_1), ..., (x_m, y_m)} ∈ (X × {-1,+1})^m of m examples drawn independently at random from a probability distribution D over X × {-1,+1}, Schapire et al. [10] show that with probability at least 1 - δ, for every f ∈ co(H) and every θ > 0,

P_D[Y f(X) ≤ 0] ≤ P_S[Y f(X) ≤ θ] + O( (1/√m) ( d_v log²(m/d_v) / θ² + log(1/δ) )^{1/2} ),    (1)

where the margin-error P_S[Y f(X) ≤ θ] denotes the fraction of training points for which y_i f(x_i) ≤ θ. Clearly, if the first term can be made small for a large value of the margin θ, a tight bound can be established. Schapire et al. [10] also show that if each weak classifier can achieve an error smaller than 1/2 - γ, then

P_S[Y f(X) ≤ θ] ≤ ( (1 - 2γ)^{1-θ} (1 + 2γ)^{1+θ} )^{T/2},    (2)

where T is the number of boosting iterations. Note that if γ > θ, the bound decreases to zero exponentially fast. It is thus clear that a large value of γ is needed in order to guarantee a small value for the margin-error. However, if γ (and thus θ) behaves like m^{-β} for some β > 0, the rate of convergence in the second term in (1) will deteriorate, leading to worse bounds than those available by using standard VC results [11]. What is needed is a characterization of conditions under which the achievable θ does not decrease rapidly with m. In this section we present such conditions for one-dimensional problems, and mention recent work [7] that proves a similar result in higher dimensions.
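Bound (2) is easy to evaluate numerically; the sketch below simply plugs in values and illustrates the γ > θ versus γ < θ regimes.

```python
def margin_error_bound(gamma, theta, T):
    # Eq. (2): bound on the fraction of training points with margin <= theta
    # after T rounds of boosting with edge gamma over random guessing.
    base = (1.0 - 2.0 * gamma) ** (1.0 - theta) * (1.0 + 2.0 * gamma) ** (1.0 + theta)
    return base ** (T / 2.0)
```

With gamma = 0.1 and theta = 0.05 the base is below one and the bound decays exponentially in T, whereas with gamma = 0.05 and theta = 0.1 the base exceeds one and the bound is vacuous, matching the remark that γ > θ is what drives the margin-error to zero.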
We begin by demonstrating that for any distribution on m points, a linear classifier can achieve an error smaller than 1/2 - γ, where γ = O(1/m). In view of our comments above, such a fast convergence of γ to zero may be useless for generalization bounds. We then use our construction to show that, under certain regularity conditions, a value of γ, and thus of θ, which is independent of m can be established for one-dimensional problems. Let {x_1, ..., x_m} be points in R^d, and denote by {y_1, ..., y_m} their binary labels, i.e., y_i ∈ {-1,+1}. A linear decision rule takes the form ŷ(x) = sgn(a · x + b), where · is the standard inner product in R^d. Let p ∈ Δ_m be a probability measure on the m points. The weighted misclassification error for a classifier ŷ is P_e(a,b) = Σ_{i=1}^m p_i I(y_i ≠ ŷ_i). For technical reasons, we prefer to use the expression 1 - 2P_e = Σ_{i=1}^m p_i y_i ŷ_i. Obviously if 1 - 2P_e ≥ ε we have that P_e ≤ 1/2 - ε/2.

Lemma 1 For any sample of m distinct points, S = {(x_i, y_i)}_{i=1}^m ∈ (R^d × {-1,+1})^m, and a probability measure p ∈ Δ_m on S, there is some a ∈ R^d and b ∈ R such that the weighted misclassification error of the linear classifier ŷ = sgn(a · x + b) is bounded away from 1/2; in particular, Σ_{i=1}^m p_i I(y_i ≠ ŷ_i) ≤ 1/2 - 1/(4m).

Proof The basic idea of the proof is to project a finite number of points onto a line h so that no two points coincide. Since there is at least one point x whose weight is not smaller than 1/m, we consider the four possible linear classifiers defined by h with boundaries near x (at both sides of it and with opposite sign), and show that one of these yields the desired result. We proceed to the detailed proof. Fix a probability vector p = (p_1, ..., p_m) ∈ Δ_m. We may assume w.l.o.g. that all the x_i are different, or we can merge two elements and get m - 1 points. First, observe that if |Σ_{i=1}^m p_i y_i| ≥ 1/(2m), then the problem is trivially solved. To see this, denote by S± the sub-samples of S labelled by ±1, respectively.
Assume, for example, that Σ_{i∈S+} p_i ≥ Σ_{i∈S-} p_i + 1/(2m). Then the choice a = 0, b = 1, namely ŷ_i = 1 for all i, implies that Σ_i p_i y_i ŷ_i ≥ 1/(2m). Similarly, the choice a = 0, b = -1 solves the problem if Σ_{i∈S-} p_i ≥ Σ_{i∈S+} p_i + 1/(2m). Thus, we can assume, without loss of generality, that |Σ_{i=1}^m p_i y_i| < 1/(2m). Next, note that there exists a direction u such that i ≠ j implies that u · x_i ≠ u · x_j. This can be seen by the following argument. Construct all one-dimensional lines containing two data points or more; clearly the number of such lines is at most m(m-1)/2. It is then obvious that any line which is not perpendicular to any of these lines obeys the required condition. Let x_i be a data-point for which p_i ≥ 1/m, and set ε to be a positive number such that 0 < ε < min{|u·x_i - u·x_j| : i, j ∈ 1, ..., m}. Such an ε always exists since the points are assumed to be distinct. Note the following trivial algebraic fact:

if A ≥ δ_2 and |A + B| ≤ δ_1, then A - B ≥ 2δ_2 - δ_1.    (3)

For each j = 1, 2, ..., m let the classification be given by ŷ_j = sgn(u · x_j + b), where the bias b is given by b = -u·x_i + ε y_i. Then clearly ŷ_i = y_i and ŷ_j = sgn(u·x_j - u·x_i), and therefore Σ_j p_j y_j ŷ_j = p_i + Σ_{j≠i} p_j y_j sgn(u·x_j - u·x_i). Let A = p_i and B = Σ_{j≠i} p_j y_j sgn(u·x_j - u·x_i). If |A + B| ≥ 1/(2m) we are done. Otherwise, if |A + B| < 1/(2m), consider the classifier ŷ'_j = sgn(-u·x_j + b'), with b' = u·x_i + ε y_i (note that ŷ'_i = y_i and ŷ'_j = -ŷ_j, j ≠ i). Using (3) with δ_1 = 1/(2m) and δ_2 = 1/m the claim follows. ∎

We comment that the upper bound in Lemma 1 may be improved to 1/2 - 1/(4(m-1)), m ≥ 2, using a more refined argument.

Remark 1 Lemma 1 implies that an error of 1/2 - γ, where γ = O(1/m), can be guaranteed for any set of arbitrarily weighted points. It is well known that the problem of finding a linear classifier with minimal classification error is NP-hard (in d) [5]. Moreover, even the problem of approximating the optimal solution is NP-hard [2].
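The proof is constructive: after projecting onto the direction u, it suffices to check a handful of candidate classifiers, namely the two constant classifiers and the thresholds placed just to either side of the heaviest point. The sketch below is our own rendering of that construction for one-dimensional (already projected) data, not code from the paper.

```python
import numpy as np

def weak_stump(x, y, p):
    """Given distinct 1-D points x, labels y in {-1,+1}, and weights p
    (summing to 1), return predictions yhat of a linear classifier with
    sum_i p_i y_i yhat_i >= 1/(2m), i.e. weighted error <= 1/2 - 1/(4m)."""
    m = len(x)
    i = int(np.argmax(p))                       # a point with p[i] >= 1/m
    eps = 0.5 * np.min(np.diff(np.sort(x)))     # smaller than any gap
    h1 = np.sign(x - x[i] + eps * y[i])         # sgn(u.x + b),  b = -x_i + eps*y_i
    h2 = np.sign(-(x - x[i]) + eps * y[i])      # flipped line,  b' = x_i + eps*y_i
    ones = np.ones(m)
    candidates = [ones, -ones, h1, -h1, h2, -h2]
    corrs = [float(np.dot(p, y * h)) for h in candidates]
    best = int(np.argmax(corrs))
    return candidates[best], corrs[best]
```

By the case analysis in the proof, one of these candidates always achieves weighted correlation at least 1/(2m) (in fact max(A+B, A-B) = A + |B| >= p_i >= 1/m already), so the returned stump is a valid weak learner for any weighting of the data.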
Since the algorithm described in Lemma 1 is clearly polynomial (in m and d), there seems to be a transition as a function of γ between the class NP and P (assuming, as usual, that they are different). This issue warrants further investigation. While the result given in Lemma 1 is interesting, its generality precludes its usefulness for bounding generalization error. This can be seen by observing that the lemma guarantees the given margin even in the case where the labels y_i are drawn uniformly at random from {±1}, in which case no generalization can be expected. In order to obtain a more useful result, we need to restrict the complexity of the data distribution. We do this by imposing constraints on the types of decision regions characterizing the data. In order to generate complex, yet tractable, decision regions we consider a multi-linear mapping from R^d to {-1,1}^k, generated by the k hyperplanes P_i = {x : w_i·x + w_{i0} = 0, x ∈ R^d}, i = 1, ..., k, as in the first hidden layer of a neural network. Such a mapping generates a partition of the input space R^d into M connected components, {R^d \ ∪_{i=1}^k P_i}, each characterized by a unique binary vector of length k. Assume that the weight vectors (w_i, w_{i0}) ∈ R^{d+1} are in general position. The number of connected components is given by (e.g., Lemma 3.3 in [1]) C(k, d+1) = 2 Σ_{i=0}^{d} (k-1 choose i). This number can be bounded from below by 2 (k-1 choose d), which in turn is bounded below by 2((k-1)/d)^d. An upper bound is given by 2(e(k-1)/d)^d, for k-1 ≥ d. In other words, C(k, d+1) = Θ((k/d)^d). In order to generate a binary classification problem, we observe that there exists a binary function from {-1,1}^k to {-1,1} characterized by these M decision regions. This can be seen as follows. Choose an arbitrary connected component, and label it by +1 (say). Proceed by labelling all its neighbors by -1, where neighbors share a common boundary (a (d-1)-dimensional hyperplane in d dimensions).
Proceeding by induction, we generate a binary classification problem composed of exactly M decision regions. Thus, we have constructed a binary classification problem characterized by at least 2 (k-1 choose d) ≥ 2((k-1)/d)^d decision regions. Clearly as k becomes arbitrarily large, very elaborate regions are formed. We now apply these ideas, together with Lemma 1, to a one-dimensional problem. Note that in this case the partition is composed of intervals.

Theorem 1 Let F be a class of functions from R to {±1} which partitions the real line into at most k intervals, k ≥ 2. Let μ be an arbitrary probability measure on R. Then for any f ∈ F there exist a, τ* ∈ R for which

μ{x : f(x) sgn(ax - τ*) = 1} ≥ 1/2 + 1/(4k).    (4)

Proof Let a function f be given, and denote its connected components by I_1, ..., I_k, that is, I_1 = [-∞, l_1), I_2 = [l_1, l_2), I_3 = [l_2, l_3), and so on until I_k = [l_{k-1}, ∞], with -∞ = l_0 < l_1 < l_2 < ... < l_{k-1}. Associate with every interval a point in R, x_1 = l_1 - 1, x_2 = (l_1 + l_2)/2, ..., x_{k-1} = (l_{k-2} + l_{k-1})/2, x_k = l_{k-1} + 1, a weight μ_i = μ(I_i), i = 1, ..., k, and a label f(x_i) ∈ {±1}. We now apply Lemma 1 to conclude that there exist a ∈ {±1} and τ ∈ R such that Σ_{i=1}^k μ_i f(x_i) sgn(ax_i - τ) ≥ 1/(2k). The value of τ lies between l_i and l_{i+1} for some i ∈ {0, 1, ..., k-1} (recall that l_0 = -∞). We identify τ* of (4) as l_{i+1}. This is the case since by choosing this τ*, f(x) in any segment I_i is equal to f(x_i), so we have that μ{x : f(x) sgn(ax - τ*) = 1} = 1/2 + (1/2) Σ_{i=1}^k μ_i f(x_i) sgn(ax_i - τ*) ≥ 1/2 + 1/(4k). ∎

Note that the result in Theorem 1 is in fact more general than we need, as it applies to arbitrary distributions, rather than distributions over a finite set of points. An open problem at this point is whether a similar result applies to d-dimensional problems.
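To illustrate Theorem 1 on a concrete (made-up) example, the snippet below builds the representative points from the proof for a 4-interval target and exhaustively searches the orientations a ∈ {±1} and thresholds τ at the interval boundaries; all numbers are illustrative assumptions, not the paper's.

```python
import numpy as np

# Hypothetical 1-D target f with k = 4 intervals.
bounds = [-1.0, 0.5, 2.0]                  # l_1 < l_2 < l_3
labels = np.array([1.0, -1.0, 1.0, -1.0])  # value of f on each interval
mass = np.array([0.3, 0.2, 0.4, 0.1])      # mu(I_i)
k = len(labels)

# Representative points as in the proof: one per interval.
x = np.array([bounds[0] - 1.0]
             + [(bounds[j] + bounds[j + 1]) / 2.0 for j in range(k - 2)]
             + [bounds[-1] + 1.0])

# Search a in {+1,-1} and tau at the boundaries (plus one below everything,
# which recovers the two constant classifiers).
best = -1.0
for a in (1.0, -1.0):
    for tau in [bounds[0] - 2.0] + bounds:
        yhat = np.sign(a * (x - tau))
        acc = 0.5 + 0.5 * float(np.dot(mass, labels * yhat))
        best = max(best, acc)

# Theorem 1 guarantees mu{f(x) sgn(ax - tau*) = 1} >= 1/2 + 1/(4k).
assert best >= 0.5 + 1.0 / (4 * k)
```

Here the best stump (a = -1, tau = 2) captures 80% of the probability mass, comfortably above the guaranteed 1/2 + 1/16 = 0.5625.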
We conjecture that in d dimensions γ behaves like k^{-l(d)} for some function l, where k is a measure of the number of homogeneous convex regions defined by the data (a homogeneous region is one in which all points possess identical labels). While we do not have a general proof at this stage, we have recently shown [7] that the conjecture holds under certain natural conditions on the data. This result implies that, at least under appropriate conditions, boosting-like algorithms are expected to have excellent generalization performance. To provide some motivation, we present results of some numerical simulations for two-dimensional problems. For this simulation we used random lines to generate a partition of the unit square in R². We then drew 1000 points at random from the unit square and assigned them labels according to the partition. Finally, in order to have a non-trivial problem we made sure that the cumulative weights of each class are equal. We then calculated the optimal linear classifier by exhaustive search. In Figure 1(b) we show a sample decision region with 93 regions. Figure 1(a) shows the dependence of γ on the number of regions k. As it turns out, there is a significant logarithmic dependence between γ and k, which leads us to conjecture that γ ~ C k^{-l} + ε for some C, l and ε. In the presented case, l = 3 turns out to fit our model well. It is important to note, however, that the procedure described above only supports our claim in an average-case, rather than worst-case, setting as is needed.

Figure 1: (a) γ as a function of the number of regions. (b) A typical complex partition of the unit square used in the simulations.
3 Improved Convergence Rates

In Section 2 we proved that under certain conditions a weak learner exists with a sufficiently large margin, and thus the first term in (1) indeed converges to zero. We now analyze the second term in (1) and show that it may be made to converge considerably faster, if the first term is made somewhat larger. First, we briefly recall the framework introduced recently by Mason et al. [8]. These authors begin by introducing the notion of a B-admissible family of functions. For completeness we repeat their definition.

Definition 1 (Definition 2 in [8]) A family {C_N : N ∈ N} of margin cost functions is B-admissible for B ≥ 0 if for all N ∈ N there is an interval Y ⊂ R of length no more than B and a function Ψ_N : [-1,1] → Y that satisfies sgn(-α) ≤ E_{Z~Q_{N,α}}[Ψ_N(Z)] ≤ C_N(α) for all α ∈ [-1,1], where E_{Z~Q_{N,α}}(·) denotes the expectation when Z is chosen randomly as Z = (1/N) Σ_{i=1}^N Z_i and P(Z_i = 1) = (1 + α)/2.

Denote the convex hull of a class H by co(H). The main theoretical result in [8] is the following lemma.

Lemma 2 ([8], Theorem 3) For any B-admissible family {C_N : N ∈ N} of margin cost functions, for any binary hypothesis class H of VC dimension d_v and any distribution D on X × {-1,+1}, with probability at least 1 - δ over a random sample S of m examples drawn at random according to D, every N and every f ∈ co(H) satisfies P_D[y f(x) ≤ 0] ≤ E_S[C_N(y f(x))] + ε_N, where

ε_N = O( [ (B² N d_v log m + log(N/δ)) / m ]^{1/2} ).

Remark 2 The most appealing feature of Lemma 2, as of other results for convex combinations, is the fact that the bound does not depend on the number of hypotheses from H defining f ∈ co(H), which may in fact be infinite. Using standard VC results (e.g., [11]) would lead to useless bounds, since the VC dimension of these classes is often huge (possibly infinite). Lemma 2 considers binary hypotheses.
Since recent work has demonstrated the effectiveness of using real-valued hypotheses, we consider the case where the weak classifiers may be confidence-rated, i.e., taking values in [-1,1] rather than {±1}. We first extend Lemma 2 to confidence-rated classifiers. Note that the variables Z_i in Definition 1 are no longer binary in this case.

Lemma 3 Let the conditions of Lemma 2 hold, except that H is a class of real-valued functions from X to [-1,+1] of pseudo-dimension d_p. Assume further that Ψ_N in Definition 1 obeys a Lipschitz condition of the form |Ψ_N(x) - Ψ_N(x')| ≤ L|x - x'| for every x, x' ∈ X. Then with probability at least 1 - δ, P_D[y f(x) ≤ 0] ≤ E_S[C_N(y f(x))] + ε_N, where

ε_N = O( [ (L B² N d_p log m + log(N/δ)) / m ]^{1/2} ).

Proof The proof is very similar to the proof of Theorem 2, and will be omitted for the sake of brevity. ∎

It is well known that in the standard setting, where C_N is replaced by the empirical classification error, improved rates, replacing O(√(log m / m)) by O(log m / m), are possible in two situations: (i) if the minimal value of C_N is zero (the restricted model of [1]), and (ii) if the empirical error is replaced by (1 + α)C_N for some α > 0. The latter case is especially important in a model selection setup, where nested classes of hypothesis functions are considered, since in this case one expects that, with high probability, C_N becomes smaller as the classes become more complex. In this situation, case (ii) provides better overall bounds, often leading to the optimal minimax rates for nonparametric problems (see a discussion of these issues in Sec. 15.4 of [1]). We now establish a faster convergence rate to a slightly larger value than E_S[C_N(Y f(X))]. In situations where the latter quantity approaches zero, the overall convergence rate may be improved, as discussed above. We consider cost functions C_N(α) which obey the condition

C_N(α) ≤ (1 + β_N) 1(α < 0) + η_N    (β_N > 0, η_N > 0)
(5)

for some positive β_N and η_N (see [8] for details on legitimate cost functions).

Theorem 2 Let D be a distribution over X × {-1,+1}, and let S be a sample of m points chosen independently at random according to D. Let d_p be the pseudo-dimension of the class H, and assume that C_N(α) obeys condition (5). Then for sufficiently large m, with probability at least 1 - δ, every function f ∈ co(H) satisfies the following bound for every 0 < α < 1/β_N:

P_D[Y f(X) ≤ 0] ≤ (1 + α)/(1 - α β_N) E_S[C_N(Y f(X))] + O( (d_p N log m + log(1/δ)) / (m α/(2 + 2α)) ).

Proof The proof combines two ideas. First, we use the method of [8] to transform the problem from co(H) to a discrete approximation of it. Then, we use recent results for relative uniform deviations of averages from their means [3]. Due to lack of space, we defer the complete proof to the full version of the paper.

4 Discussion

In this paper we have presented two main results pertaining to the theory of boosting. First, we have shown that, under reasonable conditions, an effective weak classifier exists for one-dimensional problems. We conjectured, and supported our claim by numerical simulations, that such a result holds for multi-dimensional problems as well. The non-trivial extension of the proof to multiple dimensions can be found in [7]. Second, using recent advances in the theory of uniform convergence and boosting, we have presented bounds on the generalization error which may, under certain conditions, be significantly better than standard bounds, being particularly useful in the context of model selection.

Acknowledgment We thank Shai Ben-David and Yoav Freund for helpful discussions.

References

[1] M. Anthony and P.L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.
[2] P. Bartlett and S. Ben-David. On the hardness of learning with neural networks. In Proceedings of the Fourth European Conference on Computational Learning Theory, 1999.
[3] P. Bartlett and G. Lugosi.
An inequality for uniform deviations of sample averages from their means. Statistics and Probability Letters, 44:55-62, 1999.
[4] J. Friedman, T. Hastie and R. Tibshirani. Additive logistic regression: a statistical view of boosting. The Annals of Statistics, to appear, 2000.
[5] D.S. Johnson and F.P. Preparata. The densest hemisphere problem. Theoretical Computer Science, 6:93-107, 1978.
[6] S. Mallat and Z. Zhang. Matching pursuit with time-frequency dictionaries. IEEE Trans. Signal Processing, 41(12):3397-3415, December 1993.
[7] S. Mannor and R. Meir. On the existence of weak learners and applications to boosting. Submitted to Machine Learning.
[8] L. Mason, P. Bartlett and J. Baxter. Improved generalization through explicit optimization of margins. Machine Learning, 2000. To appear.
[9] L. Mason, P. Bartlett, J. Baxter and M. Frean. Functional gradient techniques for combining hypotheses. In A. Smola, P. Bartlett, B. Schölkopf and D. Schuurmans, editors, Advances in Large Margin Classifiers. MIT Press, 2000.
[10] R.E. Schapire, Y. Freund, P. Bartlett and W.S. Lee. Boosting the margin: a new explanation for the effectiveness of voting methods. The Annals of Statistics, 26(5):1651-1686, 1998.
[11] V.N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer Verlag, New York, 1982.
|
2000
|
123
|
1,780
|
Keeping flexible active contours on track using Metropolis updates

Trausti T. Kristjansson
University of Waterloo
ttkristj@uwaterloo.ca

Brendan J. Frey
University of Waterloo
frey@uwaterloo.ca

Abstract

Condensation, a form of likelihood-weighted particle filtering, has been successfully used to infer the shapes of highly constrained "active" contours in video sequences. However, when the contours are highly flexible (e.g. for tracking fingers of a hand), a computationally burdensome number of particles is needed to successfully approximate the contour distribution. We show how the Metropolis algorithm can be used to update a particle set representing a distribution over contours at each frame in a video sequence. We compare this method to condensation using a video sequence that requires highly flexible contours, and show that the new algorithm performs dramatically better than the condensation algorithm. We discuss the incorporation of this method into the "active contour" framework, where a shape-subspace is used to constrain shape variation.

1 Introduction

Tracking objects with flexible shapes in video sequences is currently an important topic in the vision community. Methods include curve fitting [9], layered models [1, 2, 3], Bayesian reconstruction of 3-D models from video [6], and active contour models [10, 14, 15]. Fitting curves to the outlines of objects has been attempted using various methods, including "Snakes" [8, 9], where an energy function is minimized so as to find the best fit. As with other optimization methods, this approach suffers from local maxima. This problem is amplified when using real data, where edge noise can prevent the fit of the contour to the desired object outline. In contrast, Blake et al. [10] introduced a probabilistic framework for curve fitting and tracking. Instead of proposing one single best fit for the contour, a probability distribution over contours is found.
The distribution is represented as a particle set where each particle represents one contour shape. Inference in these "active contour" models is accomplished using particle filtering. In the "active contour" method, a probabilistic dynamic system is used to model the distribution over the outline of the object (the contour) Y_t and the observations Z_t at time t. Tracking is performed by inference in this model. The outline of an object is tracked through successive frames in a video by using a particle distribution.

Figure 1: (a) Condensation with Gaussian dynamics (result for best σ = 2 shown) applied to a video sequence. The 200 contours corresponding to 200 particles fail to track the complex outline of the hand. The pictures show every 24th frame of a 211-frame sequence. (b) Metropolis updates with only 12 particles keep the contours on track. At each step, 4 iterations of Metropolis updates are applied with σ = 3.

Each particle x_n represents a single contour Y¹ that approximates the outline of the object. For any given frame, a set of particles represents the probability distribution over positions and shapes of an object. In order to find the likelihood of an observation Z_t, given a particle x_n, lines perpendicular to the contour are examined and edges are detected. A variety of distributions can be used to model the likelihood of the edge positions along each line. We assume that the position of the edge belonging to the object is drawn from a Gaussian with mean position at the intersection of the contour and the measurement line, Y(s_m), and that the positions of the other edges are drawn from a Poisson distribution.
The observation likelihood for a single measurement line z_m can be simplified to [10]

p(z_m | x_n) ∝ 1 + (1 / (√(2π) σ_ml Q)) Σ_j exp[ −|z_{m,j} − B(s_m) x_n|² / (2 σ_ml²) ]    (1)

where z_{m,j} denotes the coordinates of an edge on measurement line m, and B(s_m) x_n = Y_n(s_m) is the intersection of the contour and the measurement line (see later). Here Q = qλ, where q is the probability of not observing the edge and λ is the rate of the Poisson process, and σ_ml is the standard deviation in pixels.

¹Notation: we use Y to refer to a curve, parameterized by s, and Y(s) for a particular point on the curve. x refers to a particle consisting of subspace parameters, or in our case, control points. n indexes a particle in a particle set, i indexes a component of a particle (i.e. a single control point), m indexes measurement lines, and t is used as a frame index.

A multitude of measurement lines is used along the contour, and (assuming independence) the contour likelihood is

p(Z | x_n) = Π_{m ∈ M} p(z_m | x_n)    (2)

where M is the set of measurement lines. As mentioned, in the condensation algorithm a particle set is used to represent the distribution of contours. Starting from an initial distribution, a new distribution for a successive frame is produced by propagating each particle using the system dynamics p(x_t | x_{t−1}). The observation likelihood p(Z_t | x_t) is then calculated for each particle, and the particle set is resampled with replacement, using the likelihoods as weights. The resulting set of particles approximates the posterior distribution at time t and is then propagated to the next frame. Figure 1(a) shows the results of using condensation with 200 particles. As can be seen, the result is poor. Intuitively, the reason condensation fails is that it is highly unlikely to draw a particle that has raised control points over the four fingers while keeping the remainder fixed. Figure 1(b) shows the result of using Metropolis updates and 12 particles (an equivalent amount of computation).
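As an illustration only (not code from the paper), the likelihood of Eqs. (1)-(2) can be sketched numerically. The function names and the flat array of edge offsets along each measurement line are our own; the clutter constants are folded into a single parameter `Q = q*lam` as in the text.

```python
import numpy as np

def measurement_line_likelihood(edges, contour_point, sigma, Q):
    """Likelihood of the edges seen on one measurement line, Eq. (1).
    edges: 1-D offsets of detected edges along the line;
    contour_point: offset where the contour crosses the line, B(s_m) x_n;
    Q = q * lam combines the miss probability q and the Poisson clutter rate."""
    gauss = np.exp(-(edges - contour_point) ** 2 / (2.0 * sigma ** 2))
    return 1.0 + gauss.sum() / (np.sqrt(2.0 * np.pi) * sigma * Q)

def contour_log_likelihood(edge_lists, contour_points, sigma, Q):
    """Independent measurement lines multiply, Eq. (2); sum logs for stability."""
    return sum(np.log(measurement_line_likelihood(e, c, sigma, Q))
               for e, c in zip(edge_lists, contour_points))
```

An edge landing exactly on the predicted contour crossing raises the line likelihood above the clutter-only baseline of 1, which is what lets the resampling step favour well-placed particles.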
2 Keeping contours on track using Metropolis updates

To reduce the dimensionality of the inference, a subspace is often used. For example, a fixed shape is only allowed horizontal and vertical translation. Using a subspace reduces the size of the required particle set, allowing for successful tracking using standard condensation. If the object can deform, a subspace that captures the allowed deformations may be used [15]. This increases the flexibility of the contour, but at the cost of enlarged dimensionality. In order to learn such a subspace, a large number of training samples is used, supplied by hand-fitting contour shapes to a large number of frames. However, even moderately detailed contours (say, the outline of a hand) will have many control points that interact in complex ways, making subspace modeling difficult or impractical.

2.1 Metropolis sampling

Metropolis sampling is a popular Markov chain Monte Carlo method for problems of large dimensionality [16, 17]. A new particle is drawn from a proposal density Q(x'; x_t), where in our case x_t is a particle (i.e. a set of control points) at time t, and x' is a tentative new particle produced by perturbing a subset of the control points:

Q_i(x' | x_t) = (1 / √(2πσ²)) exp[ −(x' − x_t)² / (2σ²) ].    (3)

We then calculate

α = [ p(x' | x_{t−1}) p(Z_t | x') / ( p(x_t | x_{t−1}) p(Z_t | x_t) ) ] · [ Q(x_t; x') / Q(x'; x_t) ],    (4)

where p(x_t | x_{t−1}) p(Z_t | x_t) is proportional to the posterior probability of observing the contour in that position. If α ≥ 1 the proposed particle is accepted. If α < 1, it is accepted with probability α. Since Q is symmetric, the second factor Q(x_t; x') / Q(x'; x_t) = 1. Metropolis sampling can be used in the framework of particle propagation in two ways. It can be used to fit splines around contours of a training set that is used to construct a shape subspace, e.g. by PCA, or it can be used to refine the shapes of the subspace to the actual data during tracking.
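A minimal sketch of a single-site Metropolis sweep implementing Eqs. (3)-(4). We work in log space for numerical stability, and `log_posterior` is a hypothetical stand-in for log[ p(x_t | x_{t−1}) p(Z_t | x_t) ], not the paper's model:

```python
import numpy as np

def metropolis_update(x, log_posterior, sigma, rng):
    """One sweep of single-site Metropolis updates over the control points
    x (shape (n_points, 2)). The Gaussian proposal of Eq. (3) is symmetric,
    so the Hastings factor Q(x_t; x') / Q(x'; x_t) in Eq. (4) equals 1."""
    x = x.copy()
    logp = log_posterior(x)
    for i in range(len(x)):
        proposal = x.copy()
        proposal[i] += rng.normal(0.0, sigma, size=2)  # perturb one control point
        logp_new = log_posterior(proposal)
        # accept with probability min(1, alpha), i.e. log u < log alpha
        if np.log(rng.random()) < logp_new - logp:
            x, logp = proposal, logp_new
    return x
```

Because only one control point moves per proposal, local improvements are accepted independently of the rest of the contour, which is exactly the behaviour the paper exploits.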
2.2 B-splines

B-splines, or basis function splines, are parametric curves, defined as follows:

Y(s) = B(s) C    (5)

where Y(s) is a two-dimensional vector consisting of the 2-D coordinates of a point on the curve, B(s) is a matrix of polynomial basis functions, and C is a vector of control points. In other words, a point along the curve, Y(s), is a weighted sum of the values of the basis functions B(s) for a particular value of s, where the weights are given by the values of C. The basis functions of B-splines have the characteristic that they are non-zero over a limited range of s. Thus a particular control point will only affect a portion of the curve. For regular B-splines of order 4 (the basis functions are 3rd-degree polynomials), a single control point will only affect Y(s) over a range of s of length 4. Conversely, for a particular s_m (with m such that s_m ∈ Support(x_i), where i indexes the component of x that has been altered), Y(s_m) is affected by at most 4 control points (fewer towards the ends). As mentioned before, a detailed contour can have a large number of control points, and thus high dimensionality, so it is common to use a subspace. In this case C can be written as C = W x + C_0, where W defines a linear subspace, C_0 is the template of control points, and x represents perturbations from the template in the subspace. In this work we examine unconstrained models, where no prior knowledge about the deformations or dynamics of the object is presumed. In this case W is the identity matrix, C_0 = 0, and x contains the actual coordinates of the control points. This allows the contour to deform in any way.

2.3 Metropolis updates in condensation

The new algorithm consists of two steps, a Metropolis step followed by a resampling step:

1. Iterate over control points:
   • For one control point at a time, draw a proposal particle by drawing a new control point x'_i from a 2-D Gaussian centered at the current control point x_{t,i}, Eq. (3), keeping all others unchanged.
   • Calculate the observation likelihood for the new control point, Eq. (2).
   • Calculate α (Eq. 4) and reject or accept the new particle.
2. Resample.
3. Get the next image in the video.

If the particle distribution at t − 1 reflects p(x_{t−1} | Z_1, ..., Z_{t−1}), the Metropolis updates will converge to p(x_t | Z_1, ..., Z_t) [16]. As mentioned above, the effect of altering the position of a control point is to change the shape of the contour locally, since the basis functions have limited support. Thus, when evaluating p(x' | x_{t−1}) p(Z_t | x') for a proposed particle, we only need to re-examine measurement lines and evaluate p(z_{m,t} | x'_{n,t}) for lines in the affected interval, and similarly for p(x'_{n,t} | x_{n,t−1}). This allows for an efficient implementation of the algorithm. The computation C_M required to update a single particle using Metropolis, compared to condensation, is C_M = o · i_t · C_C, where o is the order of the B-spline, i_t is the number of iterations, and C_C is the number of computations required to update a particle using condensation. Thus, in the case of fourth-order splines such as the ones we use, the increase in computation for a single particle is only four for a single iteration, and eight for two iterations. However, we have seen that far fewer particles are required.

Figure 2: The behavior of the algorithm with Metropolis updates is shown at frame 100 (t = 100) as a function of iterations and σ. The columns show, from left to right, 1, 2, 4 and 8 iterations, and the rows, from top to bottom, show σ = {1, 2, 3, 4}. The rejection ratio (i.e. the ratio of rejected proposal particles to the total number of proposed particles) is shown as a bar on the right side of each image.

3 Results

We tested our algorithm on the video sequence shown in Figure 1. The contour had 56 2-D control points, i.e. a state space of 112 dimensions. Such high dimensionality is required for the detailed contours needed to properly outline the fingers of the hand.
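The efficiency argument above rests on the local-support property of Section 2.2: moving one control point changes Y(s) on at most 4 spline segments. A small sketch of a uniform cubic B-spline evaluator (names and the segment-matrix formulation are our own, not the paper's code):

```python
import numpy as np

# basis matrix of one uniform cubic B-spline segment
M = np.array([[-1,  3, -3, 1],
              [ 3, -6,  3, 0],
              [-3,  0,  3, 0],
              [ 1,  4,  1, 0]]) / 6.0

def bspline_point(C, s):
    """Y(s) = B(s) C for control points C of shape (n, 2), with s in
    [0, n - 3]. Only 4 control points ever influence a given Y(s):
    the local-support property used above."""
    i = min(int(s), len(C) - 4)                   # index of the active segment
    u = s - i
    basis = np.array([u**3, u**2, u, 1.0]) @ M    # the 4 non-zero basis weights
    return basis @ C[i:i + 4]
```

Moving control point C_0 leaves Y(s) unchanged outside its support, which is why only the measurement lines in the affected interval need re-evaluation after a single-point proposal.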
The results presented are for relatively noise-free data, i.e. free from background clutter. This allows us to contrast the performance of Metropolis updates and standard condensation for the scenarios of interest, i.e. the learning of subspace models and contour refinement. Figure 1(b) shows the results for the Metropolis updates with 12 particles, 4 iterations and σ = 3. The figure shows every 24th frame from frame 1 to frame 211. The outline of the splayed fingers is tracked very successfully. Figure 1(a) shows every 24th frame for the condensation algorithm of equivalent complexity, using 200 particles and σ = 2. This value of σ gave the best results for 200 particles. As can be seen, the little finger is tracked moderately well. However, the other parts of the hand are very poorly tracked. For lower values of σ the contour distribution did not track the hand, but stayed in roughly the position of the initial contour distribution. For higher values of σ, the contour looped around in the general area of the fingers. Figure 2 shows the contour distribution for frame 100 and 12 particles, for different numbers of iterations and values of σ. When σ = 1 and 2 the contour distribution does not keep up with the deformation. For σ = 4 the contour is correctly tracked except in the case of a single iteration. The rejection ratio (i.e. the ratio of rejected proposal particles to the total number of proposed particles) is shown as a bar on the right side of each image in Figure 2. Notice that the general trend is that the rejection ratio increases as σ increases, and decreases as the number of iterations is increased (due to a smaller σ at each step). Intuitively, it is not surprising that our new algorithm outperforms standard condensation. In the case of condensation, Gaussian noise is added to each control point at each time step.
One particle may be correctly positioned for the little finger and poorly positioned for the forefinger, whereas another particle may be well positioned around the forefinger and poorly positioned around the little finger. In order to track the deformation of the hand, some particles are required that track both the little finger and the forefinger (and all other parts too). In contrast, the Metropolis updates are likely to reject particles that are locally worse than the current particle, but accept local improvements. It should be noted that for lower-dimensional problems, the increase in tracking performance is not as dramatic. E.g., in the case of tracking a rotating head using a 12-control-point B-spline, the two algorithms performed comparably.

4 Future work and conclusion

We are currently examining the effects of background clutter on the performance of the algorithm. We are also investigating other sequences and groupings of control points for generating proposal particles, and ways of using subspace models in combination with Metropolis updates. In this paper we showed how Metropolis updates can be used to keep highly flexible active contours on track, and an efficient implementation strategy was presented. For high-dimensional problems, which are common for detailed shapes, the new algorithm produces dramatically better results than standard condensation.

Acknowledgments

We thank Andrew Blake and Dale Schuurmans for helpful discussions.

References

[1] J. Y. A. Wang and E. H. Adelson. "Representing moving images with layers." IEEE Transactions on Image Processing, Special Issue: Image Sequence Compression, vol. 3, no. 5, 1994, pp. 625-638.
[2] Y. Weiss. "Smoothness in layers: Motion segmentation using nonparametric mixture estimation." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1997.
[3] A. Jepson and M. J. Black. "Mixture models for optical flow computation."
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
[4] W. T. Freeman and P. A. Viola. "Bayesian model of surface perception." Advances in Neural Information Processing Systems 10, MIT Press, 1998.
[5] W. Freeman and E. Pasztor. "Learning low-level vision." Proceedings of the International Conference on Computer Vision, 1999, pp. 1182-1189.
[6] N. R. Howe, M. E. Leventon and W. T. Freeman. "Bayesian reconstruction of 3D human motion from single-camera video." To appear in: Advances in Neural Information Processing Systems 12, edited by S. A. Solla, T. K. Leen and K.-R. Müller, 2000. TR9937.
[7] G. E. Hinton, Z. Ghahramani and Y. W. Teh. "Learning to parse images." In S. A. Solla, T. K. Leen and K.-R. Müller (eds.), Advances in Neural Information Processing Systems 12, MIT Press, 2000.
[8] D. Terzopoulos and R. Szeliski. "Tracking with Kalman snakes." In A. Blake and A. Yuille (eds.), Active Vision, 3-20. MIT Press, Cambridge, MA, 1992.
[9] N. Papanikolopoulos, P. Khosla and T. Kanade. "Vision and control techniques for robotic visual tracking." In Proc. IEEE Int. Conf. Robotics and Automation 1, 1991, pp. 851-856.
[10] A. Blake and M. Isard. Active Contours. Springer-Verlag, 1998. ISBN 3540762175.
[11] J. MacCormick and A. Blake. "A probabilistic exclusion principle for tracking multiple objects." Proc. 7th IEEE Int. Conf. Computer Vision, 1999.
[12] M. Isard and A. Blake. "ICONDENSATION: Unifying low-level and high-level tracking in a stochastic framework." Proc. 5th European Conf. Computer Vision, vol. 1, 1998, pp. 893-908.
[13] J. Sullivan, A. Blake, M. Isard and J. MacCormick. "Object localization by Bayesian correlation." Proc. Int. Conf. Computer Vision, 1999.
[14] T. F. Cootes, G. H. Edwards and C. J. Taylor. "Active appearance models." Proceedings of the European Conference on Computer Vision, vol. 2, 1998, pp. 484-498.
[15] I. Matthews, J. A. Bangham, R. Harvey and S. Cox. Proc. Auditory-Visual Speech Processing (AVSP), 1998, pp. 73-78.
[16] R. M. Neal. "Probabilistic inference using Markov chain Monte Carlo methods." Technical Report CRG-TR-93-1, University of Toronto, 1993.
[17] D. J. C. MacKay. "Introduction to Monte Carlo methods." In M. I. Jordan (ed.), Learning in Graphical Models, MIT Press, Cambridge, MA, 1999.
Data clustering by Markovian relaxation and the Information Bottleneck Method

Naftali Tishby and Noam Slonim
School of Computer Science and Engineering and Center for Neural Computation*
The Hebrew University, Jerusalem, 91904 Israel
email: {tishby,noamm}@cs.huji.ac.il

Abstract

We introduce a new, non-parametric and principled, distance-based clustering method. This method combines a pairwise-distance approach with a vector-quantization method that provides a meaningful interpretation of the resulting clusters. The idea is based on turning the distance matrix into a Markov process and then examining the decay of mutual information during the relaxation of this process. The clusters emerge as quasi-stable structures during this relaxation, and are then extracted using the information bottleneck method. These clusters capture the information about the initial point of the relaxation in the most effective way. The method can cluster data with no geometric or other bias and makes no assumption about the underlying distribution.

1 Introduction

Data clustering is one of the most fundamental pattern recognition problems, with numerous algorithms and applications. Yet the problem itself is ill-defined: the goal is to find a "reasonable" partition of data points into classes or clusters. What is meant by "reasonable" depends on the application, the representation of the data, and the assumptions about the origins of the data points, among other things. One important class of clustering methods is for cases where the data is given as a matrix of pairwise distances or (dis)similarity measures. Often these distances come from empirical measurement or some complex process, and there is no direct access to, or even precise definition of, the distance function. In many cases this distance does not form a metric, or it may even be non-symmetric.
Such data does not necessarily come as a sample of some meaningful distribution, and even the issues of generalization and sample-to-sample fluctuations are not well defined. Algorithms that use only the pairwise distances, without explicit use of the distance measure itself, employ statistical mechanics analogies [3] or collective graph-theoretical properties [6], etc. The points are then grouped based on some global criterion, such as connected components, small cuts, or minimum alignment energy. Such algorithms are sometimes computationally inefficient, and in most cases it is difficult to interpret the resulting clusters. I.e., it is hard to determine a common property of all the points in one cluster, other than that the clusters "look reasonable".

*Work supported in part by the US-Israel Binational Science Foundation (BSF) and by the Human Frontier Science Project (HFSP). NS is supported by the Levi Eshkol grant.

A second class of clustering methods is represented by the generalized vector quantization (VQ) algorithm. Here one fits a model (e.g. Gaussian distributions) to the points in each cluster, such that an average (known) distortion between the data points and their corresponding representatives is minimized. This type of algorithm may rely on theoretical frameworks, such as rate distortion theory, and provides much better interpretation of the resulting clusters. VQ-type algorithms can also be more computationally efficient, since they require the calculation of distances, or distortion, between the data and the centroid models only, not between every pair of data points. On the other hand, they require knowledge of the distortion function and thus make specific assumptions about the underlying structure or model of the data. In this paper we present a new, information theoretic combination of pairwise clustering with a meaningful and intuitive interpretation of the resulting clusters.
In addition, our algorithm provides a clear and objective figure of merit for the clusters, without making any assumption about the origin or structure of the data points.

2 Pairwise distances and Markovian relaxation

The first step of our algorithm is to turn the pairwise distance matrix into a Markov process, through the following simple intuition. Assign a state of a Markov chain to each of the data points, and transition probabilities between the states/points as a function of their pairwise distances. Thus the data can be considered as a directed graph with the points as nodes and the pairwise distances, which need not be symmetric or form a metric, on the arcs of the graph. Distances are normally considered additive, i.e., the length of a trajectory on the graph is the sum of the arc lengths. Probabilities, on the other hand, are multiplicative for independent events, so if we want the probability of a (random) trajectory on the graph to be naturally related to its length, the transition probabilities between points should be exponential in their distance. Denoting by d(x_i, x_j) the pairwise distance between the points x_i and x_j,¹ the transition probability that our Markov chain moves from the point x_j at time t to the point x_i at time t + 1, P_{i,j} ≡ p(x_i(t+1) | x_j(t)), is chosen as

p(x_i(t+1) | x_j(t)) ∝ exp(−λ d(x_i, x_j)),    (1)

where λ⁻¹ is a length scaling factor that equals the mean pairwise distance of the k nearest neighbors of the point x_i. The details of this rescaling are not so important for the final results, and a similar exponentiation of the distances, without our probabilistic interpretation, was performed in other clustering works (see e.g. [3, 6]). A proper normalization of each row is required to turn this matrix into a stochastic transition matrix. Given this transition matrix, one can imagine a random walk starting at every point on the graph.
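A minimal sketch of building the row-stochastic transition matrix of Eq. (1) from a pairwise distance matrix. The function name, the choice to store the conditional distribution of each start point in a row, and the assumption of a zero diagonal (d(x, x) = 0) are ours:

```python
import numpy as np

def transition_matrix(D, k=5):
    """Turn a pairwise distance matrix D (need not be symmetric or a metric,
    zero diagonal assumed) into a row-stochastic transition matrix: row j
    holds p(x(t+1) | x_j(t)) proportional to exp(-lambda_j * d), Eq. (1)."""
    # length scale 1/lambda_j = mean distance of the k nearest neighbours
    knn = np.sort(D, axis=1)[:, 1:k + 1]      # drop the zero self-distance
    lam = 1.0 / knn.mean(axis=1)
    P = np.exp(-lam[:, None] * D)
    return P / P.sum(axis=1, keepdims=True)   # normalise each row
```

Nearby points then receive most of the one-step transition mass, so short random walks stay inside tight substructures of the graph.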
Specifically, the probability distribution of the positions of a random walk starting at x_j, after t time steps, is given by the j-th row of the t-th iteration of the 1-step transition matrix. Denoting by P^t the t-step transition matrix, P^t = (P)^t is indeed the t-th power of the 1-step transition probability matrix. The probability of a random walk starting at x_j at time 0 to be at x_i at time t is thus

p(x_i(t) | x_j(0)) = P^t_{i,j}.    (2)

¹Henceforth we take the number of data points to be n; the point indices run implicitly from 1 to n unless stated otherwise.

If we assume that all the given pairwise distances are finite, we obtain in this way an ergodic Markov process with a single stationary distribution, denoted by π. This distribution is a right-eigenvector of the t-step transition matrix (for every t), since π_i = Σ_j P_{i,j} π_j. It is also the limit distribution of p(x_i(t) | x_j(0)) for all j, i.e., lim_{t→∞} p(x_i(t) | x_j(0)) = π_i. During the dynamics of the Markov process any initial state distribution relaxes to this final stationary distribution, and the information about the initial point of a random walk is completely lost.

Figure 1: On the left is shown an example of data, consisting of 150 points in 2D. In the middle, we plot the rate of information loss, dI(t)/dt, during the relaxation. Notice that the algorithm has no prior information about circles or ellipses. The rate of information loss is slow when the "random walks" stabilize on some substructures of the data: our proposed clusters. On the right we plot the rate of information loss for the colon cancer data, and the accuracy of the obtained clusters for different relaxation times, with respect to the original classes.
2.1 Relaxation of the mutual information

The natural way to quantify the information loss during this relaxation process is by the mutual information between the initial point variable, X(0) = {x_j(0)}, and the point of the random walk at time t, X(t) = {x_i(t)}. The mutual information between the random variables X and Y is the symmetric functional of their joint distribution,

I(X;Y) = Σ_{x∈X, y∈Y} p(x,y) log [ p(x,y) / (p(x) p(y)) ] = Σ_{x∈X, y∈Y} p(x) p(y|x) log [ p(y|x) / p(y) ].    (3)

For the Markov relaxation this mutual information is given by

I(t) ≡ I(X(0); X(t)) = Σ_j p_j Σ_i P^t_{i,j} log ( P^t_{i,j} / p^t_i ) = Σ_j p_j D_KL[ P^t_j || p^t ],    (4)

where p_j is the prior probability of the states, and p^t_i = Σ_j P^t_{i,j} p_j is the unconditioned probability of x_i at time t. D_KL is the Kullback-Leibler divergence [4], defined as D_KL[p || q] ≡ Σ_y p(y) log [ p(y) / q(y) ], which is the information theoretic measure of similarity of distributions. Since all the conditional distributions P^t_j relax to π, this divergence goes to zero as t → ∞. While it is clear that the information about the initial point, I(t), decays monotonically (exponentially, asymptotically) to zero, the rate of this decay at finite t conveys much information about the structure of the data points. Consider, as a simple example, the planar data points shown in figure 1, with d(x_i, x_j) = (x_i − x_j)² + (y_i − y_j)². As can be seen, the rate of information loss about the initial point of the random walk, dI(t)/dt, while always positive, slows down at specific times during the relaxation. These relaxation locations indicate the formation of quasi-stable structures on the graph. At these relaxation times the transition probability matrix is approximately a projection matrix (satisfying P^{2t} ≈ P^t), where the almost-invariant subgraphs correspond to the clusters. These approximately stationary transitions correspond to slow information loss, which can be identified by derivatives of the information loss at time t.
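Eq. (4) is straightforward to evaluate numerically; plateaus in the resulting I(t) curve are the quasi-stable times the text describes. A sketch (function names ours; rows of `Pt` hold the conditional distributions, matching the row convention above):

```python
import numpy as np

def mutual_information(Pt, p0):
    """I(X(0); X(t)) of Eq. (4): row j of Pt is p(x(t) | x_j(0)),
    and p0 is the prior over starting points."""
    p_t = p0 @ Pt                              # unconditioned p(x_i(t))
    ratio = np.where(Pt > 0.0, Pt / p_t, 1.0)  # skip zero terms in the sum
    return float(np.sum(p0[:, None] * Pt * np.log(ratio)))

def information_decay(P, T):
    """I(t) for t = 1..T under a uniform prior; by the data processing
    inequality this curve is non-increasing, and its plateaus mark
    quasi-stable cluster structure."""
    p0 = np.full(len(P), 1.0 / len(P))
    Pt = np.eye(len(P))
    curve = []
    for _ in range(T):
        Pt = Pt @ P
        curve.append(mutual_information(Pt, p0))
    return curve
```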
Another way to see this phenomenon is by observing the rows of P^t, which are the conditional distributions p(x_i(t) | x_j(0)). The rows that are almost indistinguishable following the partial relaxation correspond to points x_j with similar conditional distributions over the rest of the graph at time t. Such points should belong to the same structure, or cluster, on the graph. This can be seen directly by observing the matrix P^t during the relaxation, as shown in figure 2.

Figure 2: The relaxation process as seen directly on the matrix P^t at several times (t = 2⁰, 2³, 2⁸, 2¹⁰, among others), for the example data of figure 1. The darker colors correspond to higher probability density in every row. Since the points are ordered by the 3 ellipses, 50 in each ellipse, it is easy to see the clear emergence of 3 blocks of conditional distributions (the rows of the matrix) during the relaxation process. For very large t there is complete relaxation and all the rows equal the stationary distribution of the process. The best correlation between the resulting clusters and the original ellipses (i.e., the highest "accuracy" value) is obtained for intermediate times, where the underlying structure emerges.

The quasi-stable structures on the graph during the relaxation process are precisely the desired meaningful clusters. The remaining question pertains to the correct way to group the initial points into clusters that capture the information about the position on the graph after t steps. In other words, can we replace the initial point with an initial cluster that enables prediction of the location on the graph at time t with similar accuracy? The answer to this question is naturally provided via the recently introduced information bottleneck method [12, 11].
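The block structure of figure 2 can be exposed with a deliberately crude stand-in for the principled extraction: group points whose rows of P^t are nearly identical in L1 distance. This sketch is our own illustration, not the paper's method (the paper uses the information bottleneck for this step):

```python
import numpy as np

def row_groups(Pt, tol=0.1):
    """Greedily group points whose rows of P^t, i.e. the conditional
    distributions p(x(t) | x_j(0)), are within L1 distance tol of each
    other; nearly identical rows land in the same group."""
    n = len(Pt)
    labels = -np.ones(n, dtype=int)
    next_label = 0
    for j in range(n):
        if labels[j] >= 0:
            continue
        labels[j] = next_label
        for k in range(j + 1, n):
            if labels[k] < 0 and np.abs(Pt[j] - Pt[k]).sum() < tol:
                labels[k] = next_label
        next_label += 1
    return labels
```

On a partially relaxed two-block chain this recovers the blocks; unlike the bottleneck, though, it has no objective function and no control over the number of clusters.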
3 Clusters that preserve information

The problem of self-organization of the members of a set X based on the similarity of the conditional distributions of the members of another set Y, {p(y|x)}, was first introduced in [9] and was termed "distributional clustering". This question was recently shown in [12] to be a specific case of a much more fundamental problem: what are the features of the variable X that are relevant to the prediction of another, relevance, variable Y? This general problem was shown to have a natural information theoretic formulation: find a compressed representation of the variable X, denoted X̃, such that the mutual information between X̃ and Y, I(X̃;Y), is as high as possible, under a constraint on the mutual information between X and X̃, I(X;X̃). Surprisingly, this variational principle yields an exact formal solution for the conditional distributions p(y|x̃), p(x̃|x), and p(x̃). This constrained information optimization problem was called in [12] the information bottleneck method. The original approach to the solution of the resulting equations, used already in [9], was based on an analogy with the "deterministic annealing" (DA) approach to clustering (see [10, 8]). This is a top-down hierarchical algorithm that starts from a single cluster and undergoes a cascade of cluster splits, determined stochastically (as phase transitions), into a "soft" (fuzzy) tree of clusters. We proposed an alternative approach, based on greedy bottom-up merging, the "agglomerative information bottleneck" (AIB, see [11]), which is simpler and works better than the DA approach in many situations. This algorithm was applied also in the examples given here.

3.1 The information bottleneck method

Given any two non-independent random variables, X and Y, the objective of the information bottleneck method is to extract a compact representation of the variable X, denoted here by X̃, with minimal loss of mutual information about another, relevance, variable Y.
More specifically, we want to find a (possibly stochastic) map, p(x̃|x), that maximizes the mutual information with the relevance variable, I(X̃;Y), under a constraint on the (lossy) coding length of X via X̃, I(X;X̃). In other words, we want to find an efficient representation X̃ of the variable X, such that the predictions of Y from X through X̃ are as close as possible to the direct prediction of Y from X. As shown in [12], by introducing a positive Lagrange multiplier β to enforce the mutual information constraint, the problem amounts to maximization of the Lagrangian

L[p(x̃|x)] = I(X̃;Y) − β⁻¹ I(X;X̃),    (5)

with respect to p(x̃|x), subject to the Markov condition X̃ → X → Y and normalization. This optimization yields directly the following (self-consistent) equations for the map p(x̃|x), and for p(y|x̃) and p(x̃):

p(x̃|x) = [ p(x̃) / Z(β,x) ] exp( −β D_KL[ p(y|x) || p(y|x̃) ] )
p(y|x̃) = Σ_x p(y|x) p(x|x̃)    (6)
p(x̃) = Σ_x p(x̃|x) p(x)

where Z(β,x) is a normalization function. The familiar Kullback-Leibler divergence, D_KL[p(y|x) || p(y|x̃)], emerges here from the variational principle. These equations can be solved by iterations that are proved to converge for any finite value of β (see [12]). The Lagrange multiplier β has the natural interpretation of an inverse temperature, which suggests deterministic annealing to explore the hierarchy of solutions in X̃. The variational principle, Eq. (5), also determines the shape of the annealing process, since by changing β the mutual informations I_X ≡ I(X;X̃) and I_Y ≡ I(X̃;Y) vary such that

δI_Y = β⁻¹ δI_X.    (7)

Thus the optimal curve, which is analogous to the rate distortion function in information theory [4], follows a strictly concave curve in the (I_X, I_Y) plane. The information bottleneck algorithms provide an information theoretic mechanism for identifying the quasi-stable structures on the graph that form our meaningful clusters.
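A sketch of iterating the self-consistent equations (6) at a fixed β. This is only an illustration of Eq. (6): the function names are ours, we assume strictly positive p(y|x) so every KL term is finite, and the paper's experiments actually use the agglomerative variant [11] rather than this fixed-β iteration.

```python
import numpy as np

def ib_iterate(p_x, p_y_given_x, n_clusters, beta, n_iter=100, seed=0):
    """Alternate the three updates of Eq. (6) from a random soft assignment.
    p_y_given_x[i] is p(y | x_i), assumed strictly positive."""
    rng = np.random.default_rng(seed)
    q = rng.random((len(p_x), n_clusters))
    q /= q.sum(axis=1, keepdims=True)                 # q[x, c] = p(xhat=c | x)
    for _ in range(n_iter):
        joint = (q * p_x[:, None]).T @ p_y_given_x    # p(xhat, y)
        p_y_xhat = joint / joint.sum(axis=1, keepdims=True)  # p(y | xhat)
        p_xhat = q.T @ p_x                            # p(xhat)
        # D_KL( p(y|x) || p(y|xhat) ) for every pair (x, xhat)
        dkl = np.sum(p_y_given_x[:, None, :]
                     * np.log(p_y_given_x[:, None, :] / p_y_xhat[None, :, :]),
                     axis=2)
        q = p_xhat[None, :] * np.exp(-beta * dkl)
        q = np.maximum(q, 1e-300)                     # guard against underflow
        q /= q.sum(axis=1, keepdims=True)             # normaliser Z(beta, x)
    return q, p_y_xhat
```

Small β gives one diffuse cluster; annealing β upward sharpens the assignments, tracing out the (I_X, I_Y) curve of Eq. (7).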
In our clustering application the variables are taken to be X = X(0) and Y = X(t) during the relaxation process.

4 Discussion

When varying the temperature T = β⁻¹, the information bottleneck algorithms explore the structure of the data at various resolutions. For very low T, the resolution is high and each point appears in a cluster of its own. For very high T, all points are grouped into one cluster. This process resembles the appearance of structure during the relaxation. However, there is an important difference between these two mechanisms. In the bottleneck algorithms, clusters are formed by isotropically blurring the conditional distributions that correspond to each data point. Points are clustered together when these distributions become sufficiently similar. This process is not sensitive to the global topology of the graph representing the data. This can be understood by looking at the example of figure 1. If we consider two diametrically opposed points on one of the ellipses, they will be clustered together only when their blurred distributions overlap. In this example, unfortunately, this happens only when the three ellipses are completely indistinguishable. A direct application of the bottleneck to the original transition matrix is therefore bound to fail in this case. In the relaxation process, on the other hand, the distributions are merged through the Markovian dynamics on the graph. In our specific example, two opposing points become similar when they reach the other states with similar probabilities following partial relaxation. This process better preserves the fine structure of the underlying graph, and thus enables finer partitioning of the data. It is thus necessary to combine the two processes. In the first stage, one relaxes the Markov process to a quasi-stable point in terms of the rate of information loss. At this point some natural underlying structure emerges, and is reflected in the partially relaxed transition matrix, P^t.
In the second stage we use the information bottleneck algorithm to identify the information preserving clusters. 5 More examples We applied our method to several 'standard' clustering problems and obtained very good results. The first one was the famous "iris data" [7], on which we easily obtained just 5 misclassified points. A more interesting application was obtained on well known gene expression data, the colon cancer data set provided by Alon et al. [1]. This data set consists of 62 tissue samples, out of which 22 came from tumors and the rest from "normal" biopsies of colon parts of the same patients. Gene expression levels were given for 2000 genes (oligonucleotides), resulting in a 62 × 2000 matrix. As done in other studies of this data, we calculated the Pearson correlation, K_p(u, v) (see, e.g., [5]), between the u and v expression rows and then transformed this measure to distances through the simple transformation d(u, v) = (1 − K_p(u, v))/(1 + K_p(u, v)). In figure 1 (right panel) we present the rate of information loss for this data and the agreement of the obtained clusters with the original tissue classes. The emergence of two clusters at the times of "slow" information loss is clearly seen for t = 2⁴ to 2¹² iterations. The information bottleneck algorithm, when applied at these relaxation times, discovers the original tissue classes, up to 6 or 7 "misclassified" tissues (see figure). For comparison, seven more sophisticated supervised techniques were applied to this data in [2]. Six of them had 12 misclassified points or more, and the best had 7 misclassified tissues. References [1] U. Alon, N. Barkai, D.A. Notterman, K. Gish, D. Mack, and A.J. Levine. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc. Nat. Acad. Sci. USA, 96:6745-6750, 1999. [2] A. Ben-Dor, L. Bruhn, N. Friedman, I. Nachman, M. Schummer, and Z. Yakhini.
Tissue classification with gene expression profiles. Journal of Computational Biology, 2000, to appear. [3] M. Blatt, S. Wiseman, and E. Domany. Data clustering using a model granular magnet. Neural Computation 9, 1805-1842, 1997. [4] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, New York, 1991. [5] M. Eisen, P. Spellman, P. Brown, and D. Botstein. Cluster analysis and display of genome wide expression patterns. Proc. Nat. Acad. Sci. USA 95, 14863-14868, 1998. [6] Y. Gdalyahu, D. Weinshall, and M. Werman. A randomized algorithm for pairwise clustering. In Proceedings of NIPS-11, 424-430, 1998. [7] R.A. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7, Part II, 179-188, 1936. [8] T. Hofmann and J. M. Buhmann. Pairwise data clustering by deterministic annealing. IEEE Transactions on PAMI, 19(1):1-14, 1997. [9] F.C. Pereira, N. Tishby, and L. Lee. Distributional clustering of English words. In 30th Annual Meeting of the Association for Computational Linguistics, Columbus, Ohio, pages 183-190, 1993. [10] K. Rose. Deterministic annealing for clustering, compression, classification, regression, and related optimization problems. Proceedings of the IEEE, 86(11):2210-2239, 1998. [11] N. Slonim and N. Tishby. Agglomerative information bottleneck. In Proceedings of NIPS-12, 1999. [12] N. Tishby, F.C. Pereira, and W. Bialek. The information bottleneck method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control and Computing, 368-377, 1999.
Balancing Multiple Sources of Reward in Reinforcement Learning Christian R. Shelton Artificial Intelligence Lab Massachusetts Institute of Technology Cambridge, MA 02139 cshelton@ai.mit.edu Abstract For many problems which would be natural for reinforcement learning, the reward signal is not a single scalar value but has multiple scalar components. Examples of such problems include agents with multiple goals and agents with multiple users. Creating a single reward value by combining the multiple components can throw away vital information and can lead to incorrect solutions. We describe the multiple reward source problem and discuss the problems with applying traditional reinforcement learning. We then present a new algorithm for finding a solution and results on simulated environments. 1 Introduction In the traditional reinforcement learning framework, the learning agent is given a single scalar value of reward at each time step. The goal is for the agent to optimize the sum of these rewards over time (the return). For many applications, there is more information available. Consider the case of a home entertainment system designed to sense which residents are currently in the room and automatically select a television program to suit their tastes. We might construct the reward signal to be the total number of people paying attention to the system. However, a reward signal of 2 ignores important information about which two users are watching. The users of the system change as people leave and enter the room. We could, in theory, learn the relationship among the users present, who is watching, and the reward. In general, it is better to use the domain knowledge we have instead of requiring the system to learn it. We know which users are contributing to the reward and that only present users can contribute. In other cases, the multiple sources aren't users, but goals.
For elevator scheduling we might be trading off people serviced per minute against average waiting time. For financial portfolio managing, we might be weighing profit against risk. In these cases, we may wish to change the weighting over time. In order to keep from having to relearn the solution from scratch each time the weighting is changed, we need to keep track of which rewards to attribute to which goals. There is a separate difficulty if the rewards are not designed functions of the state but rather are given by other agents or people in the environment. Consider the case of the entertainment system above but where every resident has a dial by which they can give the system feedback or reward. The rewards are incomparable. One user may decide to reward the system with values twice as large as those of another, which should not result in that user having twice the control over the entertainment. This isn't limited to scalings but also includes any other monotonic transforms of the returns. If the users of the system know they are training it, they will employ all kinds of reward strategies to try to steer the system to the desired behavior [2]. By keeping track of the sources of the rewards, we will derive an algorithm to overcome these difficulties. 1.1 Related Work The work presented here is related to recent work on multiagent reinforcement learning [1, 4, 5, 7] in that multiple reward signals are present and game theory provides a solution. This work is different in that it attacks a simpler problem where the computation is consolidated on a single agent. Work in multiple goals (see [3, 8] as examples) is also related but assumes either that the returns of the goals are to be linearly combined for an overall value function or that only one goal is to be solved at a time. 1.2 Problem Setup We will be working with partially observable environments with discrete actions and discrete observations.
We make no assumptions about the world model and thus do not use belief states. x(t) and a(t) are the observation and action, respectively, at time t. We consider only reactive policies (although the observations could be expanded to include history). π(x, a) is the policy, or the probability the agent will take action a when observing x. At each time step, the agent receives a set of rewards (one for each source in the environment); r_s(t) is the reward at time t from source s. We use the average reward formulation, and so R_s^π = lim_{n→∞} (1/n) E[r_s(1) + r_s(2) + ... + r_s(n) | π] is the expected return from source s for following policy π. It is this return that we want to maximize for each source. We will also assume that the algorithm knows the set of sources present at each time step. Sources which are not present provide a constant reward, regardless of the state or action, which we will assume to be zero. All sums over sources will be assumed to be taken over only the present sources. The goal is to produce an algorithm that will produce a policy based on previous experience and the sources present. The agent's experience will take the form of prior interactions with the world. Each experience is a sequence of observation, action, and reward triplets for a particular run of a particular policy. 2 Balancing Multiple Rewards 2.1 Policy Votes If rewards are not directly comparable, we need to find a property of the sources which is comparable and a metric to optimize. We begin by noting that we want to limit the amount of control any given source has over the behavior of the agent. To that end, we construct the policy as the average of a set of votes, one for each source present. The votes for a source must sum to 1 and must all be non-negative (thus giving each source an equal "say" in the agent's policy). We will first consider restricting the rewards from a given source to only affect the votes for that source.
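As a concrete illustration of the average-reward return R_s^π, a finite-sample estimate from one recorded run might look like the following sketch (the array layout is an assumption for illustration):

```python
import numpy as np

def empirical_returns(rewards, present):
    """Finite-sample estimate of the average-reward return R_s for each
    source over one run.  Absent sources contribute a constant 0, as in
    the text.

    rewards: (T, S) array with rewards[t, s] = r_s(t)
    present: (T, S) boolean array marking which sources were present
    """
    return np.where(present, rewards, 0.0).mean(axis=0)
```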
The form for the policy is therefore π(x, a) = Σ_s α_s(x) v_s(x, a) / Σ_s α_s(x), (1) where for each present source s, Σ_x α_s(x) = 1, α_s(x) ≥ 0 for all x, Σ_a v_s(x, a) = 1 for all x, and v_s(x, a) ≥ 0 for all x and a. We have broken apart the vote from a source into two parts, α and v. α_s(x) is how much effort source s is putting into affecting the policy for observation x. v_s(x, a) is the vote by source s for the policy for observation x. Mathematically this is the same as constructing a single vote (v′_s(x, a) = α_s(x) v_s(x, a)), but we find α and v to be more interpretable. We have constrained the total effort and vote any one source can apply. Unfortunately, these votes are not quite the correct parameters for our policy. They are not invariant to the other sources present. To illustrate this, consider the example of a single state with two actions, two sources, and a learning agent with the voting method from above. If s₁ prefers only a₁ and s₂ likes an equal mix of a₁ and a₂, the agent will learn a vote of (1, 0) for s₁, and s₂ can reward the agent to cause it to learn a vote of (0, 1) for s₂, resulting in a policy of (0.5, 0.5). Whether this is the correct final policy depends on the problem definition. However, the real problem arises when we consider what happens if s₁ is removed. The policy reverts to (0, 1), which is far from s₂'s (the only present source's) desired (0.5, 0.5). Clearly, the learned votes for s₂ are meaningless when s₁ is not present. Thus, while the voting scheme does limit the control each present source has over the agent, it does not provide a description of the source's preferences which would allow for the removal or addition (or reweighting) of sources. 2.2 Returns as Preferences While rewards (or returns) are not comparable across sources, they are comparable within a source. In particular, we know that if R_s^{π₁} > R_s^{π₂}, then source s prefers policy π₁ to policy π₂.
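The vote-averaging policy can be sketched as follows, using Eq. (1) as reconstructed here (the equation itself did not survive extraction, so the exact normalization is an assumption consistent with the later derivation):

```python
import numpy as np

def policy_from_votes(alpha, v):
    """Combine the present sources' votes into a policy:
    pi(x, a) = sum_s alpha_s(x) v_s(x, a) / sum_s alpha_s(x).

    alpha: (S, X) per-source effort over observations (each row sums to 1)
    v:     (S, X, A) per-source action votes (each (s, x) slice sums to 1)
    """
    num = (alpha[:, :, None] * v).sum(axis=0)   # sum_s alpha_s(x) v_s(x,a)
    return num / alpha.sum(axis=0)[:, None]     # divide by sum_s alpha_s(x)
```

The two-source example in the text falls out directly: equal effort and opposing deterministic votes yield the (0.5, 0.5) mixed policy.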
We do not know how to weigh that preference against a different source's preference, so an explicit tradeoff is still impossible, but we can limit (using the voting scheme of equation 1) how much one source's preference can override another source's preference. We allow a source's preference for a change to prevail in as much as its votes are sufficient to affect the change in the presence of the other sources' votes. We have a type of general-sum game (letting the sources be the players of game theory jargon). The value to source s′ of the set of all sources' votes is R_{s′}^π, where π is the function of the votes defined in equation 1. Each source s′ would like to set its particular votes, α_{s′}(x) and v_{s′}(x, a), to maximize its value (or return). Our algorithm will set each source's vote in this way, thus ensuring that no source could do better by "lying" about its true reward function. In game theory, a "solution" to such a game is called a Nash Equilibrium [6], a point at which each player (source) is playing (voting) its best response to the other players. At a Nash Equilibrium, no single player can change its play and achieve a gain. Because the votes are real-valued, we are looking for the equilibrium of a continuous game. We will derive a fictitious play algorithm to find an equilibrium for this game. 3 Multiple Reward Source Algorithm 3.1 Return Parameterization In order to apply the ideas of the previous section, we must find a method for finding a Nash Equilibrium. To do that, we will pick a parametric form for R̂_s (the estimate of the return): linear in the KL-divergence between a target vote and π. Letting a_s, b_s, β_s(x), and p_s(x, a) be the parameters of R̂_s, R̂_s = −a_s Σ_x β_s(x) Σ_a p_s(x, a) log[p_s(x, a)/π(x, a)] + b_s, (2) where a_s ≥ 0, β_s(x) ≥ 0, Σ_x β_s(x) = 1, p_s(x, a) ≥ 0, and Σ_a p_s(x, a) = 1.
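A direct transcription of the return parameterization of Eq. (2), with the sign convention reconstructed here (the estimate is maximized, attaining b_s, exactly when π matches the preferred policy p_s):

```python
import numpy as np

def estimated_return(a_s, b_s, beta_s, p_s, pi, eps=1e-12):
    """Sketch of Eq. (2): b_s minus a_s times a beta-weighted KL
    divergence from the target policy p_s to the actual policy pi.

    beta_s: (X,) importances; p_s, pi: (X, A) row-stochastic arrays.
    """
    kl = (p_s * (np.log(p_s + eps) - np.log(pi + eps))).sum(axis=1)
    return b_s - a_s * (beta_s * kl).sum()
```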
Just as α_s(x) was the amount of vote source s was putting towards the policy for observation x, β_s(x) is the importance for source s of the policy for observation x. And, while v_s(x, a) was the policy vote for observation x for source s, p_s(x, a) is the preferred policy for observation x for source s. The constants a_s and b_s allow for scaling and translation of the return. If we let p′_s(x, a) = a_s β_s(x) p_s(x, a), then, given experiences of different policies and their empirical returns, we can estimate p′_s(x, a) using linear least-squares. Imposing the constraints just involves finding the normal least-squares fit with the constraint that all p′_s(x, a) be non-negative. From p′_s(x, a) we can calculate a_s = Σ_{x,a} p′_s(x, a), β_s(x) = (1/a_s) Σ_a p′_s(x, a), and p_s(x, a) = p′_s(x, a)/Σ_a p′_s(x, a). We now have a method for solving for R̂_s given experience. We now need to find a way to compute the agent's policy. 3.2 Best Response Algorithm To produce an algorithm for finding a Nash Equilibrium, let us first start by deriving an algorithm for finding the best response for source s to a set of votes. We need to find the set of α_s(x) and v_s(x, a) that satisfy the constraints on the votes and maximize equation 2, which is the same as minimizing Σ_x β_s(x) Σ_a p_s(x, a) log[p_s(x, a) Σ_{s′} α_{s′}(x) / Σ_{s′} α_{s′}(x) v_{s′}(x, a)] (3) over α_s(x) and v_s(x, a) for the given s, because the other terms depend on neither α_s(x) nor v_s(x, a). To minimize equation 3, let's first fix the α-values and optimize v_s(x, a). We will ignore the non-negative constraints on v_s(x, a) and just impose the constraint that Σ_a v_s(x, a) = 1. The solution, whose derivation is simple and omitted due to space, is v_s(x, a) = [p_s(x, a) Σ_{s′} α_{s′}(x) − Σ_{s′≠s} α_{s′}(x) v_{s′}(x, a)] / α_s(x). (4) We impose the non-negative constraints by setting to zero any v_s(x, a) which are negative and renormalizing. Unfortunately, we have not been able to find such a nice solution for α_s(x).
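The decomposition of the fitted p′_s(x, a) back into (a_s, β_s, p_s) is a few lines; this sketch assumes the non-negative least-squares fit has already produced the array:

```python
import numpy as np

def decompose(p_prime):
    """Recover (a_s, beta_s, p_s) from the fitted non-negative array
    p'_s(x, a) = a_s * beta_s(x) * p_s(x, a), as described in the text.
    p_prime: (X, A), all entries >= 0 with positive row sums."""
    a_s = p_prime.sum()                                 # a_s = sum_{x,a} p'
    beta = p_prime.sum(axis=1) / a_s                    # beta_s(x)
    p_s = p_prime / p_prime.sum(axis=1, keepdims=True)  # p_s(x, a)
    return a_s, beta, p_s
```

Recomposing a_s · β_s(x) · p_s(x, a) recovers p′_s(x, a) exactly, which is a quick sanity check on the fit.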
Instead, we use gradient descent to optimize equation 3, yielding Δα_s(x) ∝ −β_s(x) [1/Σ_{s′} α_{s′}(x) − Σ_a p_s(x, a) v_s(x, a) / Σ_{s′} α_{s′}(x) v_{s′}(x, a)]. (5) We constrain the gradient to fit the constraints. We can find the best response for source s by iterating between the two steps above. First we initialize α_s(x) = β_s(x) for all x. We then solve for a new set of v_s(x, a) with equation 4. Using those v-values, we take a step in the direction of the gradient of α_s(x) with equation 5. We keep repeating until the solution converges (reducing the step size each iteration), which usually takes only a few tens of steps. Figure 1: Load-unload problem. The right is the state diagram; cargo is loaded in state 1. Delivery to a boxed state results in reward from the source associated with that state. The left is the solution found; for state 5, from left to right, are shown the p-values (p_s(5, a)), the v-values (v_s(5, a)), and the policy (π(5, a)). Figure 2: Transfer of the load-unload solution: plots of the same values as in figure 1 but with the left source absent. No additional learning was allowed (the left side plots are the same). The votes, however, change, and thus so does the final policy. 3.3 Nash Equilibrium Algorithm To find a Nash Equilibrium, we start with α_s(x) = β_s(x) and v_s(x, a) = p_s(x, a) and iterate to an equilibrium by repeatedly finding the best response for each source and simultaneously replacing the old solutions with the new best responses. To prevent oscillation, whenever the change in α_s(x) v_s(x, a) grows from one step to the next, we replace the old solution with one halfway between the old and new solutions and continue the iteration. 4 Example Results In all of these examples we used the same learning scheme. We ran the algorithm for a series of epochs. At each epoch, we calculated π using the Nash Equilibrium algorithm.
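The two-step best-response iteration can be sketched as below. Since the original Eqs. (4)-(5) did not survive extraction, the closed-form v-update and the projected gradient step on α used here are reconstructions and should be read as illustrative, not as the paper's exact procedure:

```python
import numpy as np

def best_response(alpha, v, s, beta_s, p_s, step=0.1, n_iter=40):
    """Sketch of the best-response iteration for source s.
    alpha: (S, X); v: (S, X, A); beta_s: (X,); p_s: (X, A)."""
    alpha = alpha.copy(); v = v.copy()
    alpha[s] = beta_s.copy()                 # initialization from the text
    others = [t for t in range(alpha.shape[0]) if t != s]
    for _ in range(n_iter):
        tot = alpha.sum(axis=0)                                 # sum_s' alpha_s'(x)
        other = (alpha[others][:, :, None] * v[others]).sum(axis=0)
        # closed-form v-update, then clip negatives and renormalize
        vs = (p_s * tot[:, None] - other) / (alpha[s][:, None] + 1e-12)
        vs = np.maximum(vs, 0.0)
        v[s] = vs / vs.sum(axis=1, keepdims=True)
        # projected gradient step on alpha_s(x)
        den = other + alpha[s][:, None] * v[s]                  # sum_s' alpha v
        grad = beta_s * (1.0 / tot - (p_s * v[s] / (den + 1e-12)).sum(axis=1))
        alpha[s] = np.maximum(alpha[s] - step * grad, 1e-6)
        alpha[s] /= alpha[s].sum()
        step *= 0.95                         # reduce the step size each iteration
    return alpha[s], v[s]
```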
With probability ε, we replace π with one chosen uniformly over the simplex of conditional distributions. This ensures some exploration. We follow π for a fixed number of time steps and record the average reward for each source. We add these average rewards and the empirical estimate of the policy followed as data to the least-squares estimate of the returns. We then repeat for the next epoch. 4.1 Multiple Delivery Load-Unload Problem We extend the classic load-unload problem to multiple receivers. The observation state is shown in figure 1. The hidden state is whether the agent is currently carrying cargo. Whenever the agent enters the top state (state 1), cargo is placed on the agent. Whenever the agent arrives in any of the boxed states while carrying cargo, the cargo is removed and the agent receives reward. For each boxed state, there is one reward source who only rewards for deliveries to that state (a reward of 1 for a delivery and 0 for all other time steps). In state 5, the agent has the choice of four actions, each of which moves the agent to the corresponding state without error. Since the agent can observe neither whether it has cargo nor its history, the optimal policy for state 5 is stochastic. The algorithm set all α- and β-values to 0 for states other than state 5. Figure 3: One-way door state diagram. At every state there are two actions (right and left) available to the agent. In states 1, 9, 10, and 15, where there are only single outgoing edges, both actions follow the same edge. With probability 0.1, an action will actually follow the other edge. Source 1 rewards entering state 1 whereas source 2 rewards entering state 9. Figure 4: One-way door solution: from left to right, the sources' ideal policies (p_s(x, right)), the votes (v_s(x, right)), and the final agent's policy (π(x, right)). Light bars are for states for which both actions lead to the same state.
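The exploration step of the learning scheme (drawing a policy uniformly over the simplex of conditional distributions) can be sketched as follows; a uniform draw over each row's simplex corresponds to a Dirichlet(1, ..., 1) sample:

```python
import numpy as np

def exploration_policy(pi, eps, rng):
    """With probability eps, replace pi with a policy drawn uniformly
    over the simplex of conditional distributions (one Dirichlet(1,...,1)
    draw per observation); otherwise keep the equilibrium policy."""
    if rng.random() < eps:
        return rng.dirichlet(np.ones(pi.shape[1]), size=pi.shape[0])
    return pi
```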
We started ε at 0.5 and reduced it to 0.1 by the end of the run. We ran for 300 epochs of 200 iterations, by which point the algorithm consistently settled on the solution shown in figure 1. For each source, the algorithm found the best solution of randomly picking between the load state and the source's delivery state (as shown by the p-values). The votes are heavily weighted towards the delivery actions to overcome the other sources' preferences, resulting in an approximately uniform policy. The important point is that, without additional learning, the policy can be changed if the left source leaves. The learned α- and p-values are kept the same, but the Nash Equilibrium is different, resulting in the policy in figure 2. 4.2 One-way Door Problem In this case we consider the environment shown in figure 3. From each state the agent can move to the left or right, except in states 1, 9, 10, and 15 where there is only one possible action. We can think of states 1 and 9 as one-way doors. Once the agent enters states 1 or 9, it may not pass back through except by going around through state 5. Source 1 gives reward when the agent passes through state 1. Source 2 gives reward when the agent passes through state 9. Actions fail (move in the opposite direction than intended) 0.1 of the time. We ran the learning scheme for 1000 epochs of 100 iterations, starting ε at 0.5 and reducing it to 0.015 by the last epoch. The algorithm consistently converged to the solution shown in figure 4. Source 1 considers the left-side states (2-5 and 11-12) the most important while source 2 considers the right-side states (5-8 and 13-14) the most important. The ideal policies captured by the p-values show that source 1 wants the agent to move left and source 2 wants the agent to move right for the upper states (2-8), while the sources agree that for the lower states (11-14) the agent should move towards state 5. The votes reflect this preference and agreement.
Both sources spend most of their vote on state 5, the state they both feel is important and on which they disagree. On the other states (states for which only one source has a strong opinion or on which they agree), they do not need to spend much of their vote. The resulting policy is the natural one: in state 5, the agent randomly picks a direction, after which the agent moves around the chosen loop quickly to return to state 5. Just as in the load-unload problem, if we remove one source, the agent automatically adapts to the ideal policy for the remaining source (with only one source, s₀, present, π(x, a) = p_{s₀}(x, a)). Estimating the optimal policies and then taking the mixture of these two policies would produce a far worse result. For states 2-8, both sources would have differing opinions and the mixture model would produce a uniform policy in those states; the agent would spend most of its time near state 5. Constructing a reward signal that is the sum of the sources' rewards does not lead to a good solution either. The agent will find that circling either the left or right loop is optimal and will have no incentive to ever travel along the other loop. 5 Conclusions It is difficult to conceive of a method for providing a single reward signal that would result in the solution shown in figure 4 and still automatically change when one of the reward sources was removed. The biggest improvement in the algorithm will come from changing the form of the R̂_s estimator. For problems in which there is a single best solution, the KL-divergence measure seems to work well. However, we would like to be able to extend the load-unload result to the situation where the agent has a memory bit. In this case, the returns as a function of π are bimodal (due to the symmetry in the interpretation of the bit). In general, allowing each source's preference to be modelled in a more complex manner could help extend these results.
Acknowledgments We would like to thank Charles Isbell, Tommi Jaakkola, Leslie Kaelbling, Michael Kearns, Satinder Singh, and Peter Stone for their discussions and comments. This report describes research done within CBCL in the Department of Brain and Cognitive Sciences and in the AI Lab at MIT. This research is sponsored by grants from ONR contracts Nos. N00014-93-1-3085 & N00014-95-1-0600, and NSF contracts Nos. IIS-9800032 & DMS-9872936. Additional support was provided by: AT&T, Central Research Institute of Electric Power Industry, Eastman Kodak Company, DaimlerChrysler, Digital Equipment Corporation, Honda R&D Co., Ltd., NEC Fund, Nippon Telegraph & Telephone, and Siemens Corporate Research, Inc. References [1] J. Hu and M. P. Wellman. Multiagent reinforcement learning: Theoretical framework and an algorithm. In Proc. of the 15th International Conf. on Machine Learning, pages 242-250, 1998. [2] C. L. Isbell, C. R. Shelton, M. Kearns, S. Singh, and P. Stone. A social reinforcement learning agent. 2000. Submitted to Autonomous Agents 2001. [3] J. Karlsson. Learning to Solve Multiple Goals. PhD thesis, University of Rochester, 1997. [4] M. Kearns, Y. Mansour, and S. Singh. Fast planning in stochastic games. In Proc. of the 16th Conference on Uncertainty in Artificial Intelligence, 2000. [5] M. L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Proc. of the 11th International Conference on Machine Learning, pages 157-163, 1994. [6] G. Owen. Game Theory. Academic Press, UK, 1995. [7] S. Singh, M. Kearns, and Y. Mansour. Nash convergence of gradient dynamics in general-sum games. In Proc. of the 16th Conference on Uncertainty in Artificial Intelligence, 2000. [8] S. P. Singh. The efficient learning of multiple task sequences. In NIPS, volume 4, 1992.
Temporally Dependent Plasticity: An Information Theoretic Account Gal Chechik and Naftali Tishby School of Computer Science and Engineering and the Interdisciplinary Center for Neural Computation The Hebrew University, Jerusalem, Israel {ggal,tishby}@cs.huji.ac.il Abstract The paradigm of Hebbian learning has recently received a novel interpretation with the discovery of synaptic plasticity that depends on the relative timing of pre and post synaptic spikes. This paper derives a temporally dependent learning rule from the basic principle of mutual information maximization and studies its relation to the experimentally observed plasticity. We find that a supervised spike-dependent learning rule sharing a similar structure with the experimentally observed plasticity increases mutual information to a stable near-optimal level. Moreover, the analysis reveals how the temporal structure of time-dependent learning rules is determined by the temporal filter applied by neurons over their inputs. These results suggest experimental predictions as to the dependency of the learning rule on neuronal biophysical parameters. 1 Introduction Hebbian plasticity, the major paradigm for learning in computational neuroscience, was until a few years ago interpreted as learning by correlated neuronal activity. A series of studies have recently shown that changes in synaptic efficacies highly depend on the relative timing of the pre- and postsynaptic spikes, as the efficacy of a synapse between two excitatory neurons increases when the presynaptic spike precedes the postsynaptic one, but decreases otherwise [1-6]. The magnitude of these synaptic changes decays roughly exponentially as a function of the time difference between pre- and postsynaptic spikes, with a time constant of a few tens of milliseconds (results vary between studies, especially with regard to the synaptic depression component; compare e.g. [4] and [6]).
What could be the computational role of this delicate type of plasticity, sometimes termed spike-timing dependent plasticity (STDP)? Several authors suggested answers for this question by modeling STDP and studying its effects on synaptic, neural and network dynamics. Importantly, STDP embodies an inherent competition between incoming inputs, and was shown to result in normalization of total incoming synaptic strength [7], maintain the irregularity of neuronal firing [8, 9], *Work supported in part by a Human Frontier Science Project (HFSP) grant RG 0133/1998. and lead to the emergence of synchronous subpopulation firing in recurrent networks [10]. It may also play an important role in sequence learning [11, 12]. The dynamics of synaptic efficacies under the operation of STDP strongly depends on whether STDP is implemented additively (independent of the baseline synaptic value) or multiplicatively (where the change is proportional to the synaptic efficacy) [13]. This paper takes a different approach to the study of spike-dependent learning rules: while the above studies model STDP and study the model properties, we start by deriving a spike-dependent learning rule from first principles within a simple rate model and then compare it with the experimentally observed STDP. To derive our learning rule, we consider the principle of mutual information maximization. This idea, known as the Infomax principle [14], states that the goal of a neural network's learning procedure is to maximize the mutual information between its output and input. The current paper applies Infomax to a leaky integrator neuron with spiking inputs. The derivation suggests computational insights into the dependence of the temporal characteristics of STDP on biophysical parameters and shows that STDP may serve to maximize mutual information in a network of spiking neurons. 2 The Model We study a network with N input neurons S₁..S_N firing spike trains, and a single output (target) neuron Y.
At any point in time, the target neuron accumulates its inputs with some temporal filter F, due to voltage attenuation or the synaptic transfer function: Y(t) = Σ_{i=1}^N W_i X_i(t), X_i(t) = ∫ F_τ(t − t′) S_i(t′) dt′, (1) where W_i is the synaptic efficacy between the i-th input neuron and the target neuron, S_i(t) = Σ_{t_spike} δ(t − t_spike) is the i-th spike train, and τ is the membrane time constant. The filter F may be used to consider general synaptic transfer functions and voltage decay effects, but is set here as an example to an exponential filter F_τ(x) ≡ exp(−x/τ). The learning goal is to set the synaptic weights W such that M + 1 uncorrelated patterns of input activity ξ^η (η = 0..M) may be discriminated using the output. Each pattern determines the firing rates of the input neurons; thus S is a noisy realization of ξ due to the stochasticity of the point process. The input patterns are presented for periods of length T (on the order of tens of milliseconds). At each period, a pattern ξ^η is randomly chosen for presentation with probability q_η, where most of the patterns are rare (Σ_{η=1}^M q_η ≪ 1) but ξ⁰ is abundant and may be thought of as a background noisy pattern. It should be stressed that in our model information is coded in the non-stationary rates that underlie the input spike trains. As these rates are not observable, any learning must depend on the observable input spikes that realize those underlying rates. 3 Mutual Information Maximization
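The exponential filtering of Eq. (1) has a cheap recursive form in discrete time; this sketch assumes 1 ms bins and binary spike arrays (both illustrative choices):

```python
import numpy as np

def filtered_output(W, spikes, tau, dt=1.0):
    """Sketch of Eq. (1): filter each input spike train with
    F_tau(x) = exp(-x/tau) and take the weighted sum.

    W: (N,) synaptic efficacies; spikes: (N, T) binary spike array in
    dt-sized bins.  Returns the output trace Y of shape (T,)."""
    decay = np.exp(-dt / tau)
    X = np.zeros(spikes.shape, dtype=float)
    for t in range(spikes.shape[1]):
        prev = X[:, t - 1] * decay if t > 0 else 0.0
        X[:, t] = prev + spikes[:, t]   # leaky accumulation of spikes
    return W @ X
```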
This mutual information measures how easy it is to decide which input pattern 'TJ was presented to the network by observing the network's output Y. To calculate the conditional entropy h(YI'TJ) we use the assumption that input neurons fire independently and their number is large, thus the input of the target neuron when the network is presented with the pattern ~11 is normally distributed f(YI'TJ) = N(J.t11,(7112) with mean J.t11 =< WXl1 > and variance (711 2 =< (W Xl1)(W Xl1)T > - < W Xl1 >2. The brackets denote averaging over the possible realizations of the inputs Xl1 when the network is presented with the pattern ~11. To calculate the entropy of Y we note that f (Y) is a mixture of Gaussians, each resulting from the presentation of an input pattern and use the assumption E!1 ql1 « 1 to approximate the entropy. The details of this derivation are omitted due to space considerations and will be presented elsewhere. Differentiating the mutual information with regard to Wi we obtain with 8I(Yj'TJ) 8Wi M + Lql1 (Cav(y,X:!)K~ + E(X;:)K~) 11=1 M - Lql1 (Cav(y,X?)K~ + E(X?)K~) 11=1 (J.t11-J.tO)2 +(7112 -(702. (704 (704 ' 1 1 1 K =--_. 11 (70 (711 ' (3) K' = J.t11-J.t0 11 (702 . where E(X;:) is the expected value of X;: as averaged over presentation of the ~11 pattern. The general form of this complex gradient is simplified in the following sections together with a discussion of its use for biological learning. The derived gradient may be used for a gradient ascent learning rule by repeatedly calculating the distribution moments J.t11' (711 that depend on W, and updating the weights according to ~ Wi = >. 8~J (Y j 'TJ). This learning rule climbs along the gradient and is bound to converge to a local maximum of the mutual information. Figure lA plots the mutual information during the operation of the learning rule, showing that the network indeed reaches a (possibly local) mutual information maximum. 
Figure 1B depicts the changes in the output distribution during learning, showing that it splits into two segregated bumps: one that corresponds to the $\xi^0$ pattern and another that corresponds to the rest of the patterns.

4 Learning In A Biological System

Aiming to obtain a spike-dependent biologically feasible learning rule that maximizes mutual information, we now turn to approximate the analytical rule derived above by a rule that can be implemented in biology. To this end, four steps are taken, where each step corresponds to a biological constraint and its solution. First, biological synapses are limited either to excitatory or inhibitory regimes. Since information is believed to be coded in the activity of excitatory neurons, we limit the weights $w$ to positive values. Secondly, the $K$ terms are global functions of the weights and input distributions, since they depend on the distribution moments $\mu_\eta, \sigma_\eta$. To avoid this problem we approximate the learning rule by replacing $\{K_\eta, K^0_\eta, K'_\eta\}$ with constants $\{\lambda_\eta, \lambda_0, \lambda'\}$. These constants are set to optimal values, but remain fixed once they are set. We have found numerically that high performance (to be demonstrated in section 5) may be obtained for a wide regime of these constants.

Figure 1: Mutual information and output distribution along learning with the gradient ascent learning rule (Eq. 3). All patterns were constructed by setting 10% of the input neurons to fire Poisson spike trains at 40 Hz, while the rest fire at 10 Hz. Poisson spike trains were simulated by discretizing time into 1-millisecond bins. Simulation parameters: $\lambda = 1$, $M = 100$, $N = 1000$, $q_0 = 0.9$, $q_\eta = 0.001$, $\tau = 20$ msec. A. Input-output mutual information. B. Output distribution after 100, 150, 200 and 300 learning steps.
Outputs segregate into two distinct bumps: one corresponds to the presentation of the $\xi^0$ pattern and the other corresponds to the rest of the patterns. Thirdly, summation over patterns embodies a 'batch' mode of learning, requiring very large memory to average over multiple presentations. To implement an online learning rule, we replace summation over patterns by pattern-triggered learning. One should note that the analytical derivation yielded that summation is performed over the rare patterns only (Eq. 3); thus pattern-triggered learning is naturally implemented by restricting learning to presentations of rare patterns¹. Fourthly, the learning rule explicitly depends on $E(X)$ and $Cov(Y, X)$, which are not observables of the model. We thus replace them by performing stochastic weighted averaging over spikes, to yield a spike-dependent learning rule. In the case of inhomogeneous Poisson spike trains where input neurons fire independently, the covariance terms obey $Cov(Y, X_i) = w_i E_{\tau/2}(X_i)$, where $E_\tau(X) = \int_{-\infty}^{t} e^{-\frac{t-t'}{\tau}} E(S(t'))\,dt'$. The expectations $E(X_i^\eta)$ may be simply estimated by weighted averaging of the observed spikes $X_i$ that precede the learning moment. Estimating $E(X_i^0)$ is more difficult because, as stated above, learning should be triggered by the rare patterns only. Thus, $\xi^0$ spikes should have an effect only when a rare pattern $\xi^\eta$ is presented. A possible solution is to use the fact that $\xi^0$ is highly frequent (and therefore spikes in the vicinity of a $\xi^\eta$ presentation are with high probability $\xi^0$ spikes) to average over spikes following a $\xi^\eta$ presentation for background activity estimation. These spikes can be temporally weighted in many ways: from the computational point of view it is beneficial to weigh spikes uniformly along time, but this may require long "memory" and is biologically improbable.
We thus refrain from suggesting a specific weighting for background spikes, and obtain the following rule, which is activated only when one of the rare patterns ($\xi^\eta$, $\eta = 1..M$) is presented

(4)

where $h_{1,2}(S(t'))$ denote the temporal weighting of $\xi^0$ spikes. It should be noted that this learning rule uses rare pattern presentations as an external ("supervised") learning signal. The general form of this learning rule and its performance are discussed in the next section.

¹In fact, learning rules where learning is also triggered by the presentation of the background pattern explicitly depend on the prior probabilities $q_\eta$, and thus are not robust to fluctuations in $q_\eta$. Since such fluctuations strongly reduce the mutual information obtained by these rules, we conclude that pattern-triggered learning should be triggered by the rare pattern only.

5 Analyzing The Biologically Feasible Rule

5.1 Comparing performance

We have obtained a new spike-dependent learning rule that may be implemented in a biological system and that approximates an information maximization learning rule. But how good are these approximations? Does learning with the biologically feasible learning rule increase mutual information? And to what level? The curves in figure 2A compare the mutual information of the learning rule of Eq. 3 with that of Eq. 4, as traced in simulation of the learning process. Apparently, the approximated learning rule achieves fairly good performance compared to the optimal rule, and most of the reduction in performance is due to limiting the weights to positive values.

5.2 Interpreting the learning rule structure

The general form of the learning rule of Eq. 4 is pictorially presented in figure 2B, to allow us to inspect the main features of its structure.
First, synaptic potentiation is temporally weighted in a manner that is determined by the same filter $F$ that the neuron applies over its inputs, but learning should apply an average of $F$ and $F^2$ ($\int F(t-t')S(t')\,dt'$ and $\int F^2(t-t')S(t')\,dt'$). The relative weighting of these two components was numerically estimated by simulating the optimal rule of Eq. 3 and was found to be of the same order of magnitude. Second, in our model synaptic depression is targeted at learning the underlying structure of the background activity. Our analysis does not restrict the temporal weighting of the depression curve. A major difference between the obtained rule and the experimentally observed learning rule is that in our rule learning is triggered by an external learning signal that corresponds to the presentation of rare patterns, while in the experimentally observed rule learning is triggered by the postsynaptic spike. The possible role of the postsynaptic spike is discussed in the following section.

6 Unsupervised Learning

By now we have considered a learning scenario that used external information telling whether the presented pattern is the background pattern or not, to decide whether learning should take place. When such a learning signal is missing, it is tempting to use the postsynaptic spike (signaling the presence of an interesting input pattern) as a learning signal. This yields a learning procedure as in Eq. 4, except this time learning is triggered by postsynaptic spikes instead of an external signal. The resulting learning rule is similar to previous models of the experimentally observed STDP, such as [9, 13, 16]. However, this mechanism will effectively serve learning only if the postsynaptic spikes co-occur with the presentation of a rare pattern. Such co-occurrence may be achieved by supplying short learning signals in the presence of the interesting patterns (e.g. by attentional mechanisms increasing neuronal excitability).
This will induce learning such that later postsynaptic spikes will be triggered by the rare pattern presentation. These issues await further investigation.

Figure 2: A. Comparing the optimal (Eq. 3) and approximated (Eq. 4) learning rules. 10% of the input neurons of $\xi^\eta$ ($\eta > 0$) were set to fire at 40 Hz, while the rest fire at 5 Hz. $\xi^0$-neurons fire at 8 Hz, yielding similar average input as the $\xi^\eta$ patterns. Learning-rate ratios for Eq. 4 were numerically searched for their optimal value, yielding $\lambda_\eta = 0.15$, $\lambda_0 = 0.05$ for the arbitrary choice $\lambda' = 0.1$. The rest of the parameters are as in Fig. 1, except $M = 20$, $N = 2000$. B. A pictorial representation of Eq. 4, plotting $\Delta W$ as a function of the time difference between the learning signal time $t$ and the input spike time $t_{spike}$. The potentiation curve (solid line) is the sum of two exponents with constants $\tau$ and $\frac{1}{2}\tau$ (dashed lines). The depression curve is not constrained by our derivation, thus several examples are shown (dot-dashed lines).

7 Discussion

In the framework of information maximization, we have derived a spike-dependent learning rule for a leaky integrator neuron. This learning rule achieves near optimal mutual information and can in principle be implemented in biological neurons. The analytical derivation of this rule allows us to obtain insight into the learning rules observed experimentally in various preparations. The most fundamental result is that time-dependent learning stems from the time-dependency of the neuronal output on its inputs. In our model this is embodied in the filter $F$ which a neuron applies over its input spike trains. This filter is determined by the biophysical parameters of the neuron, namely its membrane leak, synaptic transfer functions and dendritic arbor structure.
Our model thus yields direct experimental predictions for the way the temporal characteristics of the potentiation learning curve are determined by the neuronal biophysical parameters. Namely, cells with larger membrane constants should exhibit longer synaptic potentiation time windows. Interestingly, the time window observed for STDP potentiation indeed fits the time window of an AMPA channel and is also in agreement with cortical membrane time constants, as predicted by the current analysis [4, 6]. Several features of the theoretically derived rule may have similar functions in the experimentally observed rule: In our model synaptic weakening is targeted to learn the structure of the background activity. Both synaptic depression and potentiation in our model should be triggered by rare pattern presentation to allow near-optimal mutual information. In addition, synaptic changes should depend on the synaptic baseline value in a sub-linear manner. The experimental results in this regard are still unclear, but theoretical investigations show that this weight dependency has a large effect on network dynamics [13]. While the learning rule presented in Equation 4 assumes independent firing of input neurons, our derivation actually holds for a wider class of inputs. In the case of correlated inputs, however, the learning rule involves cross-synaptic terms, which may be difficult to compute by biological neurons. As STDP is highly sensitive to synchronous inputs, it remains a most interesting question to investigate biologically feasible approximations to an Infomax rule for time-structured and synchronous inputs.

References

[1] W.B. Levy and O. Steward. Temporal contiguity requirements for long-term associative potentiation/depression in the hippocampus. Neuroscience, 8:791-797, 1983.

[2] D. Debanne, B.H. Gahwiler, and S.M. Thompson. Asynchronous pre- and postsynaptic activity induces associative long-term depression in area CA1 of the rat hippocampus in vitro. Proc. Natl.
Acad. Sci., 91:1148-1152, 1994.

[3] H. Markram, J. Lubke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275(5297):213-215, 1997.

[4] L. Zhang, H.W. Tao, C.E. Holt, W.A. Harris, and M-m. Poo. A critical window for cooperation and competition among developing retinotectal synapses. Nature, 395(3):37-44, 1998.

[5] Q. Bi and M-m. Poo. Precise spike timing determines the direction and extent of synaptic modifications in cultured hippocampal neurons. J. Neurosci., 18:10464-10472, 1999.

[6] D.E. Feldman. Timing-based LTP and LTD at vertical inputs to layer II/III pyramidal cells in rat barrel cortex. Neuron, 27:45-56, 2000.

[7] R. Kempter, W. Gerstner, and J.L. van Hemmen. Hebbian learning and spiking neurons. Phys. Rev. E, 59(4):4498-4514, 1999.

[8] L.F. Abbot and S. Song. Temporally asymmetric Hebbian learning, spike timing and neural response variability. In S.A. Solla and D.A. Cohen, editors, Advances in Neural Information Processing Systems 11, pages 69-75. MIT Press, 1999.

[9] S. Song, K.D. Miller, and L.F. Abbot. Competitive Hebbian learning through spike-timing dependent synaptic plasticity. Nature Neuroscience, pages 919-926, 2000.

[10] D. Horn, N. Levy, I. Meilijson, and E. Ruppin. Distributed synchrony of spiking neurons in a Hebbian cell assembly. In S.A. Solla, T.K. Leen, and K.R. Muller, editors, Advances in Neural Information Processing Systems 12, pages 129-135, 2000.

[11] M.R. Mehta, M. Quirk, and M. Wilson. From hippocampus to V1: Effect of LTP on spatio-temporal dynamics of receptive fields. In J.M. Bower, editor, Computational Neuroscience: Trends in Research 1999. Elsevier, 1999.

[12] R. Rao and T. Sejnowski. Predictive sequence learning in recurrent neocortical circuits. In S.A. Solla, T.K. Leen, and K.R. Muller, editors, Advances in Neural Information Processing Systems 12, pages 164-170. MIT Press, 2000.

[13] J. Rubin, D. Lee, and H. Sompolinsky.
Equilibrium properties of temporally asymmetric Hebbian plasticity. Phys. Rev. E, in press, 2000.

[14] R. Linsker. Self-organization in a perceptual network. Computer, 21(3):105-117, 1988.

[15] C.E. Shannon. A mathematical theory of communication. Bell Syst. Tech. J., 27:379-423, 1948.

[16] R. Kempter, W. Gerstner, and J.L. van Hemmen. Intrinsic stabilization of output rates by spike-time dependent Hebbian learning. Submitted, 2000.
|
2000
|
127
|
1,784
|
Analysis of Bit Error Probability of Direct-Sequence CDMA Multiuser Demodulators

Toshiyuki Tanaka
Department of Electronics and Information Engineering
Tokyo Metropolitan University
Hachioji, Tokyo 192-0397, Japan
tanaka@eeLmetro-u.ac.jp

Abstract

We analyze the bit error probability of multiuser demodulators for direct-sequence binary phase-shift-keying (DS/BPSK) CDMA channels with additive Gaussian noise. The problem of multiuser demodulation is cast into the finite-temperature decoding problem, and replica analysis is applied to evaluate the performance of the resulting MPM (Marginal Posterior Mode) demodulators, which include the optimal demodulator and the MAP demodulator as special cases. An approximate implementation of the demodulators is proposed using the analog-valued Hopfield model as a naive mean-field approximation to the MPM demodulators, and its performance is also evaluated by the replica analysis. The results of the performance evaluation show the effectiveness of the optimal demodulator and the mean-field demodulator compared with the conventional one, especially in the cases of small information bit rate and low noise level.

1 Introduction

The CDMA (Code-Division Multiple Access) technique [1] is important as a fundamental technology of digital communications systems, such as cellular phones. The important applications include realization of spread-spectrum multipoint-to-point communications systems, in which multiple users share the same communication channel. In the multipoint-to-point system, each user modulates his/her own information bit sequence using a spreading code sequence before transmitting it, and the receiver uses the same spreading code sequence for demodulation to obtain the original information bit sequence. Different users use different spreading code sequences, so that the demodulation procedure randomizes and thus suppresses multiple-access interference effects of transmitted signal sequences sent from different users.
The direct-sequence binary phase-shift-keying (DS/BPSK) method [1] is the basic method among various methods realizing CDMA, and a lot of studies have been done on it. Use of a Hopfield-type recurrent neural network has been proposed as an implementation of a multiuser demodulator [2]. In this paper, we analyze the bit error probability of the neural multiuser demodulator applied to demodulation of the DS/BPSK CDMA channel.

Figure 1: DS/BPSK CDMA model (the spreading code sequences $\{\eta_i^t\}$ modulate the information bits $\xi_1, \dots, \xi_N$, which are summed with the Gaussian noise $\{\nu^t\}$ to give the received signal $\{y^t\}$).

2 DS/BPSK CDMA system

We assume that a single Gaussian channel is shared by $N$ users, each of which wishes to transmit his/her own information bit sequence. We also take a simplifying assumption that all the users are completely synchronized with each other, with respect not only to the chip timing but also to the information bit timing. We focus on any of the time intervals corresponding to the duration of one information bit. Let $\xi_i \in \{-1, 1\}$ be the information bit to be transmitted by user $i$ ($i = 1, \dots, N$) during the time interval, and $P$ be the number of the spreading code chips (clocks) per information bit. For simplicity, the spreading code sequences for the users are assumed to be random bit sequences $\{\eta_i^t;\ t = 1, \dots, P\}$, where the $\eta_i^t$ are independent and identically distributed (i.i.d.) binary random variables following $\mathrm{Prob}[\eta_i^t = \pm 1] = 1/2$. User $i$ modulates the information bit $\xi_i$ by the spreading code sequence and transmits the modulated sequence $\{\xi_i \eta_i^t;\ t = 1, \dots, P\}$ (with carrier modulation, in actual situations). Assuming that power control [3] is done perfectly, so that all transmitted sequences arrive at the receiver with the same intensity, the received signal sequence (after baseband demodulation) is $\{y^t;\ t = 1, \dots, P\}$, with

$$y^t = \sum_{i=1}^{N} \eta_i^t \xi_i + \nu^t, \qquad (1)$$

where $\nu^t \sim N(0, \sigma_s^2)$ is i.i.d. Gaussian noise. This system is illustrated in Fig. 1.
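A minimal simulation of the channel of Eq. 1 can make the setup concrete; the sizes and noise level below are illustrative assumptions (not values from the paper), and the per-user correlation estimate computed at the end anticipates the conventional demodulator of Sect. 3.1:

```python
import numpy as np

rng = np.random.default_rng(2)

N, P = 50, 200                 # users and chips per bit: alpha = P / N = 4 (illustrative)
sigma_s = 0.5                  # noise standard deviation (assumption for the sketch)

xi = rng.choice([-1, 1], size=N)          # information bits xi_i
eta = rng.choice([-1, 1], size=(P, N))    # spreading codes eta_i^t, Prob[+/-1] = 1/2
nu = rng.normal(0.0, sigma_s, size=P)     # Gaussian channel noise nu^t

y = eta @ xi + nu                         # received chips, Eq. (1)

# correlate the received signal with user i's own code (conventional demodulation)
h = (eta * y[:, None]).sum(axis=0) / N
xi_cd = np.sign(h)
bit_error = float(np.mean(xi_cd != xi))
```

With these assumed parameters, the per-user signal term dominates the multiple-access interference, so the empirical bit error rate stays well below chance.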
At the receiver side, one has to estimate the information bits $\{\xi_i\}$ based on the knowledge of the received signal $\{y^t\}$ and the spreading code sequences $\{\eta_i^t\}$ for the users. The demodulator refers to the system which does this task. The accuracy of the estimation depends on what demodulator one uses. Some demodulators are introduced in Sect. 3, and analytical results for their performance are derived in Sect. 4.

3 Demodulators

3.1 Conventional demodulator

The conventional demodulator (CD) [1-3] estimates the information bit $\xi_i$ using the spreading code sequence $\{\eta_i^t;\ t = 1, \dots, P\}$ for the user $i$, by

$$h_i \equiv \frac{1}{N} \sum_{t=1}^{P} y^t \eta_i^t. \qquad (2)$$

We can rewrite $h_i$ as

$$h_i = \frac{P}{N}\,\xi_i + \frac{1}{N} \sum_{j \ne i} \sum_{t=1}^{P} \eta_i^t \eta_j^t \xi_j + \frac{1}{N} \sum_{t=1}^{P} \eta_i^t \nu^t. \qquad (3)$$

The second and third terms of the right-hand side represent the effects of multiple-access interference and noise, respectively. CD would give the correct information bit in the single-user ($N = 1$) and no-noise ($\nu^t \equiv 0$) case, but the estimation may contain some errors in the multiple-user and/or noisy cases.

3.2 MAP demodulator

The accuracy of the estimation would be significantly improved if the demodulator knows the spreading code sequences for all $N$ users and makes full use of them by simultaneously estimating the information bits for all the users (the multiuser demodulator). This is the case, for example, for a base station receiving signals from many users. A common approach to the multiuser demodulation is to use MAP decoding, which estimates the information bits $\{\hat{s}_i = \hat{\xi}_i\}$ by maximizing the posterior probability $p(\{\xi_i\}|\{y^t\})$. We call this kind of multiuser demodulator the MAP demodulator¹. When we assume a uniform prior for the information bits, the posterior probability is explicitly given by

$$p(s|\{y^t\}) = Z^{-1} \exp(-\beta_s H(s)), \qquad (4)$$

where

$$H(s) = \frac{1}{2} \sum_{i,j} w_{ij} s_i s_j - \sum_i h_i s_i, \qquad (5)$$

$\beta_s \equiv N/\sigma_s^2$, $s \equiv (s_i)$, $h \equiv (h_i)$, and $W \equiv (w_{ij})$ is the sample covariance of the spreading code sequences,

$$w_{ij} = \frac{1}{N} \sum_{t=1}^{P} \eta_i^t \eta_j^t. \qquad (6)$$

The problem of MAP demodulation thus reduces to the following minimization problem:

$$\hat{\xi}^{(MAP)} = \arg\min_{s \in \{-1,1\}^N} H(s). \qquad (7)$$
3.3 MPM demodulator

Although the MAP demodulator is sometimes referred to as "optimal," actually it is not so in terms of the common measure of performance, i.e., the bit error probability $P_b$, which is related to the overlap $M \equiv (1/N)\sum_{i=1}^{N} \xi_i \hat{\xi}_i$ between the original information bits $\{\xi_i\}$ and their estimates $\{\hat{\xi}_i\}$ as

$$P_b = \frac{1-M}{2}. \qquad (8)$$

The 'MPM (Marginal Posterior Mode [4]) demodulator,' with the inverse temperature $\beta$, is defined as follows:

$$\hat{\xi}_i^{(MPM)} = \mathrm{sgn}(\langle s_i \rangle_\beta), \qquad (9)$$

where $\langle\cdot\rangle_\beta$ refers to the average with respect to the distribution

$$P_\beta(s) = Z(\beta)^{-1}\exp(-\beta H(s)). \qquad (10)$$

Then, we can show that the MPM demodulator with $\beta = \beta_s$ is the optimal one minimizing the bit error probability $P_b$. It is a direct consequence of a general argument on optimal decoders [5]. Note that the MAP demodulator corresponds to the MPM demodulator in the $\beta \to +\infty$ limit (the zero-temperature demodulator).

¹The MAP demodulator refers to the same one as what is frequently called the "maximum-likelihood (ML) demodulator" in the literature.

4 Analysis

4.1 Conventional demodulator

In the cases where we can assume that $N$ and $P$ are both large while $\alpha \equiv P/N = O(1)$, evaluation of the overlap $M$, and therefore the bit error probability $P_b$, for those demodulators is possible. For CD, simple application of the central limit theorem yields

$$M = \mathrm{erf}\left(\sqrt{\frac{\alpha}{2(1+1/\beta_s)}}\right), \qquad (11)$$

where

$$\mathrm{erf}(x) \equiv \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,dt \qquad (12)$$

is the error function.

4.2 MPM demodulator

For the MPM demodulator with inverse temperature $\beta$, we have used the replica analysis to evaluate the bit error probability $P_b$. Assuming that $N$ and $P$ are both large while $\alpha \equiv P/N = O(1)$, and that the macroscopic properties of the demodulator are self-averaging with respect to the randomness of the information bits, of the spreading codes, and of the noise, we evaluate the quenched average of the free energy $\langle\langle \log Z \rangle\rangle$ in the thermodynamic limit $N \to \infty$, where $\langle\langle\cdot\rangle\rangle$ denotes averaging over the information bits and the noise.
Evaluation of the overlap $M$ (within the replica-symmetric (RS) ansatz) requires solving a saddle-point problem for the scalar variables $\{m, q, E, F\}$. The saddle-point equations are

$$m = \int Dz \tanh(\sqrt{F}z + E), \qquad q = \int Dz \tanh^2(\sqrt{F}z + E),$$
$$E = \frac{\alpha\beta}{1+\beta(1-q)}, \qquad F = \frac{\alpha\beta^2}{[1+\beta(1-q)]^2}\left[1 - 2m + q + \frac{1}{\beta_s}\right], \qquad (13)$$

where $Dz \equiv (1/\sqrt{2\pi})e^{-z^2/2}\,dz$ is the Gaussian measure. The overlap $M$ is then given by

$$M = \int Dz\,\mathrm{sgn}(\sqrt{F}z + E), \qquad (14)$$

from which $P_b$ is evaluated via (8). This is the first main result of this paper.

4.3 MAP demodulator: Zero-temperature limit

Taking the zero-temperature limit $\beta \to +\infty$ of the result for the MPM demodulator yields the result for the MAP demodulator. Assuming that $q \to 1$ as $\beta \to +\infty$, while $\beta(1-q)$ remains finite in this limit, the saddle-point equations reduce to

$$M = m = \mathrm{erf}\left(\sqrt{\frac{\alpha}{2(2-2m+1/\beta_s)}}\right). \qquad (15)$$

It is found numerically, however, that the assumption $q \to 1$ is not valid for small $\alpha$, so that we have to solve the original saddle-point equations in such cases.

4.4 Optimal demodulator: The case $\beta = \beta_s$

Letting $\beta = \beta_s$ in the result for the MPM demodulator gives the optimal demodulator minimizing the bit error probability. In this case, it can be shown that $m = q$ and $E = F$ hold for the solutions of the saddle-point equations (13).

4.5 Demodulator using naive mean-field approximation

Since solving the MAP or MPM demodulation problem is in general NP-complete, we have to consider approximate implementations of those demodulators which are sub-optimal. A straightforward choice is the mean-field approximation (MFA) demodulator, which uses the analog-valued Hopfield model as a naive mean-field approximation to the finite-temperature demodulation problem². The solution $\{m_i\}$ of the mean-field equations

$$m_i = \tanh\left[\beta\left(-\sum_j w_{ij}m_j + h_i\right)\right] \qquad (16)$$

gives an approximation to $\{\langle s_i \rangle_\beta\}$, from which we have the mean-field approximation to the MPM estimates, as

$$\hat{\xi}_i^{(MFA)} = \mathrm{sgn}(m_i).$$
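The RS saddle-point equations (13) can be iterated numerically. The following sketch (an illustration under stated assumptions, not the author's code) uses Gauss-Hermite quadrature for the $\int Dz$ integrals and evaluates Eq. (14) in closed form via $\int Dz\,\mathrm{sgn}(\sqrt{F}z+E) = \mathrm{erf}(E/\sqrt{2F})$; the damping factor and iteration count are arbitrary choices:

```python
import numpy as np
from math import erf, sqrt

# probabilists' Gauss-Hermite rule: int Dz g(z) ~= sum_k c_k g(x_k)
nodes, weights = np.polynomial.hermite_e.hermegauss(61)
c = weights / weights.sum()

def solve_saddle(alpha, beta, beta_s, iters=3000, damp=0.5):
    """Damped fixed-point iteration of the RS saddle-point equations (13)."""
    m = q = 0.5
    E = F = 0.0
    for _ in range(iters):
        E = alpha * beta / (1.0 + beta * (1.0 - q))
        F = alpha * beta**2 * (1.0 - 2.0 * m + q + 1.0 / beta_s) / (1.0 + beta * (1.0 - q))**2
        t = np.tanh(sqrt(F) * nodes + E)
        m = damp * m + (1.0 - damp) * float(c @ t)
        q = damp * q + (1.0 - damp) * float(c @ t**2)
    return m, q, E, F

def bit_error_prob(alpha, beta, beta_s):
    m, q, E, F = solve_saddle(alpha, beta, beta_s)
    M = erf(E / sqrt(2.0 * F))   # Eq. (14): int Dz sgn(a z + b) = erf(b / (a sqrt(2)))
    return (1.0 - M) / 2.0       # Eq. (8)

pb_a1 = bit_error_prob(1.0, 1.0, 1.0)   # beta = beta_s = 1 (optimal demodulator), alpha = 1
pb_a8 = bit_error_prob(8.0, 1.0, 1.0)   # same setting at a lower bit rate (alpha = 8)
```

As expected from the general trend in the paper, increasing $\alpha$ (lowering the information bit rate) drives the predicted $P_b$ down.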
(17)

The macroscopic properties of the MFA demodulator can be derived by the replica analysis as well, along the line proposed by Bray et al. [6]. We have derived the following saddle-point equations:

$$m = \int Dz\, f(z), \qquad q = \int Dz\, [f(z)]^2, \qquad \chi = \frac{1}{\sqrt{F}} \int Dz\, z f(z),$$
$$E = \frac{\alpha\beta}{1+\beta\chi}, \qquad F = \frac{\alpha\beta^2}{[1+\beta\chi]^2}\left[1 - 2m + q + \frac{1}{\beta_s}\right], \qquad (18)$$

where $f(z)$ is the function defined by

$$f(z) = \tanh\left[\sqrt{F}z - E f(z) + E\right]. \qquad (19)$$

$f(z)$ is a single-valued function of $z$ since $E$ is positive. The overlap $M$ is then calculated by

$$M = \int Dz\,\mathrm{sgn}(f(z)). \qquad (20)$$

This is the second main result of this paper.

²The proposal by Kechriotis and Manolakos [2] is to use the Hopfield model for an approximation to the MAP demodulation. The proposal in this paper goes beyond theirs in that the analog-valued Hopfield model is used to approximate not the MAP demodulator in the zero-temperature limit but the MPM demodulators directly, including the optimal one.

Figure 2: Bit error probability for the various demodulators (Opt., MAP, MFA, and CD) as a function of $\alpha$. (a) $\beta_s = 1$. (b) $\beta_s = 20$.

4.6 AT instability

The AT instability [7] refers to the bifurcation of a saddle-point solution without replica symmetry from the replica-symmetric one. In this paper we follow the usual convention and assume that the first such destabilization occurs in the so-called "replicon mode" [8]. As the stability condition of the RS saddle-point solution for the MPM demodulator, we obtain

$$\alpha - E^2 \int Dz\,\mathrm{sech}^4(\sqrt{F}z + E) = 0. \qquad (21)$$

For the MFA demodulator, we have

$$\alpha - E^2 \int Dz \left[\frac{1 - f(z)^2}{1 + E(1 - f(z)^2)}\right]^2 = 0. \qquad (22)$$

The RS solution is stable as long as the left-hand side of (21) or (22) is positive.
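For a finite instance, the mean-field equations (16) can also be iterated directly on simulated data; the sizes, noise level, damping factor, and dropped self-coupling below are illustrative assumptions for the sketch, not choices prescribed by the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
N, P, sigma_s = 30, 300, 0.4               # illustrative sizes: alpha = P / N = 10

xi = rng.choice([-1, 1], size=N)
eta = rng.choice([-1, 1], size=(P, N))
y = eta @ xi + rng.normal(0.0, sigma_s, size=P)

W = (eta.T @ eta) / N                       # w_ij of Eq. (6)
np.fill_diagonal(W, 0.0)                    # drop self-coupling in the iteration
h = (eta.T @ y) / N                         # h_i of Eq. (2)
beta = N / sigma_s**2                       # beta = beta_s = N / sigma_s^2

m = np.zeros(N)
for _ in range(500):                        # damped iteration of Eq. (16)
    m = 0.7 * m + 0.3 * np.tanh(beta * (-W @ m + h))

xi_mfa = np.sign(m)                         # MFA estimates, Eq. (17)
overlap = float(np.mean(xi_mfa * xi))
```

At this assumed $\alpha$ and noise level, the iteration settles near the transmitted bits and the overlap $M$ approaches 1, consistent with the low-$P_b$ regime of Fig. 2.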
5 Performance evaluation

The saddle-point equations (13) and (18) can be solved numerically to evaluate the bit error probability $P_b$ of the MPM demodulator and its naive mean-field approximation, respectively. We have investigated four demodulators: the optimal one ($\beta = \beta_s$), MAP, MFA (with $\beta = \beta_s$, i.e., the naive mean-field approximation to the optimal one), and CD. The results are summarized in Fig. 2 (a) and (b) for two cases, with $\beta_s = 1$ and 20, respectively. Increasing $\alpha$ corresponds to relatively lowering the information bit rate, so that $P_b$ should become small as $\alpha$ gets larger, which is consistent with the general trend observed in Fig. 2. The optimal demodulator shows consistently better performance than CD, as expected. The MAP demodulator marks almost the same performance as the optimal one (indeed, the result of the MAP demodulator is nearly the same as that of the optimal demodulator in the case $\beta_s = 1$, so they are indistinguishable from each other in Fig. 2 (a)). We also found that the performance of the optimal, MAP, and MFA demodulators is significantly improved in the large-$\alpha$ region when the variance $\sigma_s^2$ of the noise is small relative to $N$, the number of the users. For example, in order to achieve a practical level of bit error probability, say $P_b \sim 10^{-5}$, in the $\beta_s = 1$ case the optimal and MAP demodulators allow an information bit rate 2 times faster than CD does. On the other hand, in the $\beta_s = 20$ case they allow an information bit rate as much as 20 times faster than CD, which demonstrates that a significant process gain is achieved by the optimal and MAP demodulators in such cases. The MFA demodulator with $\beta = \beta_s$ showed performance competitive with the optimal one for the $\beta_s = 1$ case. Although the MFA demodulator fell behind the optimal and MAP demodulators in performance for the $\beta_s = 20$ case, it still had a process gain which allows an information bit rate about 10 times faster than CD does.
Moreover, we observed, using (22), that the RS saddle-point solution for the MFA demodulator with $\beta = \beta_s$ was stable with respect to replica symmetry breaking (RSB), and thus the RS ansatz was indeed valid for the MFA solution. It suggests that the free energy landscape is rather simple for these cases, making it easier for the MFA demodulator to find a good solution. This argument provides an explanation as to why the finite-temperature analog-valued Hopfield models, proposed heuristically by Kechriotis and Manolakos [2], exhibited better performance in their numerical experiments. We also found that the RS saddle-point solution for the optimal demodulator was stable with respect to RSB over the whole range investigated, whereas the solution for the MAP demodulator was found to be unstable. This observation suggests the possibility to construct efficient near-optimal demodulators using advanced mean-field approximations, such as the TAP approach [9, 10].

Acknowledgments

This work is supported in part by a Grant-in-Aid for Scientific Research from the Ministry of Education, Science, Sports and Culture, Japan.

References

[1] M. K. Simon, J. K. Omura, R. A. Scholtz, and B. K. Levitt, Spread Spectrum Communications Handbook, Revised Ed., McGraw-Hill, 1994.

[2] G. I. Kechriotis and E. S. Manolakos, "Hopfield neural network implementation of the optimal CDMA multiuser detector," IEEE Trans. Neural Networks, vol. 7, no. 1, pp. 131-141, Jan. 1996.

[3] A. J. Viterbi, CDMA: Principles of Spread Spectrum Communication, Addison-Wesley, Reading, Massachusetts, 1995.

[4] G. Winkler, Image Analysis, Random Fields and Dynamic Monte Carlo Methods, Springer-Verlag, Berlin, Heidelberg, 1995.

[5] Y. Iba, "The Nishimori line and Bayesian statistics," J. Phys. A: Math. Gen., vol. 32, no. 21, pp. 3875-3888, May 1999.

[6] A. J. Bray, H. Sompolinsky, and C. Yu, "On the 'naive' mean-field equations for spin glasses," J. Phys. C: Solid State Phys., vol. 19, no. 32, pp. 6389-6406, Nov. 1986.

[7] J. R.
L. de Almeida and D. J. Thouless, "Stability of the Sherrington-Kirkpatrick solution of a spin glass model," J. Phys. A: Math. Gen., vol. 11, no. 5, pp. 983-990, 1978.

[8] K. H. Fischer and J. A. Hertz, Spin Glasses, Cambridge University Press, Cambridge, 1991.

[9] D. J. Thouless, P. W. Anderson, and R. G. Palmer, "Solution of 'Solvable model of a spin glass'," Phil. Mag., vol. 35, no. 3, pp. 593-601, 1977.

[10] Y. Kabashima and D. Saad, "The belief in TAP," in M. S. Kearns et al. (eds.), Advances in Neural Information Processing Systems, vol. 11, The MIT Press, pp. 246-252, 1999.
|
2000
|
128
|
1,785
|
A variational mean-field theory for sigmoidal belief networks

C. Bhattacharyya
Computer Science and Automation
Indian Institute of Science
Bangalore, India, 560012
cbchiru@csa.iisc.ernet.in

S. Sathiya Keerthi
Mechanical and Production Engineering
National University of Singapore
mpessk@guppy.mpe.nus.edu.sg

Abstract

A variational derivation of Plefka's mean-field theory is presented. This theory is then applied to sigmoidal belief networks with the aid of further approximations. Empirical evaluation on small-scale networks shows that the proposed approximations are quite competitive.

1 Introduction

Application of mean-field theory to solve the problem of inference in belief networks (BNs) is well known [1]. In this paper we will discuss a variational mean-field theory and its application to BNs, sigmoidal BNs in particular. We present a variational derivation of the mean-field theory proposed by Plefka [2]. The theory will be developed for a stochastic system consisting of $N$ binary random variables, $S_i \in \{0, 1\}$, described by the energy function $E(S)$ and the following Boltzmann-Gibbs distribution at a temperature $T$:

$$P(S) = \frac{e^{-\frac{E(S)}{T}}}{Z}, \qquad Z = \sum_S e^{-\frac{E(S)}{T}}.$$

The application of this mean-field method to Boltzmann machines (BMs) has already been carried out [3]. A large class of BNs is described by the following energy function:

$$E(S) = -\sum_{i=1}^{N} \left\{ S_i \ln f(M_i) + (1 - S_i) \ln(1 - f(M_i)) \right\}, \qquad M_i = \sum_{j=1}^{i-1} w_{ij} S_j + h_i.$$

The application of the mean-field theory to such energy functions is not straightforward, and further approximations are needed. We propose a new approximation scheme and discuss its utility for sigmoid networks, which are obtained by substituting

$$f(x) = \frac{1}{1 + e^{-x}}$$

in the above energy function. The paper is organized as follows. In section 2 we present a variational derivation of Plefka's mean-field theory. In section 3 the theory is extended to sigmoidal belief networks. In section 4 empirical evaluation is done. Concluding remarks are given in section 5.
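As a small check (not in the original), the energy function above can be enumerated exactly for a tiny network. Note that at $T = 1$, $e^{-E(S)}$ is exactly the product of the conditional Bernoulli probabilities of the directed model, so $Z = 1$; the weights below are arbitrary illustrative values:

```python
import itertools
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def energy(S, w, h):
    """E(S) = -sum_i [ S_i ln f(M_i) + (1 - S_i) ln(1 - f(M_i)) ],
    with M_i = sum_{j < i} w_ij S_j + h_i."""
    E = 0.0
    for i in range(len(S)):
        M = sum(w[i][j] * S[j] for j in range(i)) + h[i]
        fM = sigmoid(M)
        E -= S[i] * math.log(fM) + (1 - S[i]) * math.log(1.0 - fM)
    return E

# a tiny 3-node network with arbitrary illustrative weights
w = [[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [-1.0, 2.0, 0.0]]
h = [0.2, -0.3, 0.1]
T = 1.0

states = list(itertools.product([0, 1], repeat=3))
Z = sum(math.exp(-energy(S, w, h) / T) for S in states)
probs = [math.exp(-energy(S, w, h) / T) / Z for S in states]
```

For $T \ne 1$ the partition function becomes nontrivial, which is precisely the setting the variational machinery of section 2 addresses.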
2 A variational mean-field theory Plefka [2] proposed a mean-field theory in the context of spin glasses. This theory can, in principle, yield arbitrarily close approximations to log Z. In this section we present an alternate derivation from a variational viewpoint; see also [4], [5]. Let $\gamma$ be a real parameter that takes values from 0 to 1. Let us define a $\gamma$-dependent partition and distribution function, $Z_\gamma = \sum_{\vec S} e^{-\gamma E(\vec S)/T}$, $p_\gamma = \frac{e^{-\gamma E(\vec S)/T}}{Z_\gamma}$. (1) Note that $Z_1 = Z$ and $p_1 = P$. Introducing an external real vector $\vec\theta$, let us rewrite (1) as $Z_\gamma = \tilde Z \sum_{\vec S} \tilde p_\gamma\, e^{-\sum_i \theta_i S_i}$, (2) where $\tilde Z$ is the partition function associated with the distribution $\tilde p_\gamma$ given by $\tilde Z = \sum_{\vec S} e^{-\gamma E/T + \sum_i \theta_i S_i}$, $\tilde p_\gamma = \frac{e^{-\gamma E/T + \sum_i \theta_i S_i}}{\tilde Z}$. (3) Using Jensen's inequality, $\langle e^{-x}\rangle \ge e^{-\langle x\rangle}$, we get $Z_\gamma = \tilde Z \sum_{\vec S}\tilde p_\gamma e^{-\sum_i \theta_i S_i} \ge \tilde Z\, e^{-\sum_i \theta_i u_i}$, (4) where $u_i = \langle S_i\rangle_{\tilde p_\gamma}$. (5) Taking logarithms on both sides of (4) we obtain $\log Z_\gamma \ge \log\tilde Z - \sum_i \theta_i u_i$. (6) The right-hand side is defined as a function of $\vec u$ and $\gamma$ via the following assumption. Invertibility assumption: for each fixed $\vec u$ and $\gamma$, (5) can be solved for $\vec\theta$. If the invertibility assumption holds then we can use $\vec u$ as the independent vector (with $\vec\theta$ dependent on $\vec u$) and rewrite (6) in terms of $G(\vec u,\gamma) = -\ln\tilde Z + \sum_i \theta_i u_i$. (7) This gives a variational interpretation: treat $\vec u$ as an external variable vector and choose it to minimize G for a fixed $\gamma$. The stationarity conditions of the above minimization problem yield $\theta_i = \frac{\partial G}{\partial u_i} = 0$. At the minimum point we have the equality $G = -\log Z_\gamma$. It is difficult to invert (5) for $\gamma \ne 0$, thus making it impossible to write an algebraic expression for G for any nonzero $\gamma$. At $\gamma = 0$ the inversion is straightforward and one obtains $G(\vec u, 0) = \sum_{i=1}^{N}\big(u_i\ln u_i + (1 - u_i)\ln(1 - u_i)\big)$, $\tilde p_0 = \prod_i u_i^{S_i}(1 - u_i)^{1 - S_i}$. A Taylor series approach is then undertaken around $\gamma = 0$ to build an approximation to G. Define $G_M = G(\vec u, 0) + \sum_{k=1}^{M} \frac{\gamma^k}{k!}\left.\frac{\partial^k G}{\partial\gamma^k}\right|_{\gamma=0}$. (8) Then $G_M$ can be considered an approximation of G.
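The zeroth-order term $G(\vec u, 0)$ is just the negative entropy of the factorized distribution $\tilde p_0$; a minimal sketch (names ours):

```python
import numpy as np

def G0(u):
    """G(u, 0) = sum_i [u_i ln u_i + (1 - u_i) ln(1 - u_i)]: the negative
    entropy of the factorized distribution
    p0(S) = prod_i u_i^{S_i} (1 - u_i)^{1 - S_i}, valid for 0 < u_i < 1."""
    u = np.asarray(u, dtype=float)
    return float(np.sum(u * np.log(u) + (1 - u) * np.log(1 - u)))
```

As a sanity check, for a single unit with $\theta = \ln\frac{u}{1-u}$ one has $-\ln\tilde Z + \theta u = u\ln u + (1-u)\ln(1-u)$, matching the closed form.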
The stationarity conditions are enforced by setting $\theta_i = \frac{\partial G}{\partial u_i} \approx \frac{\partial G_M}{\partial u_i} = 0$. In this paper we will restrict ourselves to M = 2. To do this we need to evaluate the derivatives $\left.\frac{\partial G}{\partial\gamma}\right|_{\gamma=0}$ and $\left.\frac{\partial^2 G}{\partial\gamma^2}\right|_{\gamma=0}$ ((9) and (10)). For M = 1 we have the standard mean-field approach. The expression for M = 2 can be identified with the TAP correction; the second-derivative term (10) yields the TAP term for the BM energy function. 3 Mean-field approximations for BNs The method, as developed in the previous section, is not directly useful for BNs because of the intractability of the partial derivatives at $\gamma = 0$. To overcome this problem, we suggest an approximation based on a Taylor series expansion. Though in this paper we restrict ourselves to the sigmoid activation function, the method is applicable to other activation functions as well. It enables calculation of all the terms required for extending Plefka's method to BNs. Since for BN operation T is fixed to 1, T will be dropped from all equations in the rest of the paper. Let us define a new energy function $E(\beta, \vec S, \vec u, w) = -\sum_{i=1}^{N}\{S_i\ln f(M_i(\beta)) + (1 - S_i)\ln(1 - f(M_i(\beta)))\}$, (11) where $0 \le \beta \le 1$, $M_i(\beta) = \sum_{j=1}^{i-1} w_{ij}\beta(S_j - u_j) + \bar M_i$, $\bar M_i = \sum_{j=1}^{i-1} w_{ij}u_j + h_i$, and $u_k = \sum_{\vec S} S_k\, p_{\gamma\beta}\ \forall k$, $p_{\gamma\beta} = \frac{e^{-\gamma E + \sum_i\theta_i S_i}}{\sum_{\vec S} e^{-\gamma E + \sum_i\theta_i S_i}}$. (12) Since $\beta$ is the important parameter, $E(\beta, \vec S, \vec u, w)$ will be referred to as $E(\beta)$ so as to avoid notational clumsiness. We use a Taylor series approximation of $E(\beta)$ with respect to $\beta$. Let us define $\tilde E_C(\beta) = E(0) + \sum_{k=1}^{C}\frac{\beta^k}{k!}\left.\frac{\partial^k E}{\partial\beta^k}\right|_{\beta=0}$. (13) If $\tilde E_C$ approximates E, then we can write $E = E(1) \approx \tilde E_C(1)$. (14) Let us now define the function $A(\gamma, \beta, \vec u) = -\ln\sum_{\vec S} e^{-\gamma E + \sum_i\theta_i S_i} + \sum_i\theta_i u_i$. (15) The $\theta_i$ are assumed to be functions of $\vec u$, $\beta$, $\gamma$, which are obtained by inverting equations (12). By replacing E by $\tilde E_C$ in (15) we obtain $A_C(\gamma, \beta, \vec u) = -\ln\sum_{\vec S} e^{-\gamma\tilde E_C + \sum_i\theta_i S_i} + \sum_i\theta_i u_i$, (16) where the definition of $\vec u$ is likewise obtained by replacing E by $\tilde E_C$.
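The $\beta$-parametrized energy (11) can be checked numerically: at $\beta = 1$ the terms $\beta(S_j - u_j)$ and $\bar M_i$ recombine to give $M_i = \sum_{j<i} w_{ij}S_j + h_i$, so the original BN energy is recovered. A small sketch of this identity (names ours):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def E_beta(beta, S, u, W, h):
    """Energy E(beta) of eq. (11):
    M_i(beta) = sum_{j<i} w_ij * beta * (S_j - u_j) + Mbar_i,
    with Mbar_i = sum_{j<i} w_ij u_j + h_i."""
    E = 0.0
    for i in range(len(S)):
        Mbar = W[i, :i] @ u[:i] + h[i]
        M = beta * (W[i, :i] @ (S[:i] - u[:i])) + Mbar
        p = sigmoid(M)
        E -= S[i] * np.log(p) + (1 - S[i]) * np.log(1 - p)
    return E
```

At $\beta = 0$ the argument reduces to the mean field $\bar M_i$, which is what makes the $\beta$-expansion tractable.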
In view of (14) one can consider $A_C$ as an approximation to A. This observation suggests an approximation to G: $G(\gamma, \vec u) = A(\gamma, 1, \vec u) \approx A_C(\gamma, 1, \vec u)$. (17) The terms needed in the Taylor expansion of G in $\gamma$ can then be approximated by $G(0, \vec u) = A(0, 1, \vec u) = A_C(0, 1, \vec u)$ and $\left.\frac{\partial^k G}{\partial\gamma^k}\right|_{\gamma=0} = \left.\frac{\partial^k A}{\partial\gamma^k}\right|_{\gamma=0,\beta=1} \approx \left.\frac{\partial^k A_C}{\partial\gamma^k}\right|_{\gamma=0,\beta=1}$. (18) The biggest advantage of working with $A_C$ rather than G is that the partial derivatives of $A_C$ with respect to $\gamma$ at $\gamma = 0$ and $\beta = 1$ can be expressed as functions of $\vec u$. Figure 1: Three-layer BN (2 x 4 x 6) with top-down propagation of beliefs. The activation function was chosen to be sigmoid. In light of the above discussion one can consider $G_M \approx G_{MC}$; hence the mean-field equations can be stated as $\theta_i = \frac{\partial G}{\partial u_i} \approx \frac{\partial G_M}{\partial u_i} \approx \frac{\partial G_{MC}}{\partial u_i} = 0$. (19) In this paper we restrict ourselves to M = 2. The relevant objective functions for general C can all be expressed as functions of $\vec u$. 4 Experimental results To test the approximation schemes developed in the previous sections, numerical experiments were conducted. Saul et al. [1] pioneered the application of mean-field theory to BNs. We will refer to their method as the SJJ approach, and we compare our schemes with it. Small networks were chosen so that ln Z can be computed by exact enumeration for evaluation purposes. For all the experiments the network topology was fixed to the one shown in Figure 1. This choice of network enables us to compare the results with those of [1]. To compare the performance of our methods with their method we repeated the experiment conducted by them for sigmoid BNs. Ten thousand networks were generated by randomly choosing weight values in [-1, 1]. The bottom-layer units, the visible units of each network, were instantiated to zero. The likelihood, ln Z, was computed by exact enumeration of all the states in the higher two layers.
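Exact enumeration of ln Z is only feasible for such small networks; the sketch below (all names ours) enumerates the joint states, with an optional clamp for the visible units so that ln Z becomes the log-likelihood of the clamped configuration.

```python
import itertools
import numpy as np

def log_Z(W, h, clamp=None):
    """Exact ln Z of a sigmoid belief network by brute-force enumeration
    (feasible only for small N).  `clamp` maps unit index -> fixed value;
    clamping the visible units makes ln Z the log-likelihood."""
    def energy(S):
        E = 0.0
        for i in range(len(S)):
            p = 1.0 / (1.0 + np.exp(-(W[i, :i] @ S[:i] + h[i])))
            E -= S[i] * np.log(p) + (1 - S[i]) * np.log(1 - p)
        return E
    clamp = clamp or {}
    N = len(h)
    free = [i for i in range(N) if i not in clamp]
    total = 0.0
    for bits in itertools.product([0.0, 1.0], repeat=len(free)):
        S = np.zeros(N)
        for i, v in clamp.items():
            S[i] = v
        for i, v in zip(free, bits):
            S[i] = v
        total += np.exp(-energy(S))
    return float(np.log(total))
```

A useful sanity check: with no units clamped, the directed model is normalized, so ln Z = 0 exactly.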
The approximate value of $-\ln Z$ was computed by $G_{MC}$; $\vec u$ was computed by solving the fixed-point equations obtained from (19). The goodness of each approximation scheme was tested by the measure $\mathcal{E} = \frac{-G_{MC}}{\ln Z} - 1$. (22) For a proper comparison we also implemented the SJJ method; its goodness of approximation is evaluated by substituting $G_{MC}$ in (22) by the corresponding bound (for the specific formula see [1]). The results are presented in the form of histograms in Figure 2. We also repeated the experiment with weights and biases taking values between -5 and 5; those results are presented in the form of histograms in Figure 3. The findings are summarized in the form of means in Table 1.

Table 1: Mean of $\mathcal{E}$ for randomly generated sigmoid networks, in different weight ranges.

Scheme | small weights [-1, 1] | large weights [-5, 5]
G11    | -0.0404               | -0.0440
G12    |  0.0155               |  0.0231
G22    |  0.0029               | -0.0456
SJJ    |  0.0157               |  0.0962

For small weights G12 and the SJJ approach show close results, which was expected. But the improvement achieved by the G22 scheme is remarkable; it gave a mean value of 0.0029, which compares substantially well against the mean value of 0.01139 reported in [6]. The improvement in [6] was achieved by using a mixture distribution, which requires the introduction of extra variational variables; more than 100 extra variational variables are needed for a 5-component mixture. This results in a substantial increase in computation cost. On the other hand, the extra computational cost of G22 over G12 is marginal. This makes the G22 scheme computationally attractive over the mixture distribution. Figure 2: Histograms for the G1C and SJJ schemes for weights taking values in [-1, 1], for sigmoid networks. The plot on the left shows histograms of $\mathcal{E}$ for the schemes G11 and G12. They did not have any overlap; G11 gives a mean of -0.040 while G12 gives a mean of 0.0155.
The middle plot shows the histogram for the SJJ scheme, with mean 0.0157. The plot at the extreme right is for the scheme G22, with mean 0.0029. Of the three schemes G12 is the most robust and also yields reasonably accurate results. It is outperformed only by G22 in the case of sigmoid networks with low weights. Empirical evidence thus suggests that the choice of a scheme is not straightforward and depends on the activation function and also on parameter values. Figure 3: Histograms for the G1C and SJJ schemes for weights taking values in [-5, 5] for sigmoid networks. The leftmost histogram shows $\mathcal{E}$ for the G11 scheme, having a mean of -0.0440; second from left is the G12 scheme, having a mean of 0.0231; and second from right is the SJJ scheme, having a mean of 0.0962. The scheme G22 is at the extreme right, with mean -0.0456. 5 Discussion Application of Plefka's theory to BNs is not straightforward. It requires computation of some averages which are not tractable. We presented a scheme in which the BN energy function is approximated by a Taylor series, which gives a tractable approximation to the terms required for Plefka's method. Various approximation schemes, depending on the degree of the Taylor series expansion, are derived. Unlike the approach in [1], the schemes discussed here are simpler as they do not introduce extra variational variables. Empirical evaluation on small-scale networks shows that the quality of the approximations is quite good. For a more detailed discussion of these points see [7]. References [1] Saul, L. K., Jaakkola, T. and Jordan, M. I. (1996), Mean field theory for sigmoid belief networks, Journal of Artificial Intelligence Research, 4. [2] Plefka, T. (1982), Convergence condition of the TAP equation for the infinite-ranged Ising glass model, J. Phys. A: Math. Gen., 15. [3] Kappen, H. J. and Rodriguez, F.
B. (1998), Boltzmann machine learning using mean field theory and linear response correction, Advances in Neural Information Processing Systems 10, eds. M. I. Jordan, M. J. Kearns and S. A. Solla, MIT Press. [4] Georges, A. and Yedidia, J. S. (1991), How to expand around mean-field theory using high-temperature expansions, J. Phys. A: Math. Gen., 24. [5] Bhattacharyya, C. and Keerthi, S. S. (2000), Information geometry and Plefka's mean-field theory, J. Phys. A: Math. Gen., 33. [6] Bishop, C. M., Lawrence, N., Jaakkola, T. and Jordan, M. I. (1997), Approximating posterior distributions in belief networks using mixtures, Advances in Neural Information Processing Systems 10, eds. M. I. Jordan, M. J. Kearns and S. A. Solla, MIT Press. [7] Bhattacharyya, C. and Keerthi, S. S. (1999), Mean field theory for a special class of belief networks, accepted in Journal of Artificial Intelligence Research.
|
2000
|
129
|
1,786
|
A Linear Programming Approach to Novelty Detection Colin Campbell Dept. of Engineering Mathematics, Bristol University, Bristol, BS8 1TR, United Kingdom C.Campbell@bris.ac.uk Kristin P. Bennett Dept. of Mathematical Sciences Rensselaer Polytechnic Institute Troy, New York 12180-3590 United States bennek@rpi.edu Abstract Novelty detection involves modelling the normal behaviour of a system, hence enabling detection of any divergence from normality. It has potential applications in many areas such as detection of machine damage or highlighting abnormal features in medical data. One approach is to build a hypothesis estimating the support of the normal data, i.e. constructing a function which is positive in the region where the data is located and negative elsewhere. Recently kernel methods have been proposed for estimating the support of a distribution and they have performed well in practice; training involves solution of a quadratic programming problem. In this paper we propose a simpler kernel method for estimating the support based on linear programming. The method is easy to implement and can learn large datasets rapidly. We demonstrate the method on medical and fault detection datasets. 1 Introduction. An important classification task is the ability to distinguish between new instances similar to members of the training set and all other instances that can occur. For example, we may want to learn the normal running behaviour of a machine and highlight any significant divergence from normality which may indicate onset of damage or faults. This issue is a generic problem in many fields. For example, an abnormal event or feature in medical diagnostic data typically leads to further investigation. Novel events can be highlighted by constructing a real-valued density estimation function. However, here we will consider the simpler task of modelling the support of a data distribution, i.e.
creating a binary-valued function which is positive in those regions of input space where the data predominantly lies and negative elsewhere. Recently kernel methods have been applied to this problem [4]. In this approach data is implicitly mapped to a high-dimensional space called feature space [13]. Suppose the data points in input space are $x_i$ (with i = 1, ..., m) and the mapping is $x_i \rightarrow \phi(x_i)$; then, in the span of $\{\phi(x_i)\}$, we can expand a vector $w = \sum_j \alpha_j\phi(x_j)$. Hence we can define separating hyperplanes in feature space by $w\cdot\phi(x) + b = 0$. We will refer to $w\cdot\phi(x_i) + b$ as the margin, which will be positive on one side of the separating hyperplane and negative on the other. Thus we can also define a decision function $f(z) = \mathrm{sign}(w\cdot\phi(z) + b)$, (1) where z is a new data point. The data appears in the form of an inner product in feature space, so we can implicitly define feature space by our choice of kernel function $K(x_i, x_j) = \phi(x_i)\cdot\phi(x_j)$. (2) A number of choices for the kernel are possible, for example RBF kernels: $K(x_i, x_j) = e^{-\|x_i - x_j\|^2/2\sigma^2}$. (3) With the given kernel the decision function is therefore $f(z) = \mathrm{sign}\big(\sum_i\alpha_i K(z, x_i) + b\big)$. (4) One approach to novelty detection is to find a hypersphere in feature space with a minimal radius R and centre a which contains most of the data: novel test points lie outside the boundary of this hypersphere [3, 12]. This approach to novelty detection was proposed by Tax and Duin [10] and successfully used on real-life applications [11]. The effect of outliers is reduced by using slack variables $\xi_i$ to allow for data points outside the sphere, and the task is to minimise the volume of the sphere and the number of data points outside it, i.e. $\min\ \big[R^2 + \lambda\sum_i\xi_i\big]$ s.t. $(x_i - a)\cdot(x_i - a) \le R^2 + \xi_i$, $\xi_i \ge 0$. (5) Since the data appears in the form of inner products, kernel substitution can be applied and the learning task can be reduced to a quadratic programming problem. An alternative approach has been developed by Schölkopf et al. [7].
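The RBF kernel (3) and the kernelised decision function (4) are easy to compute directly; a minimal sketch (function names ours):

```python
import numpy as np

def rbf_kernel(X, Z, sigma):
    """K(x, z) = exp(-||x - z||^2 / (2 sigma^2)), computed pairwise -- eq. (3)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def decision(z, X, alpha, b, sigma):
    """f(z) = sign(sum_i alpha_i K(z, x_i) + b) -- eq. (4)."""
    val = rbf_kernel(np.atleast_2d(z), X, sigma) @ alpha + b
    return np.sign(val)[0]
```

Note that for RBF kernels K(x, x) = 1, which is the property exploited in the hypersphere picture below.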
Suppose we restrict our attention to RBF kernels (3); then the data lies on the surface of a hypersphere in feature space, since $\phi(x)\cdot\phi(x) = K(x, x) = 1$. The objective is therefore to separate off the region of the surface containing data from the region containing no data. This is achieved by constructing a hyperplane which is maximally distant from the origin, with all data points lying on the opposite side from the origin and such that the margin is positive. The learning task in dual form involves minimisation of $W(\alpha) = \frac{1}{2}\sum_{j,k=1}^{m}\alpha_j\alpha_k K(x_j, x_k)$ s.t. $0 \le \alpha_i \le C$, $\sum_{i=1}^{m}\alpha_i = 1$. (6) However, the origin plays a special role in this model. As the authors point out [9], this is a disadvantage since the origin effectively acts as a prior for where the class of abnormal instances is assumed to lie. In this paper we avoid this problem: rather than repelling the hyperplane away from an arbitrary point outside the data distribution, we instead try to attract the hyperplane towards the centre of the data distribution. In this paper we will outline a new algorithm for novelty detection which can be easily implemented using linear programming (LP) techniques. As we illustrate in section 3, it performs well in practice on datasets involving the detection of abnormalities in medical data and fault detection in condition monitoring. 2 The Algorithm For the hard margin case (see Figure 1) the objective is to find a surface in input space which wraps around the data clusters: anything outside this surface is viewed as abnormal. This surface is defined as the level set J(z) = 0 of some nonlinear function. In feature space, $J(z) = \sum_i\alpha_i K(z, x_i) + b$, this corresponds to a hyperplane which is pulled onto the mapped data points with the restriction that the margin always remains positive or zero. We make the fit of this nonlinear function or hyperplane as tight as possible by minimizing the mean value of the output of the function, i.e. $\sum_i J(x_i)$.
This is achieved by minimising $\min_{\alpha, b}\ \frac{1}{m}\sum_{i=1}^{m}\Big(\sum_{j=1}^{m}\alpha_j K(x_i, x_j) + b\Big)$ (7) subject to $\sum_{j=1}^{m}\alpha_j K(x_i, x_j) + b \ge 0$, (8) $\sum_{i=1}^{m}\alpha_i = 1,\quad \alpha_i \ge 0$. (9) The bias b is just treated as an additional parameter in the minimisation process, though unrestricted in sign. The added constraints (9) on $\alpha$ bound the class of models to be considered - we don't want to consider simple linear rescalings of the model. These constraints amount to a choice of scale for the weight vector normal to the hyperplane in feature space and hence do not impose a restriction on the model. Also, these constraints ensure that the problem is well-posed and that an optimal solution with $\alpha \ne 0$ exists. Other constraints on the class of functions are possible, e.g. $\|\alpha\|_1 = 1$ with no restriction on the sign of $\alpha_i$. Many real-life datasets contain noise and outliers. To handle these we can introduce a soft margin, in analogy to the usual approach used with support vector machines. In this case we minimise $\frac{1}{m}\sum_{i=1}^{m}\Big(\sum_{j=1}^{m}\alpha_j K(x_i, x_j) + b\Big) + \lambda\sum_i\xi_i$ (10) subject to $\sum_{j=1}^{m}\alpha_j K(x_i, x_j) + b \ge -\xi_i,\quad \xi_i \ge 0$, (11) and constraints (9). The parameter $\lambda$ controls the extent of margin errors (larger $\lambda$ means fewer outliers are ignored; $\lambda\rightarrow\infty$ corresponds to the hard margin limit). The above problem can be easily solved for problems with thousands of points using standard simplex or interior-point algorithms for linear programming. With the addition of column generation techniques, these same approaches can be adopted for very large problems in which the kernel matrix exceeds the capacity of main memory. Column generation algorithms incrementally add and drop columns, each corresponding to a single kernel function, until optimality is reached. Such approaches have been successfully applied to other support vector problems [6, 2]. Basic simplex algorithms were sufficient for the problems considered in this paper, so we defer a listing of the code for column generation to a later paper, together with experiments on large datasets [1]. 3 Experiments Artificial datasets.
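Before turning to the experiments: the soft-margin LP above maps directly onto a generic LP solver. The sketch below (names ours; using `scipy.optimize.linprog` as one possible solver, not the authors' implementation) stacks the variables as x = [alpha, b, xi].

```python
import numpy as np
from scipy.optimize import linprog

def novelty_lp(K, lam):
    """Soft-margin novelty-detection LP:
    minimise (1/m) 1^T (K a + b 1) + lam * 1^T xi
    s.t.     K a + b 1 >= -xi,  sum(a) = 1,  a >= 0, xi >= 0."""
    m = K.shape[0]
    c = np.concatenate([K.mean(axis=0), [1.0], lam * np.ones(m)])
    A_ub = np.hstack([-K, -np.ones((m, 1)), -np.eye(m)])  # -(K a + b) - xi <= 0
    b_ub = np.zeros(m)
    A_eq = np.concatenate([np.ones(m), [0.0], np.zeros(m)])[None, :]
    bounds = [(0, None)] * m + [(None, None)] + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    alpha, b, xi = res.x[:m], res.x[m], res.x[m + 1:]
    return alpha, b, xi
```

The learned alpha and b then plug into the decision function (4); points with negative margin are flagged as novel.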
Before considering experiments on real-life data we will first illustrate the performance of the algorithm on some artificial datasets. In Figure 1 the algorithm places a boundary around two data clusters in input space: a hard margin was used with RBF kernels and $\sigma = 0.2$. In Figure 2 four outliers lying outside a single cluster are ignored when the system is trained using a soft margin. In Figure 3 we show the effect of using a modified RBF kernel $K(x_i, x_j) = e^{-|x_i - x_j|/2\sigma^2}$. This kernel and the one in (3) use a measure depending only on $x - y$; thus K(x, x) is constant and the points lie on the surface of a hypersphere in feature space. As a consequence, a hyperplane slicing through this hypersphere gives a closed boundary separating normal and abnormal regions in input space; however, we found that other choices of kernel may not produce closed boundaries in input space. Figure 1: The solution in input space for the hyperplane minimising $W(\alpha, b)$ in equation (7). A hard margin was used with RBF kernels trained using $\sigma = 0.2$. Medical Diagnosis. For detection of abnormalities in medical data we investigated performance on the Biomed dataset [5] from the Statlib data archive [14]. Figure 2: In this example 4 outliers are ignored by using a soft margin (with $\lambda = 10.0$). RBF kernels were used with $\sigma = 0.2$. Figure 3: The solution in input space for a modified RBF kernel $K(x_i, x_j) = e^{-|x_i - x_j|/2\sigma^2}$ with $\sigma = 0.5$. This dataset consisted of 194 observations, each with 4 attributes corresponding to measurements made on blood samples (15 observations with missing values were removed). We trained the system on 100 randomly chosen normal observations from healthy patients.
The system was then tested on 27 normal observations and 67 observations which exhibited abnormalities due to the presence of a rare genetic disease. In Figure 4 we plot the results of training the novelty detector using a hard margin and RBF kernels. This plot gives the error rate (as a percentage) on the y-axis versus $\sigma$ on the x-axis, with the solid curve giving the performance on normal observations in the test data and the dashed curve giving performance on abnormal observations. Clearly, when $\sigma$ is very small the system puts a Gaussian of narrow width around each data point and hence all test data is labelled as abnormal. As $\sigma$ increases the model improves, and at $\sigma = 1.1$ all but 2 of the normal test observations are correctly labelled and 57 of the 67 abnormal observations are correctly labelled. As $\sigma$ increases to $\sigma = 10.0$ the solution has 1 normal test observation incorrectly labelled and 29 abnormal observations correctly labelled. Figure 4: The error rate (as a percentage) on the y-axis, versus $\sigma$ on the x-axis. The solid curve gives the performance on normal observations in the test data and the dashed curve gives performance on abnormal observations. The kernel parameter $\sigma$ is therefore crucial in determining the balance between normality and abnormality. Future research on model selection may indicate a good choice for the kernel parameter. However, if the dataset is large enough and some abnormal events are known, then a validation study can be used to determine the kernel parameter, as we illustrate with the application below. Interestingly, if we use an ensemble of models instead, with $\sigma$ chosen across a range, then the relative proportion indicating abnormality gives an approximate measure of the confidence in the novelty of an observation: 29 observations are abnormal for all $\sigma$ in Figure 4 and hence must be abnormal with high confidence. Condition Monitoring.
Fault detection is an important generic problem in the condition monitoring of machinery: failure to detect faults can lead to machine damage, while an oversensitive fault detection system can lead to expensive and unnecessary downtime. As an example we will consider detection of 4 classes of fault in ball-bearing cages, which are often safety-critical components in machines, vehicles and other systems such as aircraft wing flaps. In this study we used a dataset from the Structural Integrity and Damage Assessment Network [15]. Each instance consisted of 2048 samples of acceleration taken with a Bruel and Kjaer vibration analyser. After pre-processing with a discrete Fast Fourier Transform, each such instance had 32 attributes characterising the measured signals. The dataset consisted of 5 categories: normal data corresponding to measurements made from new ball-bearings, and 4 types of abnormality which we will call type 1 (outer race completely broken), type 2 (broken cage with one loose element), type 3 (damaged cage with four loose elements) and type 4 (a badly worn ball-bearing with no evident damage). To train the system we used 913 normal instances from new ball-bearings. Using RBF kernels, the best value of $\sigma$ ($\sigma = 320.0$) was found using a validation study consisting of 913 new normal instances, 747 instances of type 1 faults and 996 instances of type 2 faults. On new test data 98.7% of normal instances were correctly labelled (913 instances), 100% of type 1 instances were correctly labelled (747 instances) and 53.3% of type 2 instances were correctly labelled (996 instances). Of course, with ample normal and abnormal data this problem could also be approached using a binary classifier instead. Thus, to evaluate performance on totally unseen abnormalities, we tested the novelty detector on type 3 and type 4 errors (with 996 instances of each).
The novelty detector labelled 28.3% of type 3 and 25.5% of type 4 instances as abnormal, which was statistically significant against a background of 1.3% errors on normal data. 4 Conclusion In this paper we have presented a new kernelised novelty detection algorithm which uses linear programming techniques rather than quadratic programming. The algorithm is simple, easy to implement with standard LP software packages, and it performs well in practice. The algorithm is also very fast in execution: for the 913 training examples used in the experiments on condition monitoring, the model was constructed in about 4 seconds using a Silicon Graphics Origin 200. References [1] K. Bennett and C. Campbell. A Column Generation Algorithm for Novelty Detection. Preprint in preparation. [2] K. Bennett, A. Demiriz and J. Shawe-Taylor. A Column Generation Algorithm for Boosting. In Proceedings of the International Conference on Machine Learning, Stanford, CA, 2000. [3] C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2, p. 121-167, 1998. [4] C. Campbell. An Introduction to Kernel Methods. In: Radial Basis Function Networks: Design and Applications, R. J. Howlett and L. C. Jain (eds), Physica Verlag, Berlin, to appear. [5] L. Cox, M. Johnson and K. Kafadar. Exposition of Statistical Graphics Technology. ASA Proceedings of the Statistical Computation Section, p. 55-56, 1982. [6] O. L. Mangasarian and D. Musicant. Massive Support Vector Regression. Data Mining Institute Technical Report 99-02, University of Wisconsin-Madison, 1999. [7] B. Schölkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola and R. C. Williamson. Estimating the support of a high-dimensional distribution. Microsoft Research Technical Report MSR-TR-99-87, 1999. [8] B. Schölkopf, R. Williamson, A. Smola and J. Shawe-Taylor. SV estimation of a distribution's support. In Neural Information Processing Systems, 2000, to appear. [9] B. Schölkopf, J. Platt and A. Smola.
Kernel Method for Percentile Feature Extraction. Microsoft Technical Report MSR-TR-2000-22. [10] D. Tax and R. Duin. Data domain description by Support Vectors. In Proceedings of ESANN99, ed. M. Verleysen, D Facto Press, Brussels, p. 251-256, 1999. [11] D. Tax, A. Ypma and R. Duin. Support vector data description applied to machine vibration analysis. In: M. Boasson, J. Kaandorp, J. Tonino, M. Vosselman (eds), Proc. 5th Annual Conference of the Advanced School for Computing and Imaging (Heijen, NL, June 15-17), 1999, p. 398-405. [12] V. Vapnik. The Nature of Statistical Learning Theory. Springer, N.Y., 1995. [13] V. Vapnik. Statistical Learning Theory. Wiley, 1998. [14] http://lib.stat.cmu.edu/datasets [15] http://www.sidanet.org
|
2000
|
13
|
1,787
|
Robust Reinforcement Learning Jun Morimoto Graduate School of Information Science Nara Institute of Science and Technology; Kawato Dynamic Brain Project, JST 2-2 Hikaridai Seika-cho Soraku-gun Kyoto 619-0288 JAPAN xmorimo@erato.atr.co.jp Kenji Doya ATR International; CREST, JST 2-2 Hikaridai Seika-cho Soraku-gun Kyoto 619-0288 JAPAN doya@isd.atr.co.jp Abstract This paper proposes a new reinforcement learning (RL) paradigm that explicitly takes into account input disturbance as well as modeling errors. The use of environmental models in RL is quite popular both for off-line learning by simulations and for on-line action planning. However, the difference between the model and the real environment can lead to unpredictable, often unwanted results. Based on the theory of $H_\infty$ control, we consider a differential game in which a 'disturbing' agent (disturber) tries to make the worst possible disturbance while a 'control' agent (actor) tries to make the best control input. The problem is formulated as finding a min-max solution of a value function that takes into account the norm of the output deviation and the norm of the disturbance. We derive on-line learning algorithms for estimating the value function and for calculating the worst disturbance and the best control in reference to the value function. We tested the paradigm, which we call "Robust Reinforcement Learning (RRL)," in the task of an inverted pendulum. In the linear domain, the policy and the value function learned by the on-line algorithms coincided with those derived analytically from linear $H_\infty$ theory. For a fully nonlinear swing-up task, the control by RRL achieved robust performance against changes in the pendulum weight and friction, while a standard RL controller could not deal with such environmental changes. 1 Introduction In this study, we propose a new reinforcement learning paradigm that we call "Robust Reinforcement Learning (RRL)."
Plain, model-free reinforcement learning (RL) is desperately slow when applied to on-line learning of real-world problems. Thus the use of environmental models has been quite common, both for on-line action planning [3] and for off-line learning by simulation [4]. However, no model can be perfect, and modeling errors can cause unpredictable results, sometimes worse than with no model at all. In fact, robustness against model uncertainty has been the main subject of research in the control community for the last twenty years, and the result is formalized as $H_\infty$ control theory [6]. In general, a modeling error causes a deviation of the real system state from the state predicted by the model. This can be re-interpreted as a disturbance to the model. However, the problem is that the disturbance due to a modeling error can have a strong correlation, and thus a standard Gaussian assumption may not be valid. The basic strategy to achieve robustness is to keep the sensitivity $\gamma$ of the feedback control loop against a disturbance input small enough so that any disturbance due to the modeling error can be suppressed if the gain of the mapping from the state error to the disturbance is bounded by $1/\gamma$. In the $H_\infty$ paradigm, those 'disturbance-to-error' and 'error-to-disturbance' gains are measured by max norms of the functional mappings in order to assure stability for any mode of disturbance. In the following, we briefly introduce the $H_\infty$ paradigm and show that design of a robust controller can be achieved by finding a min-max solution of a value function, which is formulated as the Hamilton-Jacobi-Isaacs (HJI) equation. We then derive on-line algorithms for estimating the value function and for simultaneously deriving the worst disturbance and the best control that, respectively, maximizes and minimizes the value function. We test the validity of the algorithms first in a linear inverted pendulum task.
It is verified that the value function as well as the disturbance and control policies derived by the on-line algorithm coincide with the solution of the Riccati equations given by $H_\infty$ theory. We then compare the performance of the robust RL algorithm with a standard model-based RL in a nonlinear task of pendulum swing-up [3]. It is shown that the robust RL controller can accommodate changes in the weight and the friction of the pendulum, which a standard RL controller cannot cope with. 2 $H_\infty$ Control Figure 1: (a) Generalized plant and controller, (b) small gain theorem. Standard $H_\infty$ control [6] deals with a system shown in Fig. 1(a), where G is the plant, K is the controller, u is the control input, y is the measurement available to the controller (in the following, we assume all the states are observable, i.e. y = x), w is an unknown disturbance, and z is the error output that is desired to be kept small. In general, the controller K is designed to stabilize the closed-loop system based on a model of the plant G. However, when there is a discrepancy between the model and the actual plant dynamics, the feedback loop could be unstable. The effect of a modeling error can be equivalently represented as a disturbance w generated by an unknown mapping $\Delta$ of the plant output z, as shown in Fig. 1(b). The goal of the $H_\infty$ control problem is to design a controller K that brings the error z to zero while minimizing the $H_\infty$ norm of the closed-loop transfer function from the disturbance w to the output z, $\|T_{zw}\|_\infty = \sup_{w}\frac{\|z\|_2}{\|w\|_2} = \sup_\omega \bar\sigma(T_{zw}(j\omega))$. (1) Here $\|\cdot\|_2$ denotes the $L_2$ norm and $\bar\sigma$ denotes the maximum singular value. The small gain theorem assures that if $\|T_{zw}\|_\infty \le \gamma$, then the system shown in Fig. 1(b) will be stable for any stable mapping $\Delta: z \mapsto w$ with $\|\Delta\|_\infty < 1/\gamma$. 2.1 Min-max Solution to the $H_\infty$ Problem We consider a dynamical system $\dot x = f(x, u, w)$.
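For a linear closed-loop system, the norm in (1) can be estimated numerically by sweeping frequencies and taking the largest singular value of the transfer matrix; the sketch below (names ours) computes a grid-based lower bound of this kind.

```python
import numpy as np

def hinf_norm(A, B, C, D, omegas):
    """Grid-based estimate of ||T_zw||_inf = sup_w sigma_max(T(jw)),
    with T(jw) = C (jw I - A)^{-1} B + D.  This is a lower bound;
    exact methods use Hamiltonian-matrix bisection instead of a grid."""
    n = A.shape[0]
    best = 0.0
    for w in omegas:
        T = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        best = max(best, np.linalg.svd(T, compute_uv=False)[0])
    return best
```

For the first-order system $\dot x = -x + w$, $z = x$, the transfer function is $1/(s+1)$, whose $H_\infty$ norm is 1 (attained at $\omega = 0$).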
The H∞ control problem is equivalent to finding a control output u that satisfies the constraint

∫_0^∞ z^T(t) z(t) dt ≤ γ² ∫_0^∞ w^T(t) w(t) dt   (2)

against all possible disturbances w with x(0) = 0, because it implies

‖z‖_2 / ‖w‖_2 ≤ γ.   (3)

We can consider this problem as a differential game [5] in which the best control output u that minimizes V is sought while the worst disturbance w that maximizes V is chosen. Thus an optimal value function V* is defined as

V* = min_u max_w ∫_0^∞ (z^T(t) z(t) − γ² w^T(t) w(t)) dt.   (4)

The condition for the optimal value function is given by

0 = min_u max_w [ z^T z − γ² w^T w + (∂V*/∂x) f(x, u, w) ],   (5)

which is known as the Hamilton-Jacobi-Isaacs (HJI) equation. From (5), we can derive the optimal control output u_op and the worst disturbance w_op by solving

∂(z^T z)/∂u + (∂V/∂x)(∂f(x, u, w)/∂u) = 0  and  ∂(z^T z)/∂w − 2γ² w^T + (∂V/∂x)(∂f(x, u, w)/∂w) = 0.   (6)

3 Robust Reinforcement Learning

Here we consider a continuous-time formulation of reinforcement learning [3] with the system dynamics ẋ = f(x, u) and the reward r(x, u). The basic goal is to find a policy u = g(x) that maximizes the cumulative future reward ∫_t^∞ e^{−(s−t)/τ} r(x(s), u(s)) ds for any given state x(t), where τ is a time constant of evaluation. However, a particular policy that was optimized for a certain environment may perform badly when the environmental setting changes. In order to assure robust performance under a changing environment or unknown disturbance, we introduce the notion of the worst disturbance in H∞ control to the reinforcement learning paradigm. In this framework, we consider an augmented reward

q(t) = r(x(t), u(t)) + s(w(t)),   (7)

where s(w(t)) is an additional reward for withstanding a disturbing input, for example, s(w) = γ² w^T w. The augmented value function is then defined as

V(x(t)) = ∫_t^∞ e^{−(s−t)/τ} q(x(s), u(s), w(s)) ds.   (8)

The optimal value function is given by the solution of a variant of the HJI equation

(1/τ) V*(x) = max_u min_w [ r(x, u) + s(w) + (∂V*/∂x) f(x, u, w) ].   (9)

Note that we cannot find appropriate policies (i.e.,
the solutions of the HJI equation) if we choose γ too small. In the robust reinforcement learning (RRL) paradigm, the value function is updated using the temporal difference (TD) error [3]

δ(t) = q(t) − (1/τ) V(t) + V̇(t),

while the best action and the worst disturbance are generated by maximizing and minimizing, respectively, the right-hand side of the HJI equation (9). We use a function approximator to implement the value function V(x(t); v), where v is a parameter vector. As in standard continuous-time RL, we define the eligibility trace for a parameter v_i as e_i(t) = ∫ e^{−(t−s)/κ} (∂V(s)/∂v_i) ds, with the update rule ė_i(t) = −(1/κ) e_i(t) + ∂V(t)/∂v_i, where κ is the time constant of the eligibility trace [3]. We can then derive the learning rule for the value function approximator [3] as v̇_i = η δ(t) e_i(t), where η denotes the learning rate. Note that we do not assume f(x = 0) = 0 because the error output z is generalized as the reward r(x, u) in the RRL framework.

3.1 Actor-disturber-critic

We propose the actor-disturber-critic architecture, by which we can implement robust RL in a model-free fashion like the actor-critic architecture [1]. We define the policies of the actor and the disturber as u(t) = A_u(x(t); v^u) + n_u(t) and w(t) = A_w(x(t); v^w) + n_w(t), respectively, where A_u(x(t); v^u) and A_w(x(t); v^w) are function approximators with parameter vectors v^u and v^w, and n_u(t) and n_w(t) are noise terms for exploration. The parameters of the actor and the disturber are updated by

v̇_i^u = η_u δ(t) n_u(t) ∂A_u(x(t); v^u)/∂v_i^u,  v̇_i^w = −η_w δ(t) n_w(t) ∂A_w(x(t); v^w)/∂v_i^w,   (10)

where η_u and η_w denote the learning rates.

3.2 Robust Policy by Value Gradient

Now we assume that an input-affine model of the system dynamics and quadratic models of the costs for the inputs are available:

ẋ = f(x) + g_1(x) w + g_2(x) u,  r(x, u) = Q(x) − u^T R(x) u,  s(w) = γ² w^T w.
In this case, we can derive the best action and the worst disturbance in reference to the value function V as

u_op = (1/2) R(x)^{−1} g_2^T(x) (∂V/∂x)^T,  w_op = −(1/(2γ²)) g_1^T(x) (∂V/∂x)^T.   (11)

We can use the policy (11) with the value gradient ∂V/∂x derived from the value function approximator.

3.3 Linear Quadratic Case

Here we consider a case in which a linear dynamic model and quadratic reward models are available:

ẋ = A x + B_1 w + B_2 u,  r(x, u) = −x^T Q x − u^T R u.

In this case, the value function is given by the quadratic form V = −x^T P x, where P is the solution of a Riccati equation

A^T P + P A + P ((1/γ²) B_1 B_1^T − B_2 R^{−1} B_2^T) P + Q = (1/τ) P.   (12)

Thus we can derive the best action and the worst disturbance as

u_op = −R^{−1} B_2^T P x,  w_op = (1/γ²) B_1^T P x.   (13)

4 Simulation

We tested the robust RL algorithm in a task of swinging up a pendulum. The dynamics of the pendulum are given by m l² θ̈ = −μ θ̇ + m g l sin θ + T, where θ is the angle from the upright position, T is the input torque, μ = 0.01 is the coefficient of friction, m = 1.0 [kg] is the weight of the pendulum, l = 1.0 [m] is the length of the pendulum, and g = 9.8 [m/s²] is the gravitational acceleration. The state vector is defined as x = (θ, θ̇)^T.

4.1 Linear Case

We first considered a linear problem in order to test whether the value function and the policy learned by robust RL coincide with the analytic solution of the H∞ control problem. Thus we use the locally linearized dynamics near the unstable equilibrium point x = (0, 0)^T. The matrices for the linear model are given by

A = ( 0  1 ; g/l  −μ/(ml²) ),  B_1 = B_2 = ( 0 ; 1/(ml²) ),   (14)

with a state-cost matrix Q and R = 1. The reward function is given by q(t) = −x^T Q x − u² + γ² w², where the robustness criterion γ = 2.0. The value function, V = −x^T P x, is parameterized by a symmetric matrix P. For on-line estimation of P, we define the vectors x̂ = (x_1², 2 x_1 x_2, x_2²)^T and p = (P_11, P_12, P_22)^T and reformulate V as V = −p^T x̂. Each element of p is updated using the recursive least squares method [2]. Note that we used a pre-designed stabilizing controller as the initial setting of the RRL controller for stable learning [2].
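The analytic target that such learning should recover is the solution of the Riccati equation (12), which can be sketched by integrating the corresponding matrix flow to a fixed point and reading off the policies (13). The matrices below are illustrative stand-ins, not the paper's linearized pendulum model:

```python
import numpy as np

# Fixed-point sketch for the discounted game Riccati equation (12):
#   A'P + PA + P((1/g^2) B1 B1' - B2 R^{-1} B2') P + Q = (1/tau) P.
# We integrate dP/dt = (left side) - (right side) until it vanishes.
# All matrices below are illustrative stand-ins, not the paper's model.
A = np.array([[0.0, 1.0], [1.0, -0.1]])
B1 = np.array([[0.0], [1.0]])
B2 = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
gamma, tau = 2.0, 10.0

M = B1 @ B1.T / gamma**2 - B2 @ np.linalg.inv(R) @ B2.T
P = np.eye(2)
for _ in range(50000):
    Pdot = A.T @ P + P @ A + P @ M @ P + Q - P / tau
    P += 1e-3 * Pdot
    P = 0.5 * (P + P.T)            # keep P symmetric

# Policies (13) for the value function V = -x' P x.
K_u = -np.linalg.inv(R) @ B2.T @ P   # best action u_op = K_u x
K_w = (B1.T @ P) / gamma**2          # worst disturbance w_op = K_w x
print(np.round(P, 3))
```

With γ = 2 the disturbance term (1/γ²) B₁B₁ᵀ is dominated by the control term, so the flow behaves like a standard (discounted) LQR Riccati integration and settles on the stabilizing positive-definite P.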
4.1.1 Learning of the value function

Here we used the policy by value gradient shown in section 3.2. Figure 2(a) shows that each element of the vector p converged to the solution of the Riccati equation (12).

4.1.2 Actor-disturber-critic

Here we used robust RL implemented by the actor-disturber-critic shown in section 3.1. In the linear case, the actor and the disturber are represented as the linear controllers A_u(x; v^u) = v^u x and A_w(x; v^w) = v^w x, respectively. The actor and the disturber almost converged to the policy (13) derived from the Riccati equation (12) (Fig. 2(b)).

Figure 2: Time course of (a) elements of the vector p = (P_11, P_12, P_22) and (b) elements of the gain vectors of the actor v^u = (v_1^u, v_2^u) and the disturber v^w = (v_1^w, v_2^w). The dash-dotted lines show the solution of the Riccati equation.

4.2 Applying Robust RL to a Non-linear Dynamics

We consider a non-linear dynamical system of the input-affine form of section 3.2, where

f(x) = ( θ̇ ; (g/l) sin θ − (μ/(ml²)) θ̇ ),  g_1(x) = g_2(x) = ( 0 ; 1/(ml²) ),  Q(x) = cos θ − 1,  R(x) = 0.04.   (15)

From (7) and (15), the reward function is given by q(t) = cos θ − 1 − 0.04 u² + γ² w², where the robustness criterion γ = 0.22. For approximating the value function, we used a Normalized Gaussian Network (NGnet) [3]. Note that the input gain g(x) was also learned [3]. Fig. 3 shows the value functions acquired by robust RL and by standard model-based RL [3]. The value function acquired by robust RL has a sharper ridge (Fig. 3(a)) that attracts swing-up trajectories than that learned with standard RL. In Fig. 4, we compare the robustness of robust RL and standard RL. Both the robust RL controller and the standard RL controller learned to swing up and hold the pendulum with the weight m = 1.0 [kg] and the coefficient of friction μ = 0.01 (Fig. 4(a)).
The robust RL controller could successfully swing up pendulums with a different weight m = 3.0 [kg] and coefficient of friction μ = 0.3 (Fig. 4(b)). This result shows the robustness of the robust RL controller. The standard RL controller could achieve the task in fewer swings for m = 1.0 [kg] and μ = 0.01 (Fig. 4(a)). However, the standard RL controller could not swing up the pendulum with the different weight and friction (Fig. 4(b)).

Figure 3: Shape of the value function after 1000 learning trials with m = 1.0 [kg], l = 1.0 [m], and μ = 0.01. (a) Robust RL, (b) Standard RL.

Figure 4: Swing-up trajectories with pendulums of different weight and friction: (a) m = 1.0, μ = 0.01; (b) m = 3.0, μ = 0.3. The dash-dotted lines show the upright position.

5 Conclusions

In this study, we proposed a new RL paradigm called "Robust Reinforcement Learning (RRL)." We showed that RRL can learn the analytic solution of the H∞ controller in the linearized inverted pendulum dynamics, and also that RRL can deal with modeling errors which standard RL cannot, in the non-linear inverted pendulum swing-up simulation example. We will apply RRL to more complex tasks like learning stand-up behavior [4].

References

[1] A. G. Barto, R. S. Sutton, and C. W. Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, 13:834-846, 1983.

[2] S. J. Bradtke. Reinforcement learning applied to linear quadratic regulation. In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, Advances in Neural Information Processing Systems 5, pages 295-302. Morgan Kaufmann, San Mateo, CA, 1993.

[3] K. Doya. Reinforcement learning in continuous time and space. Neural Computation, 12(1):219-245, 2000.

[4] J. Morimoto and K. Doya.
Acquisition of stand-up behavior by a real robot using hierarchical reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning, pages 623-630, San Francisco, CA, 2000. Morgan Kaufmann.

[5] S. Weiland. Linear quadratic games, H∞, and the Riccati equation. In Proceedings of the Workshop on the Riccati Equation in Control, Systems, and Signals, pages 156-159, 1989.

[6] K. Zhou, J. C. Doyle, and K. Glover. Robust and Optimal Control. Prentice Hall, New Jersey, 1996.
Explaining Away in Weight Space

Peter Dayan  Sham Kakade
Gatsby Computational Neuroscience Unit, UCL
17 Queen Square, London WC1N 3AR
dayan@gatsby.ucl.ac.uk  sham@gatsby.ucl.ac.uk

Abstract

Explaining away has mostly been considered in terms of inference of states in belief networks. We show how it can also arise in a Bayesian context in inference about the weights governing relationships such as those between stimuli and reinforcers in conditioning experiments such as backward blocking. We show how explaining away in weight space can be accounted for using an extension of a Kalman filter model; provide a new approximate way of looking at the Kalman gain matrix as a whitener for the correlation matrix of the observation process; suggest a network implementation of this whitener using an architecture due to Goodall; and show that the resulting model exhibits backward blocking.

1 Introduction

The phenomenon of explaining away is commonplace in inference in belief networks. In this, an explanation (a setting of activities of unobserved units) that is consistent with certain observations is accorded a low posterior probability if another explanation for the same observations is favoured either by the prior or by other observations. Explaining away is typically realized by recurrent inference procedures, such as mean field inference (see Jordan, 1998). However, explaining away is not only important in the space of on-line explanations for data; it is also important in the space of weights. This is a very general problem that we illustrate using a phenomenon from animal conditioning called backward blocking (Shanks, 1985; Miller & Matute, 1996). Conditioning paradigms are important because they provide a window onto processes of successful natural inference, which are frequently statistically normative.
Backward blocking poses a very different problem from standard explaining away, and rather complex theories have been advanced to account for it (eg Wagner & Brandon, 1989). We treat it as a case for Kalman filtering, and suggest a novel network model for Kalman filtering to solve it. Consider three different conditioning paradigms associated with backward blocking:

name      set 1     set 2     test
forward   L → R     L,S → R   S → ·
backward  L,S → R   L → R     S → ·
sharing   L,S → R   -         S → R/2

These paradigms involve one or two sets of multiple learning trials (set 1 and set 2), in which stimuli (a light, L, and/or a sound, S) are conditioned to a reward (R), followed by a test phase, in which the strength of association between the sound S and reward is assessed. This is found to be weak (·) in forward and backward blocking, but stronger (R/2) in the sharing paradigm. The effect that concerns this paper occurs during the second set of trials in backward blocking, in which the association between the sound and the reward is weakened (compared with sharing), even though the sound is not presented during these trials. The apparent association between the sound and the reward established in the first set of trials is explained away in the second set of trials. The standard explanation for this (Wagner's SOP model, see Wagner & Brandon, 1989) suggests that during the first set of trials, the light comes to predict the presence of the sound; and that during the second set of trials, the fact that the sound is expected (on the basis of the light, represented by the activation of 'opponent' sound units) but not presented weakens the association between the sound and the reward. Not only does this suggestion lack a statistical basis, but also its network implementation requires that the activation of the opponent sound units makes weaker the weights from the standard sound units to reward. It is unclear how this could work.
In this paper, we first extend the Kalman filter based conditioning theory of Sutton (1992) to the case of backward blocking. Next, we show the close relationship between the key quantity for a Kalman filter, namely the covariance matrix of uncertainty about the relationship between the stimuli and the reward, and the symmetric whitening matrix for the stimuli. Then we show how the Goodall algorithm for whitening (Goodall 1960; Atick & Redlich, 1993) makes for an appropriate network implementation for weight updates based on the Kalman filter. The final algorithm is a motivated mixture of unsupervised and reinforcement (or, equivalently in this case, supervised) learning. Last, we demonstrate backward blocking in the full model.

2 The Kalman filter and classical conditioning

Sutton (1992) suggested that one can understand classical conditioning in terms of normative statistical inference. The idea is that on trial n there is a set of true weights w_n mediating the relationship between the presentation of stimuli x_n and the amount of reward r_n that is delivered, where

r_n = w_n · x_n + ε_n,   (1)

and ε_n ~ N[0, τ²] is zero-mean Gaussian noise, independent from one trial to the next.¹ For the cases above, x_n = (x_n^L, x_n^S) might have two dimensions, one each for light and sound, taking on values that are binary, representing the presence and absence of the stimuli. Similarly, w_n = (w_n^L, w_n^S) also has two dimensions. Crucially, to allow for the possibility (realized in most conditioning experiments) that the true weights might change, the model includes a diffusion term

w_{n+1} = w_n + η_n,   (2)

where η_n ~ N[0, σ²I] is also Gaussian. The task for the animal is to take observations of the stimuli {x_n} and rewards {r_n} and infer a distribution over w_n. Provided that the initial uncertainty can be captured as w_0 ~ N[0, Σ_0] for some covariance matrix Σ_0, inference takes the form of a standard recursive Kalman filter, for which P(w_n | r_1 ... r_{n−1}) ~ N[ŵ_n, Σ_n] and
ŵ_{n+1} = ŵ_n + [Σ_n · x_n / (x_n · Σ_n · x_n + τ²)] (r_n − ŵ_n · x_n)   (3)

Σ_{n+1} = Σ_n + σ²I − (Σ_n · x_n)(x_n · Σ_n) / (x_n · Σ_n · x_n + τ²)   (4)

¹For vectors a, b and matrix C, a · b = Σ_i a_i b_i, a · C · b = Σ_ij a_i C_ij b_j, and the matrix [ab]_ij = a_i b_j.

If Σ_n ∝ I, then the update for the mean can be seen as a standard delta rule (Widrow & Stearns, 1985; Rescorla & Wagner, 1972), involving the prediction error (or innovation) δ_n = r_n − ŵ_n · x_n. Note the familiar, but at first sight counterintuitive, result that the update for the covariance matrix does not depend on the innovation or the observed r_n.² In backward blocking, in the first set of trials, the off-diagonal terms of the covariance matrix Σ_n become negative. This can either be seen from the form of the update equation for the covariance matrix (since x_n ~ (1,1)), or, more intuitively, from the fact that these trials imply a constraint only on w_n^L + w_n^S, therefore forcing ŵ_n^L and ŵ_n^S to be negatively correlated. The consequence of this negative correlation in the second set of trials is that the S component of Σ_n · x_n = Σ_n · (1, 0) is less than 0, and so, via equation 3, ŵ_n^S reduces. This is exactly the result in backward blocking. Another way of looking at this is in terms of explaining away in weight space. From the first set of trials, the animal infers that w^L + w^S = R > 0; from the second, that the prediction owes to w^L rather than w^S, and so the old value w^S = R/2 is explained away by w^L. Sutton (1992) actually suggested the approximation of forcing the off-diagonal components of the covariance matrix Σ_n to be 0, which, of course, prevents the system from accounting for backward blocking. We seek a network account of explaining away in the space of weights by implementing an approximate form of Kalman filtering.

3 Whitening and the Kalman filter

In conventional applications of the Kalman filter, x_n would typically be constant. That is, the hidden state (w_n) would be observed through a fixed observation process.
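The backward-blocking behaviour of equations (3) and (4) can be reproduced in a few lines; the noise settings below are illustrative choices, not the paper's:

```python
import numpy as np

# Simulating equations (3)-(4) on the backward-blocking schedule:
# 20 trials of light+sound -> reward, then 20 trials of light alone.
# The noise settings sigma2, tau2 are illustrative, not the paper's.
sigma2, tau2 = 0.01, 0.1
w_hat = np.zeros(2)              # estimated weights [w_L, w_S]
S = np.eye(2)                    # prior covariance Sigma_0
off_diag_after_set1 = 0.0
for n in range(40):
    x = np.array([1.0, 1.0]) if n < 20 else np.array([1.0, 0.0])
    r = 1.0
    Sx = S @ x
    denom = x @ Sx + tau2
    w_hat = w_hat + (Sx / denom) * (r - w_hat @ x)          # equation (3)
    S = S + sigma2 * np.eye(2) - np.outer(Sx, Sx) / denom   # equation (4)
    if n == 19:
        off_diag_after_set1 = S[0, 1]
print(w_hat, off_diag_after_set1)
```

With these settings the off-diagonal covariance turns negative during the first set, so in the second set the light weight is driven toward 1 while the sound weight is pushed back toward 0, exactly the explaining-away effect described above.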
In cases such as classical conditioning, though, this is not true: we are interested in the case that x_n changes over time, possibly even in a random (though fully observable) way. The plan for this section is to derive an approximate relationship between the average covariance matrix over the weights Σ̄ and a whitening matrix for the stimulus inputs. In the next section, we consider an implementation of a particular whitening algorithm as an unsupervised way of estimating the covariance matrix for the Kalman filter and show how to use it to learn the weights w_n appropriately. Consider the case that the x_n are random, with correlation matrix ⟨xx⟩ = Q, and consider the mean covariance matrix Σ̄ for the Kalman filter, averaging across the variation in x. Make the approximation that

⟨ Σ · xx · Σ / (x · Σ · x + τ²) ⟩ ≈ ⟨Σ · xx · Σ⟩ / ⟨x · Σ · x + τ²⟩,

which is less drastic than it might first appear, since the denominator is just a scalar. Then, we can solve for the average of the asymptotic value of Σ̄ in the equation for the update of the Kalman filter as

Σ̄ Q Σ̄ ∝ I.   (5)

Thus Σ̄ is a whitening filter for the correlation matrix Q of the inputs {x}. Symmetric whitening filters (Σ̄ must be symmetric) are generally unique (Atick & Redlich, 1993). This result is very different from the standard relationship between Kalman filtering and whitening. The standard Kalman filter is a whitening filter for the innovations process δ_n = r_n − ŵ_n · x_n, ie it extracts all the systematic variation into ŵ_n, leaving only random variation due to the observation noise and the diffusion process. Equation 5 is an additional level of whitening, saying that one can look at the long-run average covariance

²Note also the use of the alternative form of the Kalman filter, in which we perform observation/conditioning followed by drift, rather than drift followed by observation/conditioning.

Figure 1: Whitening.
A) The lower curve shows the average maximum off-diagonal element of |Σ̄QΣ̄| as a function of v. The upper curve shows the average maximum diagonal element of the same matrix. The off-diagonal components are around an order of magnitude smaller than the on-diagonal components, even in the difficult regime where v is near 0, and thus the matrix Q is nearly singular. B) Network model for Kalman filtering. Identity feedforward weights I map inputs x to a recurrent network y(t) whose output is used to make predictions. Learning of the recurrent weights B is based on Goodall's (1960) rule; learning of the prediction weights is based on the delta rule, only using y(0) to make the predictions and y(∞) to change the weights.

matrix of the uncertainty in w_n as whitening the input process x_n. This is inherently unsupervised, in that whitening takes place without any reference to the observed rewards (or even the innovation). Given the approximation, we tested whether Σ̄ really whitens Q by generating x_n from a Gaussian distribution with mean (1,1) and variance v²I, calculating the long-run average value of Σ̄, and assessing whether Γ = Σ̄QΣ̄ is white. There is no unique measure for the deviation of Γ from being diagonal; as an example, figure 1A shows as a function of v the largest on- and off-diagonal elements of Γ. The figure shows that the off-diagonal components are comparatively very small, even when v is very small, for which Q has an eigenvalue very near to 0, making the whitening matrix nearly undefined. Equally, in this case, Σ_n tends to have very large values, since, looking at equation 4, the growth in uncertainty coming from σ²I is not balanced by any observation in the direction (1, −1) that is orthogonal to (1, 1). Of course, only the long-run average covariance matrix Σ̄ whitens Q. We make the further approximation of using an on-line estimate of the symmetric whitening matrix as the online estimate of the covariance of the weights Σ_n.
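Equation (5) pins down Σ̄ as the symmetric whitener of Q, which has the closed form Q^{-1/2} (the inverse matrix square root). A quick numerical sketch, with an illustrative sampled input distribution:

```python
import numpy as np

# Equation (5) says the long-run weight covariance is the symmetric
# whitener of the input correlation matrix Q: the symmetric solution of
# S Q S = I is the inverse matrix square root S = Q^{-1/2}.
# The sampled input distribution below is illustrative.
rng = np.random.default_rng(0)
X = rng.normal(loc=1.0, scale=0.5, size=(100000, 2))   # inputs around (1, 1)
Q = X.T @ X / len(X)                                   # correlation matrix <xx>

lam, U = np.linalg.eigh(Q)           # Q = U diag(lam) U'
S = U @ np.diag(lam ** -0.5) @ U.T   # symmetric inverse square root

print(S @ Q @ S)  # approximately the 2x2 identity
```

The eigendecomposition route also makes the uniqueness claim concrete: the symmetric positive-definite solution is fixed once the eigenvalues of Q are, which is why Σ̄ is pinned down by Q alone.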
4 A network model

Figure 1B shows a network model in which prediction weights ŵ_n adapt in a manner that is appropriately sensitive to a learned, on-line estimate of the whitening matrix. The network has two components: a mapping from input x to output y(t), via recurrent feedback weights B (the Goodall (1960) whitening filter), and a mapping from y, through a set of prediction weights ŵ, to an estimate of the reward. The second part of the network is most straightforward. The feedforward weights from x to y are just the identity matrix I. Therefore, the initial value in the hidden layer in response to stimulus x_n is y(0) = x_n, and so the prediction of reward is just ŵ · y(0) = ŵ · x_n. The first part of the network is a straightforward implementation of Goodall's whitening filter (Goodall, 1960; Atick & Redlich, 1993). The recurrent dynamics in the y-layer are taken as being purely linear. Therefore, in response to input x (propagated through the identity feedforward weights),

τ ẏ = −y + x + B y,  and so  y(∞) = (I − B)^{−1} x,

provided that the inverse exists. Goodall's algorithm changes the recurrent weights B using local, anti-Hebbian learning, according to

ΔB ∝ −xy + I − B.   (6)

This rule stabilizes on average when I = (I − B)^{−1} Q [(I − B)^{−1}], that is, when (I − B)^{−1} is a whitening filter for the correlation matrix Q of the inputs. If B is symmetric, which can be guaranteed by making B = 0 initially (Atick & Redlich, 1993), then, by convergence, we have (I − B)^{−1} = Σ̄ and, given input x_n to the network,

Σ̄ x_n = (I − B)^{−1} x_n = y_n(∞).

Therefore, we can implement a learning rule for the prediction weights akin to the Kalman filter (equation 3) using

Δŵ_n ∝ y_n(∞) (r_n − ŵ_n · y_n(0)).   (7)

This is the standard delta rule, except that the predictions are based on y_n(0) = x_n, whereas the weight changes are based on y_n(∞) = Σ̄ x_n. The learning rule gets wrong the absolute magnitude of the weight changes (since it lacks the x_n · Σ_n ·
x_n + τ² term on the denominator), but it gets right the direction of the changes.

5 Results

Figure 2 shows the result of learning in backward blocking. In association with r_n = 1, first the stimulus x_n = (1, 1) was presented for 20 trials, then the stimulus x_n = (1, 0) was presented for a further 20 trials. Figure 2A shows the development of the weights ŵ^L (solid) and ŵ^S (dashed). During the first set of trials, these grow towards 0.5; during the second set, they differentiate sharply, with the weight associated with the light growing towards 1 and that with the sound, which is explained away, going towards 0. Figure 2B shows the development of two terms in the estimated covariance matrix. The negative covariance between light and sound is evident, and causes the sharp changes in the weights on the 21st trial. Figure 2C & D show the values using the exact Kalman filter, showing qualitatively similar behavior. The increases in the magnitudes of Σ_n^LL and Σ_n^LS during the first stage of backward blocking come from the lack of information in the input about w_n^L − w_n^S, despite its continual diffusion (from equation 2). Thus backward blocking is a pathological case. Nevertheless, the on-line method for estimating Σ captures the correct behavior. Figures 2E-H show a non-pathological case with observation noise added. The estimates from the model closely match those of the exact Kalman filter, a result that is also true for other non-pathological cases.

6 Discussion

We have shown how the standard Kalman filter produces explaining away in the space of weights, and suggested and proved efficacious a natural network model for implementing the Kalman filter. The model mixes unsupervised learning of a whitener for the observation process (ie the x_n of equation 1), providing the covariance matrix governing the uncertainty in the weights, with supervised (or equivalently reinforcement) learning of the mean values of the weights.
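The convergence claimed for Goodall's rule (equation 6) can be checked with a small deterministic sketch: replacing the sample outer product x y' by its expectation Q (I − B)^{-1} and iterating (Q and the learning rate below are illustrative choices):

```python
import numpy as np

# Deterministic sketch of Goodall's rule (equation 6): replace the sample
# outer product x y' by its expectation Q (I - B)^{-1}, iterate
#   B <- B + lr * (-(Q W) + I - B),  with  W = (I - B)^{-1},
# and check that the converged W is the symmetric whitener of Q.
Q = np.array([[1.0, 0.6], [0.6, 1.0]])
lr = 0.01
B = np.zeros((2, 2))                      # B = 0 keeps B symmetric throughout
for _ in range(20000):
    W = np.linalg.inv(np.eye(2) - B)      # equilibrium response y = W x
    B += lr * (-(Q @ W) + np.eye(2) - B)  # expected Goodall update
W = np.linalg.inv(np.eye(2) - B)
print(W @ Q @ W)  # approximately the identity
```

Starting from B = 0 keeps B a polynomial in Q throughout, so B stays symmetric and commutes with Q, and the fixed point is B = I − Q^{1/2}, i.e. (I − B)^{-1} = Q^{-1/2}, the symmetric whitener.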
Unsupervised learning is reasonable since the evolution of the covariance matrix of the weights is independent of the innovations. The basic result is an

Figure 2: Backward blocking in the full model. A) The development of ŵ over 20 trials with x_n = (1, 1) and 20 with x_n = (1, 0). B) The development of the estimated covariance of the weight for the light Σ̂_n^LL and the cross-covariance between the light and the sound Σ̂_n^LS. The learning rates in equations 6 and 7 were both 0.125. C & D) The development of ŵ and Σ from the exact Kalman filter (with parameters σ = 0.09 and τ = 0.35). E) The development of ŵ as in A), except with multiplicative Gaussian noise added (ie noise with standard deviation 0.35 is added only to the representations of stimuli that are present). F & G) The comparison of ŵ in the model (solid line) and in the exact Kalman filter (dashed line), using the same parameters for the Kalman filter as in C) and D). H) A comparison of the true covariance Σ_n (dashed line) with the rescaled estimate (I − B)^{−1} (solid line).

approximation, but one that has been shown to match results quite closely. Further work is needed to understand how to set the parameters of the Goodall learning rule to match σ² and τ² exactly. Hinton (personal communication) has suggested an alternative interpretation of Kalman filtering based on a heteroassociative novelty filter. Here, the idea is to use the recurrent network B only once, rather than to equilibrium, with (as for our model) y_n(0) = x_n, the prediction v = ŵ_n · y_n(0), y_n(1) = B_n · x_n, and

Δŵ_n ∝ y_n(1) (r_n − ŵ_n · y_n(0)).

This gives B_n a similar role to Σ_n in learning ŵ_n.
For the novelty filter,

ΔB_n = − (B_n · x_n)(x_n · B_n) / |B_n · x_n|²,

which makes the network a perfect heteroassociator between x_n and r_n. If we compare the update for B_n to that for Σ_n (equation 4), we can see that it amounts approximately to assuming neither observation noise nor drift. Thus, whereas our network model approximates the long-run covariance matrix, the novelty filter approximates the instantaneous covariance matrix directly, and could clearly be adapted to take account of noise. Unfortunately, there are few quantitatively precise experimental results on backward blocking, so it is hard to choose between different possible rules. There is a further alternative. Sutton (1992) suggested an online way of estimating the elements of the covariance matrix, observing that

E[δ_n²] = τ² + x_n · Σ_n · x_n   (8)

and so considered using a standard delta rule to fit the squared innovation using a quadratic input representation ((x_n^L)², (x_n^S)², x_n^L × x_n^S, 1).³ The weight associated with the last element, ie the bias, should come to be the observation noise τ²; the weights associated with the other elements are just the components of Σ_n. The most critical concern about this is that it is not obvious how to use the resulting covariance matrix to control learning about the mean values of the weights. There is also the more theoretical concern that the covariance matrix should really be independent of the prediction errors, one manifestation of which is that the occurrence of backward blocking in the model of equation 8 is strongly sensitive to initial conditions. Although backward blocking is a robust phenomenon, particularly in human conditioning experiments (Shanks, 1985), it is not observed in all animal conditioning paradigms. One possibility for why not is that the anatomical substrate of the cross-modal recurrent network (the B weights in the model) is not ubiquitously available.

³Although the x_n^L × x_n^S term was omitted from Sutton's diagonal approximation to Σ_n.
In its absence, y(∞) = y(0) = x_n in response to an input x_n, and so the network will perform like the standard delta or Rescorla-Wagner (Rescorla & Wagner, 1972) rule. The Kalman filter is only one part of a more complicated picture for statistically normative models of conditioning. It makes for a particularly clear example of what is incomplete about some of our own learning rules (notably Kakade & Dayan, 2000), which suggest that, at least in some circumstances, learning about the two different stimuli should progress completely independently. We are presently trying to integrate on-line and learned competitive and additive effects using ideas from mixture models and Kalman filters.

Acknowledgements

We are very grateful to David Shanks, Rich Sutton, Read Montague and Terry Sejnowski for discussions of the Kalman filter model and its relationship to backward blocking, and to Sam Roweis for comments on the paper. This work was funded by the Gatsby Charitable Foundation and the NSF.

References

Atick, JJ & Redlich, AN (1993) Convergent algorithm for sensory receptive field development. Neural Computation 5:45-60.

Goodall, MC (1960) Performance of stochastic net. Nature 185:557-558.

Jordan, MI, editor (1998) Learning in Graphical Models. Dordrecht: Kluwer.

Kakade, S & Dayan, P (2000) Acquisition in autoshaping. In SA Solla, TK Leen & K-R Muller, editors, Advances in Neural Information Processing Systems, 12.

Miller, RR & Matute, H (1996) Biological significance in forward and backward blocking: Resolution of a discrepancy between animal conditioning and human causal judgment. Journal of Experimental Psychology: General 125:370-386.

Rescorla, RA & Wagner, AR (1972) A theory of Pavlovian conditioning: The effectiveness of reinforcement and non-reinforcement. In AH Black & WF Prokasy, editors, Classical Conditioning II: Current Research and Theory. New York: Appleton-Century-Crofts, 64-69.

Shanks, DR (1985) Forward and backward blocking in human contingency judgement.
Quarterly Journal of Experimental Psychology: Comparative & Physiological Psychology 37:1-21.

Sutton, RS (1992) Gain adaptation beats least squares? In Proceedings of the 7th Yale Workshop on Adaptive and Learning Systems.

Wagner, AR & Brandon, SE (1989) Evolution of a structured connectionist model of Pavlovian conditioning (AESOP). In SB Klein & RR Mowrer, editors, Contemporary Learning Theories. Hillsdale, NJ: Erlbaum, 149-189.

Widrow, B & Stearns, SD (1985) Adaptive Signal Processing. Englewood Cliffs, NJ: Prentice-Hall.
Improved Output Coding for Classification Using Continuous Relaxation Koby Crammer and Yoram Singer School of Computer Science & Engineering The Hebrew University, Jerusalem 91904, Israel {kobics,singer}@cs.huji.ac.il Abstract Output coding is a general method for solving multiclass problems by reducing them to multiple binary classification problems. Previous research on output coding has employed, almost solely, predefined discrete codes. We describe an algorithm that improves the performance of output codes by relaxing them to continuous codes. The relaxation procedure is cast as an optimization problem and is reminiscent of the quadratic program for support vector machines. We describe experiments with the proposed algorithm, comparing it to standard discrete output codes. The experimental results indicate that continuous relaxations of output codes often improve the generalization performance, especially for short codes. 1 Introduction The problem of multiclass categorization is about assigning labels to instances where the labels are drawn from some finite set. Many machine learning problems include a multiclass categorization component. Examples of such applications are text classification, optical character recognition, medical analysis, and object recognition in machine vision. There are many algorithms for the binary-class problem, where there are only two possible labels, such as SVM [17], CART [4] and C4.5 [14]. Some of them can be extended to handle multiclass problems. An alternative and general approach is to reduce a multiclass problem to multiple binary problems. In [9] Dietterich and Bakiri described a method for reducing multiclass problems to multiple binary problems based on error-correcting output codes (ECOC). Their method consists of two stages. In the training stage, a set of binary classifiers is constructed, where each classifier is trained to distinguish between two disjoint subsets of the labels.
In the classification stage, each of the trained binary classifiers is applied to test instances and a voting scheme is used to decide on the label. Experimental work has shown that the output coding approach can improve performance in a wide range of problems such as text classification [3], text-to-speech synthesis [8], cloud classification [1] and others [9, 10, 15]. The performance of output coding was also analyzed in statistics and learning theoretic contexts [11, 12, 16, 2]. Most previous work on output coding has concentrated on the problem of solving multiclass problems using predefined output codes, independently of the specific application and the learning algorithm used to construct the binary classifiers. Furthermore, the "decoding" scheme assigns the same weight to each learned binary classifier, regardless of its performance. Last, the induced binary problems are treated as separate problems and are learned independently. Thus, there might be strong statistical correlations between the resulting classifiers, especially when the induced binary problems are similar. These problems call for an improved output coding scheme. In a recent theoretical work [7] we suggested a relaxation of discrete output codes to continuous codes where each entry of the code matrix is a real number. As in discrete codes, each column of the code matrix defines a partition of the set of the labels into two subsets which are labeled positive (+) and negative (-). The sign of each entry in the code matrix determines the subset association (+ or -) and its magnitude corresponds to the confidence in this association. In this paper we discuss the usage of continuous codes for multiclass problems using a two-phase approach. First, we create a binary output code matrix that is used to train binary classifiers in the same way suggested by Dietterich and Bakiri.
Given the trained classifiers and some training data we look for a more suitable continuous code by casting the problem as a constrained optimization problem. We then replace the original binary code with the improved continuous code and proceed analogously to classify new test instances. An important property of our algorithm is that the resulting continuous code can be expressed as a linear combination of a subset of the training patterns. Since classification of new instances is performed using scalar products between the predictions vector of the binary classifiers and the rows of the code matrix, we can exploit this particular form of the code matrix and use kernels [17] to construct high dimensional product spaces. This approach enables an efficient and simple way to take into account correlations between the different binary classifiers. The rest of this paper is organized as follows. In the next section we formally describe the framework that uses output coding for multiclass problems. In Sec. 3 we describe our algorithm for designing a continuous code from a set of binary classifiers. We describe and discuss experiments with the proposed approach in Sec. 4 and conclude in Sec. 5. 2 Multiclass learning using output coding Let S = {(x_1, y_1), ..., (x_m, y_m)} be a set of m training examples where each instance x_i belongs to a domain X. We assume without loss of generality that each label y_i is an integer from the set Y = {1, ..., k}. A multiclass classifier is a function H : X -> Y that maps an instance x into an element y of Y. In this work we focus on a framework that uses output codes to build multiclass classifiers from binary classifiers. A binary output code M is a matrix of size k x l over {-1, +1} where each row of M corresponds to a class y in Y. Each column of M defines a partition of Y into two disjoint sets. Binary learning algorithms are used to construct classifiers, one for each column t of M.
That is, the set of examples induced by column t of M is (x_1, M_{t,y_1}), ..., (x_m, M_{t,y_m}). This set is fed as training data to a learning algorithm that finds a binary classifier. In this work we assume that each binary classifier h_t is of the form h_t : X -> R. This reduction yields l different binary classifiers h_1, ..., h_l. We denote the vector of predictions of these classifiers on an instance x by h(x) = (h_1(x), ..., h_l(x)). We denote the rth row of M by M_r. Given an example x we predict the label y for which the row M_y is the "most similar" to h(x). We use a general notion of similarity and define it through an inner-product function K : R^l x R^l -> R. The higher the value of K(h(x), M_r), the more confident we are that r is the correct label of x according to the set of classifiers h. Note that this notion of similarity holds for both discrete and continuous matrices. An example of a simple similarity function is K(u, v) = u . v. It is easy to verify that when both the output code and the binary classifiers are over {-1, +1} this choice of K is equivalent to picking the row of M which attains the minimal Hamming distance to h(x). To summarize, the learning algorithm receives a training set S, a discrete output code (matrix) of size k x l, and has access to a binary learning algorithm, denoted L. The learning algorithm L is called l times, once for each induced binary problem. The result of this process is a set of binary classifiers h(x) = (h_1(x), ..., h_l(x)). These classifiers are fed, together with the original labels y_1, ..., y_m, to the second stage of the learning algorithm, which learns a continuous code. This continuous code is then used to classify new instances by choosing the class which corresponds to a row with the largest inner-product. The resulting classifier can be viewed as a two-layer neural network. The first (hidden) layer computes h_1(x), ...
, h_l(x), and the output unit predicts the final class by choosing the label r which maximizes K(h(x), M_r). 3 Finding an improved continuous code We now describe our method for finding a continuous code that improves on a given ensemble of binary classifiers h. We would like to note that we do not need to know the original code that was used to train the binary classifiers. For simplicity we use the standard scalar product as our similarity function. At the end of this section we discuss more general similarity functions based on kernels which satisfy Mercer's conditions. The approach we take is to cast the code design problem as a constrained optimization problem. The multiclass empirical error is given by eps_S(M, h) = (1/m) sum_{i=1}^m [[H(x_i) != y_i]], where [[pi]] is equal to 1 if the predicate pi holds and 0 otherwise. Borrowing the idea of soft margins [6] we replace the 0-1 multiclass error with the piecewise-linear bound max_r {h(x_i) . M_r + bar-delta_{y_i,r}} - h(x_i) . M_{y_i}, where bar-delta_{i,j} = 1 - delta_{i,j}, i.e., it is equal to 0 if i = j and 1 otherwise. We now get an upper bound on the empirical loss,

eps_S(M, h) <= (1/m) sum_{i=1}^m [ max_r {h(x_i) . M_r + bar-delta_{y_i,r}} - h(x_i) . M_{y_i} ].    (1)

Put another way, the correct label should have a confidence value that is larger by at least one than any of the confidences for the rest of the labels. Otherwise, we suffer a loss which is linearly proportional to the difference between the confidence of the correct label and the maximum among the confidences of the other labels. Define the l2-norm of a code M to be the l2-norm of the vector represented by the concatenation of M's rows, ||M||_2^2 = ||(M_1, ..., M_k)||_2^2 = sum_{i,j} M_{i,j}^2. We now cast the problem of finding a good code which minimizes the bound of Eq. (1) as a quadratic optimization problem with "soft" constraints:

minimize (1/(2 beta)) ||M||_2^2 + sum_{i=1}^m xi_i subject to, for all i, r: h(x_i) . M_{y_i} + delta_{y_i,r} - h(x_i) . M_r >= 1 - xi_i,    (2)

where beta > 0 is a regularization constant. Solving the above optimization problem is done using its dual problem (details are omitted due to lack of space).
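Both the inner-product decoding of Sec. 2 and the piecewise-linear bound of Eq. (1) are short to state in code. The following is our own sketch with made-up random data, not the authors' implementation; the first assertion checks that, for {-1, +1} codes, inner-product decoding coincides with minimal-Hamming-distance decoding, and the second checks that the bound dominates the 0-1 empirical error:

```python
import numpy as np

def decode(H, M):
    """For each prediction vector h(x_i) (row of H), pick the label r
    maximizing the inner product h(x_i) . M_r."""
    return (H @ M.T).argmax(axis=1)

def hinge_bound(H, M, y):
    """Eq. (1): mean of max_r{h(x_i).M_r + (1 - delta_{y_i,r})} - h(x_i).M_{y_i};
    an upper bound on the multiclass empirical error eps_S(M, h)."""
    conf = H @ M.T                              # (m, k) confidences
    bar_delta = 1.0 - np.eye(M.shape[0])[y]     # (m, k), zero at the true label
    return ((conf + bar_delta).max(axis=1) - conf[np.arange(len(y)), y]).mean()

rng = np.random.default_rng(0)
M = rng.choice([-1.0, 1.0], size=(4, 10))      # k=4 classes, l=10 columns
H = rng.choice([-1.0, 1.0], size=(50, 10))     # sign predictions of l classifiers
y = rng.integers(0, 4, size=50)

# For +-1 vectors, h . M_r = l - 2 * Hamming(h, M_r), so the two rules agree.
hamming = np.array([[(h != m).sum() for m in M] for h in H])
assert np.all(decode(H, M) == hamming.argmin(axis=1))
# Eq. (1) dominates the 0-1 empirical error.
assert hinge_bound(H, M, y) >= np.mean(decode(H, M) != y)
```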
The solution of the dual problem results in the following form for M:

M_r = beta sum_{i=1}^m (delta_{y_i,r} - eta_{i,r}) h(x_i),    (3)

where eta_{i,r} are variables of the dual problem which satisfy, for all i, r: eta_{i,r} >= 0 and sum_r eta_{i,r} = 1. Eq. (3) implies that when the optimum of the objective function is achieved, each row of the matrix M is a linear combination of the vectors h(x_i). We thus say that example i is a support pattern for class r if the coefficient (delta_{y_i,r} - eta_{i,r}) of h(x_i) in Eq. (3) is non-zero. There are two cases in which example i can be a support pattern for class r: the first is when y_i = r and eta_{i,r} < 1; the second is when y_i != r and eta_{i,r} > 0. Put another way, fixing i, we can view the eta_{i,r} as a distribution, bar-eta_i, over the labels r. This distribution should give a high probability to the correct label y_i. Thus, an example i "participates" in the solution for M (Eq. (3)) if and only if bar-eta_i is not a point distribution concentrating on the correct label y_i. Since the continuous output code is constructed from the support patterns, we call our algorithm SPOC, for Support Patterns Output Coding. Denote by tau_i = 1_{y_i} - bar-eta_i. Thus, from Eq. (3) we obtain the classifier

H(x) = argmax_r {h(x) . M_r} = argmax_r { sum_i tau_{i,r} [h(x) . h(x_i)] }.    (4)

Note that the solution as defined by Eq. (4) is composed of inner-products of the prediction vector on a new instance with the support patterns. Therefore, we can transform each prediction vector to some high dimensional inner-product space Z using a transformation phi : R^l -> Z. We thus replace the inner-product in the dual program with a general inner-product kernel K that satisfies Mercer's conditions [17]. From Eq. (4) we obtain the kernel-based classification rule

H(x) = argmax_r { sum_i tau_{i,r} K(h(x), h(x_i)) }.    (5)

The ability to use kernels as a means for calculating inner-products enables a simple and efficient way to take into account correlations between the binary classifiers.
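The kernelized decision rule of Eqs. (4)-(5) depends only on the support patterns and their coefficients. A minimal sketch (illustrative names and toy numbers of our own; in practice the coefficients tau would come from solving the dual program, which is not reproduced here):

```python
import numpy as np

def spoc_predict(h_new, H_sup, Tau, kernel):
    """Eq. (5): H(x) = argmax_r sum_i Tau[i, r] * K(h(x), h(x_i)).
    h_new: (l,) prediction vector on the new instance,
    H_sup: (m, l) prediction vectors of the support patterns,
    Tau:   (m, k) coefficients tau_{i,r} = delta_{y_i,r} - eta_{i,r}."""
    k_vec = np.array([kernel(h_new, h_i) for h_i in H_sup])   # (m,)
    return int(np.argmax(k_vec @ Tau))                        # score per class r

# Second-order polynomial kernel: implicitly uses all pairwise products
# of the binary classifiers' predictions (cf. the correlation discussion).
poly2 = lambda u, v: (1.0 + u @ v) ** 2

H_sup = np.array([[1.0, 0.0], [0.0, 1.0]])   # toy support patterns (l=2)
Tau = np.array([[1.0, 0.0], [0.0, 1.0]])     # toy coefficients (k=2)
print(spoc_predict(np.array([0.9, 0.1]), H_sup, Tau, poly2))  # -> 0
```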
For instance, a second-order polynomial of the form (1 + u . v)^2 corresponds to a transformation to a feature space that includes all the products of pairs of binary classifiers. Therefore, the relaxation of discrete codes to continuous codes offers a partial remedy by assigning a different importance weight to each binary classifier while taking into account the statistical correlations between the binary classifiers. 4 Experiments In this section we describe experiments we performed comparing discrete and continuous output codes. We selected eight multiclass datasets, seven from the UCI repository[1] and the mnist dataset available from AT&T[2]. When a test set was provided we used the original split into training and test sets; otherwise we used 5-fold cross validation for evaluating the test error. Since we ran multiple experiments with 3 different codes, 7 kernels, and two base-learners, we used a subset of the training set for mnist, letter, and shuttle. We are in the process of performing experiments with the complete datasets and other datasets using a subset of the kernels. A summary of the datasets is given in Table 1.

[1] http://www.ics.uci.edu/~mlearn/MLRepository.html
[2] http://www.research.att.com/~yann/ocr/mnist

Name      Training examples  Test examples  Classes  Attributes
satimage  4435               2000           6        36
shuttle   5000               9000           7        9
mnist     5000               10000          10       784
isolet    6238               1559           26       617
letter    5000               4000           26       16
vowel     528                462            11       10
glass     214                5-fold eval    7        10
soybean   307                376            19       35

Table 1: Description of the datasets used in experiments.

We tested three different types of codes: one-against-all (denoted "id"), BCH (a linear error correcting code), and random codes. For a classification problem with k classes we set the random code to have about 10 log2(k) columns. We then set each entry in the matrix defining the code to be -1 or +1 uniformly at random.
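Generating such a random code is straightforward. A sketch (our own; the text says "about" 10 log2(k) columns, so rounding up to an integer is an assumption of this example):

```python
import numpy as np

def random_code(k, rng=None):
    """Random {-1, +1} output code with about 10*log2(k) columns."""
    rng = rng or np.random.default_rng(0)
    l = int(np.ceil(10 * np.log2(k)))          # rounding choice: ceil
    return rng.choice([-1, 1], size=(k, l))

M = random_code(19)   # e.g. the 19-class soybean problem
print(M.shape)        # -> (19, 43)
```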
We used SVM as the base binary learning algorithm in two different modes: In the first mode we used the margin of the vector machine classifier as its real-valued prediction. That is, each binary classifier h_t is of the form h_t(x) = w . x + b where w and b are the parameters of the separating hyperplane. In the second mode we thresholded the prediction of the classifiers, h_t(x) = sign(w . x + b). Thus, each binary classifier h_t in this case is of the form h_t : X -> {-1, +1}. For brevity, we refer to these classifiers as thresholded-SVMs. We would like to note in passing that this setting is by no means superficial, as there are learning algorithms, such as RIPPER [5], that build classifiers of this type. We ran SPOC with 7 different kernels: homogeneous and non-homogeneous polynomials of degree 1, 2, and 3, and radial basis functions (RBF). A summary of the results is depicted in Figure 1. The figure contains four plots. Each plot shows the relative test error difference between discrete and continuous codes. Formally, the height of each bar is proportional to (E_d - E_c) / E_d where E_d (E_c) is the test error when using a discrete (continuous) code. For each problem there are three bars, one for each type of code (one-against-all, BCH, and random). The datasets are plotted left to right in decreasing order with respect to the number of training examples per class. The left plots correspond to the results obtained using thresholded-SVM as the base binary classifier and the right plots show the results using the real-valued predictions. For each mode we show the results of the best performing kernel on each dataset (top plots) and the average (over the 7 different kernels) performance (bottom plots). In general, the continuous output code relaxation indeed results in an improved performance over the original discrete output codes. The most significant improvements are achieved with thresholded-SVM as the base binary classifiers. On most problems all the kernels achieve some improvement.
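The bar heights in Figure 1 are a simple function of a pair of test errors. A tiny sketch (the shuttle numbers below are only the approximate averages quoted in the text, used here for illustration):

```python
def relative_improvement(e_discrete, e_continuous):
    """Height of a bar in Figure 1: (E_d - E_c) / E_d."""
    return (e_discrete - e_continuous) / e_discrete

# Approximate shuttle averages from the text: >3% discrete, <1% continuous.
print(round(relative_improvement(0.03, 0.01), 2))  # -> 0.67
```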
However, the best performing kernel seems to be problem dependent. Impressive improvements are achieved for datasets with a large number of training examples per class, shuttle being a notable example. For this dataset the test error is reduced from an average of over 3% when using a discrete code to an average test error which is significantly lower than 1% for continuous codes. Furthermore, using a non-homogeneous polynomial of degree 3 reduces the test error rate down to 0.48%. In contrast, for the soybean dataset, which contains 307 training examples and 19 classes, none of the kernels achieved any improvement, and they often resulted in an increase in the test error. Examining the training error reveals that the greater the decrease in the training error due to the continuous code relaxation, the worse the increase in the corresponding test error. This behavior indicates that SPOC overfitted the relatively small training set.

Figure 1: Comparison of the performance of discrete and continuous output codes using SVM (right figures) and thresholded-SVM (left figures) as the base learner for three different families of codes. The top figures show the relative change in test error for the best performing kernel and the bottom figures show the relative change in test error averaged across seven different kernels.

To conclude this section we describe an experiment that evaluated the performance of the SPOC algorithm as a function of the length of random codes. Using the same setting described above we ran SPOC with random codes of lengths 5 through 35 for the vowel dataset and lengths 15 through 50 for the letter dataset. In Figure 2 we show the test error rate as a function of the code length with SVM as the base binary learner.
(Similar results were obtained using thresholded-SVM as the base binary classifiers.) For the letter dataset we see consistent and significant improvements of the continuous codes over the discrete ones, whereas for the vowel dataset there is a major improvement for short codes that decays with the code's length. Therefore, since continuous codes can achieve performance comparable to much longer discrete codes, they may serve as a viable alternative for discrete codes when computational power is limited or for classification tasks of large datasets. 5 Discussion In this paper we described and experimented with an algorithm for continuous relaxation of output codes for multiclass categorization problems. The algorithm appears to be especially useful when the codes are short. An interesting question is whether the proposed approach can be generalized by calling the algorithm successively on the previous code it improved. Another viable direction is to try to combine the algorithm with other schemes for reducing

Figure 2: Comparison of the performance of discrete random codes and their continuous relaxation as a function of the code length.

multiclass problems to multiple binary problems such as tree-based codes and directed acyclic graphs [13]. We leave this for future research. References [1] D. W. Aha and R. L. Bankert. Cloud classification using error-correcting output codes. In Artificial Intelligence Applications: Natural Science, Agriculture, and Environmental Science, volume 11, pages 13-28, 1997. [2] E.L. Allwein, R.E. Schapire, and Y. Singer. Reducing multiclass to binary: A unifying approach for margin classifiers. In Machine Learning: Proceedings of the Seventeenth International Conference, 2000. [3] A. Berger.
Error-correcting output coding for text classification. In IJCAI'99: Workshop on machine learning for information filtering, 1999. [4] Leo Breiman, Jerome H. Friedman, Richard A. Olshen, and Charles J. Stone. Classification and Regression Trees. Wadsworth & Brooks, 1984. [5] William Cohen. Fast effective rule induction. In Proceedings of the Twelfth International Conference on Machine Learning, pages 115-123, 1995. [6] Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, September 1995. [7] Koby Crammer and Yoram Singer. On the learnability and design of output codes for multiclass problems. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, 2000. [8] Thomas G. Dietterich and Ghulum Bakiri. Achieving high-accuracy text-to-speech with machine learning. In Data mining in speech synthesis, 1999. [9] Thomas G. Dietterich and Ghulum Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263-286, January 1995. [10] Tom Dietterich and Eun Bae Kong. Machine learning bias, statistical bias, and statistical variance of decision tree algorithms. Technical report, Oregon State University, 1995. Available via the WWW at http://www.cs.orst.edu:80/~tgd/cv/tr.html. [11] Trevor Hastie and Robert Tibshirani. Classification by pairwise coupling. The Annals of Statistics, 26(1):451-471, 1998. [12] G. James and T. Hastie. The error coding method and PiCT. Journal of computational and graphical statistics, 7(3):377-387, 1998. [13] J.C. Platt, N. Cristianini, and J. Shawe-Taylor. Large margin DAGs for multiclass classification. In Advances in Neural Information Processing Systems 12. MIT Press, 2000. (To appear.) [14] J. Ross Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993. [15] Robert E. Schapire. Using output codes to boost multiclass learning problems.
In Machine Learning: Proceedings of the Fourteenth International Conference, pages 313-321, 1997. [16] Robert E. Schapire and Yoram Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):1-40, 1999. [17] Vladimir N. Vapnik. Statistical Learning Theory. Wiley, 1998.
Learning curves for Gaussian processes regression: A framework for good approximations Dorthe Malzahn Manfred Opper Neural Computing Research Group School of Engineering and Applied Science Aston University, Birmingham B4 7ET, United Kingdom. {malzahnd,opperm}@aston.ac.uk Abstract Based on a statistical mechanics approach, we develop a method for approximately computing average case learning curves for Gaussian process regression models. The approximation works well in the large sample size limit and for arbitrary dimensionality of the input space. We explain how the approximation can be systematically improved and argue that similar techniques can be applied to general likelihood models. 1 Introduction Gaussian process (GP) models have gained considerable interest in the Neural Computation Community (see e.g. [1, 2, 3, 4]) in recent years. Being non-parametric models by construction, their theoretical understanding seems to be less well developed compared to simpler parametric models like neural networks. We are especially interested in developing theoretical approaches which will at least give good approximations to generalization errors when the number of training data is sufficiently large. In this paper we present a step in this direction which is based on a statistical mechanics approach. In contrast to most previous applications of statistical mechanics to learning theory, we are not limited to the so-called "thermodynamic" limit, which would require a high dimensional input space. Our work is very much motivated by recent papers of Peter Sollich (see e.g. [5]), who presented a nice approximate treatment of the Bayesian generalization error of GP regression which actually gives good results even in the case of a one dimensional input space. His method is based on an exact recursion for the generalization error of the regression problem together with approximations that decouple certain correlations of random variables.
Unfortunately, the method seems to be limited because the exact recursion is an artifact of the Gaussianity of the regression model and is not available for other cases such as classification models. Second, it is not clear how to assess the quality of the approximations made and how one may systematically improve on them. Finally, the calculation is (so far) restricted to a full Bayesian scenario, where a prior average over the unknown data generating function simplifies the analysis. Our approach has the advantage that it is more general and may also be applied to other likelihoods. It allows us to compute other quantities besides the generalization error. Finally, it is possible to compute the corrections to our approximations. 2 Regression with Gaussian processes To explain the Gaussian process scenario for regression problems [2], we assume that we observe corrupted values y(x) in R of an unknown function f(x) at input points x in R^d. If the corruption is due to independent Gaussian noise with variance sigma^2, the likelihood for a set of m example data D = (y(x_1), ..., y(x_m)) is given by

P(D|f) = exp( -sum_{i=1}^m (y_i - f(x_i))^2 / (2 sigma^2) ) / (2 pi sigma^2)^{m/2}    (1)

where y_i = y(x_i). The goal of a learner is to give an estimate of the function f(x). The available prior information is that f is a realization of a Gaussian process (random field) with zero mean and covariance C(x, x') = E[f(x) f(x')], where E denotes the expectation over the Gaussian process. We assume that the prediction at a test point x is given by the posterior expectation of f(x), i.e. by

f_hat(x) = E{f(x)|D} = E[ f(x) P(D|f) ] / Z    (2)

where the partition function Z normalises the posterior. Calling the true data generating function f* (in order to distinguish it from the functions over which we integrate in the expectations), we are interested in the learning curve, i.e. the generalization (mean square) error averaged over independent draws of example data, i.e.
eps_g = [ <(f*(x) - f_hat(x))^2> ]_D as a function of m, the sample size. The brackets [...]_D denote averages over example data sets, where we assume that the inputs x_i are drawn independently at random from a density p(x). <...> denotes an average over test inputs drawn from the same density. Later, the same brackets will also be used for averages over several different test points and for joint averages over test inputs and test outputs. 3 The Partition Function As typical of statistical mechanics approaches, we base our analysis on the averaged "free energy" [-ln Z]_D, where the partition function Z (see Eq. (2)) is

Z = E P(D|f).    (3)

[ln Z]_D serves as a generating function for suitable posterior averages. The concrete application to eps_g will be given in the next section. The computation of [ln Z]_D is based on the replica trick ln Z = lim_{n->0} (Z^n - 1)/n, where we compute [Z^n]_D for integer n and perform the continuation at the end. Introducing a set of auxiliary integration variables z_{ka} in order to decouple the squares, we get

[Z^n]_D = [ Int prod_{k,a} (dz_{ka}/sqrt(2 pi)) exp( -(sigma^2/2) sum_{k,a} z_{ka}^2 ) E_n exp( i sum_{k,a} z_{ka} (f_a(x_k) - y_k) ) ]_D    (4)

where E_n denotes the expectation over the n times replicated GP measure. In general, it seems impossible to perform the average over the data. Using a cumulant expansion, an infinite series of terms would be created. However, one may be tempted to try the following heuristic approximation: if (for fixed function f) the distribution of f(x_k) - y_k was a zero mean Gaussian, we would simply end up with only the second cumulant and

[Z^n]_D ~ Int prod_{k,a} (dz_{ka}/sqrt(2 pi)) exp( -(sigma^2/2) sum_{k,a} z_{ka}^2 ) E_n exp( -(1/2) sum_{a,b} sum_k z_{ka} z_{kb} <(f_a(x) - y)(f_b(x) - y)> ).    (5)

Although such a reasoning may be justified in cases where the dimensionality of the inputs x is large, the assumption of approximate Gaussianity is typically (in the sense of the prior measure over functions f) completely wrong for small dimensions. Nevertheless, we will argue in the next section that the expression Eq. (5) (justified by a different reason) is a good approximation for large sample sizes and nonzero noise level.
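The decoupling of the squares behind Eq. (4) is the standard Gaussian identity exp(-u^2/(2 sigma^2)) = Int dz sqrt(sigma^2/(2 pi)) exp(-(sigma^2/2) z^2 + i z u). A quick numerical sanity check of this identity (our own illustration; the values of u and sigma are arbitrary, and only the cosine part is integrated since the imaginary part vanishes by symmetry):

```python
import numpy as np

sigma, u = 0.7, 1.3
z = np.linspace(-60.0, 60.0, 400001)
# Real part of sqrt(sigma^2/(2 pi)) * exp(-(sigma^2/2) z^2 + i z u).
integrand = np.sqrt(sigma**2 / (2 * np.pi)) * np.exp(-sigma**2 * z**2 / 2) * np.cos(z * u)
lhs = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z)))  # trapezoid rule
rhs = float(np.exp(-u**2 / (2 * sigma**2)))
print(abs(lhs - rhs) < 1e-6)   # -> True
```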
We will postpone the argument and proceed to evaluate Eq. (5) following a fairly standard recipe: the high dimensional integrals over the z_{ka} are turned into low dimensional integrals by the introduction of "order parameters" eta_{ab} = sum_{k=1}^m z_{ka} z_{kb}, so that

[Z^n]_D ~ Int prod_{a<=b} deta_{ab} exp( -(sigma^2/2) sum_a eta_{aa} + G({eta}) ) E_n exp( -(1/2) sum_{a,b} eta_{ab} <(f_a(x) - y)(f_b(x) - y)> )    (6)

where exp(G({eta})) = Int prod_{k,a} (dz_{ka}/sqrt(2 pi)) prod_{a<=b} delta( sum_{k=1}^m z_{ka} z_{kb} - eta_{ab} ). We expect that in the limit of large sample size m the integrals are well approximated by the saddle-point method. To perform the limit n -> 0, we make the assumption that the saddle-point of the matrix eta is replica symmetric, i.e. eta_{ab} = eta for a != b and eta_{aa} = eta_0. After some calculations we arrive at

[ln Z]_D = -(sigma^2 eta_0)/2 + (m/2) ln(eta_0 - eta) + (m eta)/(2(eta_0 - eta)) - (eta/2) <(E~ f(x) - y)^2> + ln E~ exp[ -((eta_0 - eta)/2) <(f(x) - y)^2> ] - (m/2)(ln(2 pi m) - 1)    (7)

into which we have to insert the values eta and eta_0 that make the right hand side an extremum. We have defined a new auxiliary (translated) Gaussian measure over functions, whose expectation we denote by E~, in (8), where phi is a functional of f. For a given input distribution it is possible to compute the required expectations in terms of sums over eigenvalues and eigenfunctions of the covariance kernel C(x, x'). We will give the details as well as the explicit order parameter equations in a full version of the paper. 4 Generalization error To relate the generalization error to the order parameters, note that in the replica framework (assuming the approximation Eq. (5)) eps_g + sigma^2 is obtained from the replica expression Eq. (6) by differentiating with respect to an off-diagonal order parameter eta_{12}, which by a partial integration and a subsequent saddle-point integration yields

eps_g = -(m eta / (eta_0 - eta)^2) sigma^2.    (9)

It is also possible to compute other error measures in terms of the order parameters, like the expected error on the (noisy) training data, defined by (10). The "true" training error, which compares the prediction with the data generating function f*, is somewhat more complicated and will be given elsewhere. 5 Why (and when) the approximation works Our intuition behind the approximation Eq. (5) is that for sufficiently large sample size the partition function is dominated by regions in function space which are close to the data generating function f*, such that terms like <(f_a(x) - y)(f_b(x) - y)> are typically small and higher order polynomials in f_a(x) - y generated by a cumulant expansion are less important. This intuition can be checked self-consistently by estimating the omitted terms perturbatively. We use the following modified partition function

[Z^n(lambda)]_D = [ Int prod_{k,a} (dz_{ka}/sqrt(2 pi)) exp( -(sigma^2/2) sum_{k,a} z_{ka}^2 ) E_n exp( i lambda sum_{k,a} z_{ka} (f_a(x_k) - y_k) - ((1 - lambda^2)/2) sum_{a,b} sum_k z_{ka} z_{kb} <(f_a(x) - y)(f_b(x) - y)> ) ]_D    (11)

which for lambda = 1 becomes the "true" partition function, whereas Eq. (5) is obtained for lambda = 0. Expanding in powers of lambda (the terms with odd powers vanish) is equivalent to generating the cumulant expansion and subsequently expanding the non-quadratic terms down. Within the saddle-point approximation, the first nonzero correction to our approximation of [ln Z] is of order lambda^4; it is a combination, weighted by the order parameters, of the averages sigma^2 <C~(x,x)>, <C~(x,x) F^2(x)>, <C~(x,x') F(x) F(x')>, eta <C~(x,x') C~(x,x'') C~(x',x'')>, eta <C~(x,x) C~^2(x,x')>, and <C~^2(x,x)> - <C~^2(x,x')> (Eq. (12)). Here C~(x,x') = E~{f(x) f(x')} denotes the covariance with respect to the auxiliary measure and F(x) = f*(x) - <C~(x,x') f*(x')>. The significance of the individual terms as m -> infinity can be estimated from the following scaling. We find that (eta_0 - eta) = O(m) is a positive quantity, whereas eta = O(m) is negative. C~(x,x') = O(1/m). Using these relations, we can show that Eq. (12) remains finite as m -> infinity, whereas the leading approximation Eq.
(7) diverges with m. We have not (yet) computed the resulting correction to eps_g. However, we have studied the somewhat simpler error measure eps' = (1/m) sum_i [ E{(f*(x_i) - f(x_i))^2 | D} ]_D, which can be obtained from a derivative of [ln Z]_D with respect to sigma^2. It equals the error of a Gibbs algorithm (sampling from the posterior) on the training data. We can show that the correction to eps' is typically a factor of O(1/m) smaller than the leading term. However, our approximation becomes worse with decreasing noise variance sigma^2. sigma = 0 is a singular case for which (at least for some GPs with slowly decreasing eigenvalues) it can be shown that our approximation for eps_g decays to zero at the wrong rate. For small values of sigma, sigma -> 0, we expect that higher order terms in the perturbation expansion will become relevant. 6 Results We compare our analytical results for the error measures eps_g and eps_t with simulations of GP regression. For simplicity, we have chosen periodic processes of the form f(x) = sqrt(2) sum_n (a_n cos(2 pi n x) + b_n sin(2 pi n x)) for x in [0,1], where the coefficients a_n, b_n are independent Gaussians with E{a_n^2} = E{b_n^2} = Lambda_n. This choice is convenient for analytical calculations by the orthogonality of the trigonometric functions when we sample the x_i from a uniform density on [0,1]. The Lambda_n and the translation invariant covariance kernel are related by c(x - y) = C(x,y) = 2 sum_n Lambda_n cos(2 pi n (x - y)) and Lambda_n = Int_0^1 c(x) cos(2 pi n x) dx. We specialise to the (periodic) RBF kernel c(x) = sum_{k=-infinity}^{infinity} exp[-(x - k)^2 / (2 l^2)] with l = 0.1. For an illustration we generated learning curves for two target functions f* as displayed in Fig. 1. One function is a sine wave f*(x) = sqrt(2 Lambda_1) sin(2 pi x), while the other is a random realisation from the prior distribution. The symbols in the left panel of Fig. 1 represent example sets of fifty data points. The data points have been obtained by corruption of the target function with Gaussian noise of variance sigma^2 = 0.01. The right panel of Fig.
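This periodic setup is easy to reproduce numerically. The following is our own sketch (grid resolution, the truncations kmax and n_max, and the quadrature scheme are implementation choices, not taken from the paper); it computes the eigenvalues Lambda_n of the periodised RBF kernel and draws a random target from the prior, checking the quadrature against the closed form obtained by unfolding the periodic sum:

```python
import numpy as np

def lambda_n(n, l=0.1, kmax=5, npts=20001):
    """Lambda_n = int_0^1 c(x) cos(2 pi n x) dx for the periodised RBF
    kernel c(x) = sum_k exp(-(x-k)^2/(2 l^2)), by trapezoidal quadrature."""
    x = np.linspace(0.0, 1.0, npts)
    c = sum(np.exp(-(x - k) ** 2 / (2 * l ** 2)) for k in range(-kmax, kmax + 1))
    g = c * np.cos(2 * np.pi * n * x)
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x)))

def sample_prior(x, n_max=30, l=0.1, rng=None):
    """Draw f(x) = sqrt(2) sum_n (a_n cos(2 pi n x) + b_n sin(2 pi n x))
    with independent a_n, b_n ~ N(0, Lambda_n)."""
    rng = rng or np.random.default_rng(0)
    f = np.zeros_like(x)
    for n in range(1, n_max + 1):
        lam = max(lambda_n(n, l), 0.0)
        a, b = rng.normal(scale=np.sqrt(lam), size=2)
        f += np.sqrt(2) * (a * np.cos(2 * np.pi * n * x) + b * np.sin(2 * np.pi * n * x))
    return f

# Unfolding the periodic sum over k gives the closed form
# Lambda_n = sqrt(2 pi) * l * exp(-2 pi^2 n^2 l^2), a useful sanity check:
closed = np.sqrt(2 * np.pi) * 0.1 * np.exp(-2 * np.pi ** 2 * 0.01)
print(abs(lambda_n(1) - closed) < 1e-4)   # -> True
```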
1 shows the data-averaged generalization and training errors ε_g, ε_t as a function of the number m of example data. Solid curves display simulation results, while the results of our theory, Eqs. (9), (10), are given by dashed lines. The training error ε_t converges to the noise level σ². As one can see from the pictures, our theory is very accurate when the number m of example data is sufficiently large. While the generalization error ε_g differs initially, the asymptotic decay is the same.

7 The Bayes error

We can also apply our method to the Bayesian generalization error (previously approximated by Peter Sollich [5]). The Bayes error is obtained by averaging the generalization error over "true" functions f* drawn at random from the prior distribution. Within our approach this can be achieved by an average of Eq. (7) over f*. The resulting order parameter equations and their relation to the Bayes error turn out to be identical to Sollich's result. Hence, we have managed to re-derive his approximation within a broader framework from which possible corrections can also be obtained.

Figure 1: The left panels show two data generating functions f*(x) and example sets of 50 data points. The right panels display the corresponding averaged learning curves. Solid curves display simulation results for generalization and training errors ε_g, ε_t. The results of our theory, Eqs. (9), (10), are given by dashed lines.

8 Future work

At present, we extend our method in the following directions: • The statistical mechanics framework presented in this paper is based on a partition function Z which can be used to generate a variety of other data averages for posterior expectations.
An obvious interesting quantity is given by the sample fluctuations of the generalization error, which give confidence intervals on ε_g. • Obviously, our method is not restricted to a regression model (in this case, however, all resulting integrals are elementary) but can also be directly generalized to other likelihoods, such as the classification case [4, 6]. A further application to Support Vector Machines should be possible. • The saddle-point approximation neglects fluctuations of the order parameters. This may be well justified when m is sufficiently large. It is possible to improve on this by including the quadratic expansion around the saddle-point. • Finally, one may criticise our method as being of minor relevance to practical applications, because our calculations require knowledge of the unknown function f* and the density of the inputs x. However, Eqs. (9) and (10) show that important error measures are solely expressed by the order parameters η and η₀. Hence, estimating some error measures and the posterior variance at the data points empirically would allow us to predict values for the order parameters. Those in turn could be used to make predictions for the unknown generalization error.

Acknowledgement This work has been supported by EPSRC grant GR/M81601.

References

[1] D. J. C. MacKay, Gaussian Processes: A Replacement for Neural Networks, NIPS tutorial 1997. May be obtained from http://wol.ra.phy.cam.ac.uk/pub/mackay/.
[2] C. K. I. Williams and C. E. Rasmussen, Gaussian Processes for Regression, in Neural Information Processing Systems 8, D. S. Touretzky, M. C. Mozer and M. E. Hasselmo, eds., 514-520, MIT Press (1996).
[3] C. K. I. Williams, Computing with Infinite Networks, in Neural Information Processing Systems 9, M. C. Mozer, M. I. Jordan and T. Petsche, eds., 295-301, MIT Press (1997).
[4] D. Barber and C. K. I.
Williams, Gaussian Processes for Bayesian Classification via Hybrid Monte Carlo, in Neural Information Processing Systems 9, M. C. Mozer, M. I. Jordan and T. Petsche, eds., 340-346, MIT Press (1997).
[5] P. Sollich, Learning curves for Gaussian processes, in Neural Information Processing Systems 11, M. S. Kearns, S. A. Solla and D. A. Cohn, eds., 344-350, MIT Press (1999).
[6] L. Csató, E. Fokoué, M. Opper, B. Schottky, and O. Winther, Efficient approaches to Gaussian process classification, in Advances in Neural Information Processing Systems, volume 12, 2000.
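The simulation setup of Section 6 is easy to reproduce. The following is a minimal numerical sketch (not the authors' code): it fits GP regression with the periodic RBF kernel, ℓ = 0.1 and noise variance σ² = 0.01, to noisy samples of the sine-wave target, and measures training and generalization errors empirically. Truncating the kernel sum to |k| ≤ 2, the grid sizes and the random seed are implementation choices of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
ell, sigma2, m = 0.1, 0.01, 200

def c(d):
    """Periodic RBF kernel c(x - y) = sum_k exp(-(x - y - k)^2 / (2 ell^2)), truncated to |k| <= 2."""
    d = np.asarray(d)[..., None]
    return np.exp(-(d - np.arange(-2, 3)) ** 2 / (2 * ell ** 2)).sum(-1)

# Lambda_1 = integral_0^1 c(x) cos(2 pi x) dx, by the trapezoidal rule.
xg = np.linspace(0.0, 1.0, 2001)
dxg = xg[1] - xg[0]
vals = c(xg) * np.cos(2 * np.pi * xg)
Lam1 = (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dxg

def f_star(x):
    """Sine-wave target f*(x) = sqrt(2 Lambda_1) sin(2 pi x)."""
    return np.sqrt(2 * Lam1) * np.sin(2 * np.pi * x)

# Training data: uniform inputs, targets corrupted by Gaussian noise of variance sigma2.
X = rng.uniform(0.0, 1.0, m)
y = f_star(X) + np.sqrt(sigma2) * rng.normal(size=m)

# GP posterior mean: f_hat(x) = c(x, X) (C(X, X) + sigma2 I)^{-1} y.
K = c(X[:, None] - X[None, :]) + sigma2 * np.eye(m)
alpha = np.linalg.solve(K, y)

def f_hat(xs):
    return c(xs[:, None] - X[None, :]) @ alpha

eps_t = np.mean((y - f_hat(X)) ** 2)              # training error, vs. noisy targets
eps_g = np.mean((f_star(xg) - f_hat(xg)) ** 2)    # empirical generalization error
```

At m = 200 the training error ε_t sits close to (just below) the noise level σ² = 0.01, while the generalization error of the posterior mean is far smaller, in line with the learning curves in Fig. 1.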
2000
Sequentially fitting "inclusive" trees for inference in noisy-OR networks

Brendan J. Frey¹, Relu Patrascu¹, Tommi S. Jaakkola², Jodi Moran¹
¹ Intelligent Algorithms Lab, University of Toronto, www.cs.toronto.edu/~frey
² Computer Science and Electrical Engineering, Massachusetts Institute of Technology

Abstract

An important class of problems can be cast as inference in noisy-OR Bayesian networks, where the binary state of each variable is a logical OR of noisy versions of the states of the variable's parents. For example, in medical diagnosis, the presence of a symptom can be expressed as a noisy-OR of the diseases that may cause the symptom - on some occasions, a disease may fail to activate the symptom. Inference in richly-connected noisy-OR networks is intractable, but approximate methods (e.g., variational techniques) are showing increasing promise as practical solutions. One problem with most approximations is that they tend to concentrate on a relatively small number of modes in the true posterior, ignoring other plausible configurations of the hidden variables. We introduce a new sequential variational method for bipartite noisy-OR networks, that favors including all modes of the true posterior and models the posterior distribution as a tree. We compare this method with other approximations using an ensemble of networks with network statistics that are comparable to the QMR-DT medical diagnostic network.

1 Inclusive variational approximations

Approximate algorithms for probabilistic inference are gaining in popularity and are now even being incorporated into VLSI hardware (T. Richardson, personal communication). Approximate methods include variational techniques (Ghahramani and Jordan 1997; Saul et al. 1996; Frey and Hinton 1999; Jordan et al. 1999), local probability propagation (Gallager 1963; Pearl 1988; Frey 1998; MacKay 1999a; Freeman and Weiss 2001) and Markov chain Monte Carlo (Neal 1993; MacKay 1999b).
Many algorithms have been proposed in each of these classes. One problem that most of the above algorithms suffer from is a tendency to concentrate on a relatively small number of modes of the target distribution (the distribution being approximated). In the case of medical diagnosis, different modes correspond to different explanations of the symptoms. Markov chain Monte Carlo methods are usually guaranteed to eventually sample from all the modes, but this may take an extremely long time, even when tempered transitions (Neal 1996) are used.

Figure 1: We approximate P(x) by adjusting the mean and variance of a Gaussian, Q(x). (a) The result of minimizing D(Q||P) = Σ_x Q(x) log(Q(x)/P(x)), as is done for most variational methods. (b) The result of minimizing D(P||Q) = Σ_x P(x) log(P(x)/Q(x)).

Preliminary results on local probability propagation in richly connected networks show that it is sometimes able to oscillate between plausible modes (Murphy et al. 1999; Frey 2000), but other results also show that it sometimes diverges or oscillates between implausible configurations (McEliece et al. 1996). Most variational techniques minimize a cost function that favors finding the single, most massive mode, excluding less probable modes of the target distribution (e.g., Saul et al. 1996; Ghahramani and Jordan 1997; Jaakkola and Jordan 1999; Frey and Hinton 1999; Attias 1999). More sophisticated variational techniques capture multiple modes using substructures (Saul and Jordan 1996) or by leaving part of the original network intact and approximating the remainder (Jaakkola and Jordan 1999). However, although these methods increase the number of modes that are captured, they still exclude modes. Variational techniques approximate a target distribution P(x) using a simpler, parameterized distribution Q(x) (or a parameterized bound).
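The mode-excluding versus mode-covering behaviour of Fig. 1 can be checked numerically. The sketch below (toy densities chosen for illustration, not from the paper) fits a Gaussian Q to a bimodal mixture P under both KL directions: a grid search for D(Q||P) locks onto one mode, while the minimizer of D(P||Q) over Gaussians is obtained by matching the moments of P and therefore spans both modes.

```python
import numpy as np

# Toy bimodal target P: equal mixture of N(-3, 1) and N(3, 1), on a grid.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

P = 0.5 * gauss(x, -3.0, 1.0) + 0.5 * gauss(x, 3.0, 1.0)

# Exclusive fit: grid-minimize D(Q||P); the optimum locks onto a single mode.
best = (np.inf, 0.0, 1.0)
for mu in np.linspace(-5.0, 5.0, 41):
    for s in np.linspace(0.5, 5.0, 46):
        Q = gauss(x, mu, s)
        kl = np.sum(Q * (np.log(Q + 1e-300) - np.log(P + 1e-300))) * dx
        if kl < best[0]:
            best = (kl, mu, s)
_, mu_ex, s_ex = best

# Inclusive fit: the Gaussian minimizing D(P||Q) matches the moments of P,
# so it is broad enough to cover both modes.
mu_in = np.sum(x * P) * dx
s_in = np.sqrt(np.sum((x - mu_in) ** 2 * P) * dx)
```

The exclusive fit ends up near one mode (mean ≈ ±3, width ≈ 1), while the inclusive fit is centered between them with width ≈ √10, exactly the contrast drawn in Figs. 1a and 1b.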
For example, P(disease_1, disease_2, ..., disease_N | symptoms) may be approximated by a factorized distribution, Q_1(disease_1) Q_2(disease_2) ... Q_N(disease_N). For the current set of observed symptoms, the parameters of the Q-distributions are adjusted to make Q as close as possible to P. A common approach to variational inference is to minimize a relative entropy,

D(Q||P) = Σ_x Q(x) log [Q(x)/P(x)].  (1)

Notice that D(Q||P) ≠ D(P||Q). Often D(Q||P) can be minimized with respect to the parameters of Q using iterative optimization or even exact optimization. To see how minimizing D(Q||P) may exclude modes of the target distribution, suppose Q is a Gaussian and P is bimodal with a region of vanishing density between the two modes, as shown in Fig. 1. If we minimize D(Q||P) with respect to the mean and variance of Q, it will cover only one of the two modes, as illustrated in Fig. 1a. (We assume the symmetry is broken.) This is because D(Q||P) will tend to infinity if Q is nonzero in the region where P has vanishing density. In contrast, if we minimize D(P||Q) = Σ_x P(x) log(P(x)/Q(x)) with respect to the mean and variance of Q, it will cover all modes, since D(P||Q) will tend to infinity if Q vanishes in any region where P is nonzero. See Fig. 1b. For many problems, including medical diagnosis, it is easy to argue that it is more important that our approximation include all modes than exclude implausible configurations at the cost of excluding other modes. The former leads to a low number of false negatives, whereas the latter may lead to a large number of false negatives (concluding a disease is not present when it is).

Figure 2: Bipartite Bayesian network. The s_k's are observed, the d_n's are hidden.

2 Bipartite noisy-OR networks

Fig. 2 shows a bipartite noisy-OR Bayesian network with N binary hidden variables d = (d_1, ..., d_N) and K binary observed variables s = (s_1, ..., s_K).
Later, we present results on medical diagnosis, where d_n = 1 indicates a disease is active, d_n = 0 indicates a disease is inactive, s_k = 1 indicates a symptom is active and s_k = 0 indicates a symptom is inactive. The joint distribution is

P(d, s) = [∏_{k=1}^K P(s_k|d)] [∏_{n=1}^N P(d_n)].  (2)

In the case of medical diagnosis, this form assumes the diseases are independent.¹ Although some diseases probably do depend on other diseases, this form is considered to be a worthwhile representation of the problem (Shwe et al., 1991). The likelihood for s_k takes the noisy-OR form (Pearl 1988). The probability that symptom s_k fails to be activated (s_k = 0) is the product of the probabilities that each active disease fails to activate s_k:

P(s_k = 0|d) = p_k0 ∏_{n=1}^N p_kn^{d_n}.  (3)

p_kn is the probability that an active d_n fails to activate s_k. p_k0 accounts for a "leak probability": 1 − p_k0 is the probability that symptom s_k is active when none of the diseases are active. Exact inference computes the distribution over d given a subset of observed values in s. However, if s_k is not observed, the corresponding likelihood (node plus edges) may be deleted to give a new network that describes the marginal distribution over d and the remaining variables in s. So, we assume that we are considering a subnetwork where all the variables in s are observed. We reorder the variables in s so that the first J variables are active (s_k = 1, 1 ≤ k ≤ J) and the remaining variables are inactive (s_k = 0, J + 1 ≤ k ≤ K). The posterior distribution can then be written

P(d|s) ∝ P(d, s) = [∏_{k=1}^J (1 − p_k0 ∏_{n=1}^N p_kn^{d_n})] [∏_{k=J+1}^K (p_k0 ∏_{n=1}^N p_kn^{d_n})] [∏_{n=1}^N P(d_n)].  (4)

Taken together, the two terms in brackets on the right take a simple, product form over the variables in d.

¹ However, the diseases are dependent given that some symptoms are present.

So, the first step in inference is to "absorb" the inactive
variables in s by modifying the priors P(d_n) as follows:

P′(d_n) = α_n P(d_n) (∏_{k=J+1}^K p_kn)^{d_n},  (5)

where α_n is a constant that normalizes P′(d_n). Assuming the inactive symptoms have been absorbed, we have

P(d|s) ∝ [∏_{k=1}^J (1 − p_k0 ∏_{n=1}^N p_kn^{d_n})] [∏_{n=1}^N P′(d_n)].  (6)

The term in brackets on the left does not have a product form. The entire expression can be multiplied out to give a sum of 2^J product forms, and exact "QuickScore" inference can be performed by combining the results of exact inference in each of the 2^J product forms (Heckerman 1989). However, this exponential time complexity makes large problems, such as QMR-DT, intractable.

3 Sequential inference using inclusive variational trees

As described above, many variational methods minimize D(Q||P), and find approximations that exclude some modes of the posterior distribution. We present a method that minimizes D(P||Q) sequentially - by absorbing one observation at a time - so as not to exclude modes of the posterior. Also, we approximate the posterior distribution with a tree. (Directed and undirected trees are equivalent; we use a directed representation, where each variable has at most one parent.) The algorithm absorbs one active symptom at a time, producing a new tree by searching for the tree that is closest - in the D(P||Q) sense - to the product of the previous tree and the likelihood for the next symptom. This search can be performed efficiently in O(N²) time using probability propagation in two versions of the previous tree to compute weights for edges of a new tree, and then applying a minimum-weight spanning-tree algorithm. Let T_k(d) be the tree approximation obtained after absorbing the kth symptom, s_k = 1. Initially, we take T_0(d) to be a tree that decouples the variables and has marginals equal to the marginals obtained by absorbing the inactive symptoms, as described above.
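To make Eqs. (3) and (5) concrete, here is a small numerical sketch of the noisy-OR likelihood and of absorbing the inactive symptoms into independent priors. All network sizes and parameter values below are invented for illustration; a real QMR-DT-scale network has hundreds of diseases.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 4, 6                          # toy sizes, far below QMR-DT scale
prior = np.full(N, 0.05)             # P(d_n = 1)
p0 = np.full(K, 0.98)                # leak failure probabilities p_k0
p = rng.uniform(0.3, 1.0, (K, N))    # p_kn: prob. that active d_n fails to activate s_k

def p_symptom_off(d, k):
    """Eq. (3): P(s_k = 0 | d) = p_k0 * prod_n p_kn^{d_n}."""
    return p0[k] * np.prod(p[k] ** d)

def absorb_inactive(prior, inactive):
    """Eq. (5): fold symptoms observed to be off into new independent priors P'(d_n)."""
    on = prior * np.prod(p[inactive], axis=0)   # unnormalized P'(d_n = 1)
    off = 1.0 - prior                           # unnormalized P'(d_n = 0)
    return on / (on + off)                      # division by (on + off) plays the role of alpha_n

d = np.array([1, 0, 0, 0])                      # one active disease
lik0 = p_symptom_off(d, 0)
post = absorb_inactive(prior, np.arange(K))     # every symptom observed to be off
```

Absorbing inactive symptoms can only lower each disease's marginal (post < prior componentwise), as expected: a symptom that fails to appear is evidence against its possible causes.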
Interpreting the tree T_{k−1}(d) from the previous step as the current "prior" over the diseases, we use the likelihood P(s_k = 1|d) for the next symptom to obtain a new estimate of the posterior:

P̂_k(d|s_1, ..., s_k) ∝ T_{k−1}(d) P(s_k = 1|d) = T_{k−1}(d) (1 − p_k0 ∏_{n=1}^N p_kn^{d_n}) = T_{k−1}(d) − T′_{k−1}(d),  (7)

where T′_{k−1}(d) = T_{k−1}(d) (p_k0 ∏_{n=1}^N p_kn^{d_n}) is a modified tree. Let the new tree be T_k(d) = ∏_n T_k(d_n|d_{π_k(n)}), where π_k(n) is the index of the parent of d_n in the new tree. The parent function π_k(n) and the conditional probability tables of T_k(d) are found by minimizing

D(P̂_k||T_k) = Σ_d P̂_k(d|s_1, ..., s_k) log [P̂_k(d|s_1, ..., s_k)/T_k(d)].  (8)

Ignoring constants, we have

D(P̂_k||T_k) = − Σ_d P̂_k(d|s_1, ..., s_k) log T_k(d)
= − Σ_d (T_{k−1}(d) − T′_{k−1}(d)) log (∏_n T_k(d_n|d_{π_k(n)}))
= − Σ_n ( Σ_d (T_{k−1}(d) − T′_{k−1}(d)) log T_k(d_n|d_{π_k(n)}) )
= − Σ_n ( Σ_{d_n} Σ_{d_{π_k(n)}} (T_{k−1}(d_n, d_{π_k(n)}) − T′_{k−1}(d_n, d_{π_k(n)})) log T_k(d_n|d_{π_k(n)}) ).

For a given structure (parent function π_k(n)), the optimal conditional probability tables are

T_k(d_n|d_{π_k(n)}) = β_n (T_{k−1}(d_n, d_{π_k(n)}) − T′_{k−1}(d_n, d_{π_k(n)})),  (9)

where β_n is a constant that ensures Σ_{d_n} T_k(d_n|d_{π_k(n)}) = 1. This table is easily computed using probability propagation in the two trees to compute the two marginals needed in the difference. The optimal conditional probability table for a variable is independent of the parent-child relationships in the remainder of the network. So, for the current symptom, we compute the optimal conditional probability tables for all N(N − 1)/2 possible parent-child relationships in O(N²) time using probability propagation. Then, we use a minimum-weight directed spanning tree algorithm (Bock 1971) to search for the best tree. Once all of the symptoms have been absorbed, we use the final tree distribution T_J(d) to make inferences about d given s. The order in which the symptoms are absorbed will generally affect the quality of the resulting tree (Jaakkola and Jordan 1999), but we used a random ordering in the experiments reported below.

4 Results on QMR-DT type networks

Using the structural and parameter statistics of the QMR-DT network given in Shwe et al.
(1991), we simulated 30 QMR-DT type networks with roughly 600 diseases each. There were 10 networks in each of 3 groups with 5, 10 and 15 instantiated active symptoms. We chose the number of active symptoms to be small enough that we can compare our approximate method with the exact QuickScore method (Heckerman 1989). We also tried two other approximate inference methods: local probability propagation (Murphy et al. 1999) and a variational upper bound (Jaakkola and Jordan 1999). For medical diagnosis, an important question is how many most probable diseases n' under the approximate posterior must be examined before the most probable n diseases under the exact posterior are found. Clearly, n ≤ n' ≤ N. An exact inference algorithm will give n' = n, whereas an approximate algorithm that mistakenly ranks the most probable disease last will give n' = N. For each group of networks and each inference method, we averaged the 10 values of n' for each value of n. The left column of plots in Fig. 3 shows the average of n' versus n for 5, 10 and 15 active symptoms. The sequential tree-fitting method is closest to optimal (n' = n) in all cases. The right column of plots shows the "extra work" caused by the excess number of diseases n' − n that must be examined for the approximate methods.

Figure 3: Comparisons of the number of most probable diseases n' under the approximate posterior that must be examined before the most probable n diseases under the exact posterior are found, for 5, 10 and 15 positive findings. Approximate methods include the sequential tree-fitting method presented in this paper (tree), local probability propagation (pp) and a variational upper bound (ub).

5 Summary

Noisy-OR networks can be used to model a variety of problems, including medical diagnosis. Exact inference in large, richly connected noisy-OR networks is intractable, and most approximate inference algorithms tend to concentrate on a small number of most probable configurations of the hidden variables under the posterior. We presented an "inclusive" variational method for bipartite noisy-OR networks that favors including all probable configurations, at the cost of including some improbable configurations. The method fits a tree to the posterior distribution sequentially, i.e., one observation at a time. Results on an ensemble of QMR-DT type networks show that the method performs better than local probability propagation and a variational upper bound for ranking most probable diseases.

Acknowledgements. We thank Dale Schuurmans for discussions about this work.

References

H. Attias 1999. Independent factor analysis. Neural Computation 11:4, 803-852.
F. Bock 1971. An algorithm to construct a minimum directed spanning tree in a directed network. Developments in Operations Research, Gordon and Breach, New York, 29-44.
W. T. Freeman and Y. Weiss 2001. On the fixed points of the max-product algorithm. To appear in IEEE Transactions on Information Theory, Special issue on Codes on Graphs and Iterative Algorithms.
B. J. Frey 1998. Graphical Models for Machine Learning and Digital Communication. MIT Press, Cambridge, MA.
B. J.
Frey 2000. Filling in scenes by propagating probabilities through layers and into appearance models. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE Computer Society Press, Los Alamitos, CA.
B. J. Frey and G. E. Hinton 1999. Variational learning in non-linear Gaussian belief networks. Neural Computation 11:1, 193-214.
R. G. Gallager 1963. Low-Density Parity-Check Codes. MIT Press, Cambridge, MA.
Z. Ghahramani and M. I. Jordan 1997. Factorial hidden Markov models. Machine Learning 29, 245-273.
D. Heckerman 1989. A tractable inference algorithm for diagnosing multiple diseases. Proceedings of the Fifth Conference on Uncertainty in Artificial Intelligence.
T. S. Jaakkola and M. I. Jordan 1999. Variational probabilistic inference and the QMR-DT network. Journal of Artificial Intelligence Research 10, 291-322.
M. I. Jordan, Z. Ghahramani, T. S. Jaakkola and L. K. Saul 1999. An introduction to variational methods for graphical models. In M. I. Jordan (ed) Learning in Graphical Models, MIT Press, Cambridge, MA.
D. J. C. MacKay 1999a. Good error-correcting codes based on very sparse matrices. IEEE Transactions on Information Theory 45:2, 399-431.
D. J. C. MacKay 1999b. Introduction to Monte Carlo methods. In M. I. Jordan (ed) Learning in Graphical Models, MIT Press, Cambridge, MA.
R. J. McEliece, E. R. Rodemich and J.-F. Cheng 1996. The turbo decision algorithm. Proceedings of the 33rd Allerton Conference on Communication, Control and Computing, Champaign-Urbana, IL.
K. P. Murphy, Y. Weiss and M. I. Jordan 1999. Loopy belief propagation for approximate inference: An empirical study. Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann, San Francisco, CA.
R. M. Neal 1993. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, Computer Science, University of Toronto.
R. M. Neal 1996. Sampling from multimodal distributions using tempered transitions.
Statistics and Computing 6, 353-366.
L. K. Saul, T. Jaakkola and M. I. Jordan 1996. Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research 4, 61-76.
L. K. Saul and M. I. Jordan 1996. Exploiting tractable substructures in intractable networks. In D. Touretzky, M. Mozer, and M. Hasselmo (eds) Advances in Neural Information Processing Systems 8. MIT Press, Cambridge, MA.
M. Shwe, B. Middleton, D. Heckerman, M. Henrion, E. Horvitz, H. Lehmann and G. Cooper 1991. Probabilistic diagnosis using a reformulation of the INTERNIST-1/QMR knowledge base I. The probabilistic model and inference algorithms. Methods of Information in Medicine 30, 241-255.
2000
Whence Sparseness?

C. van Vreeswijk
Gatsby Computational Neuroscience Unit, University College London
17 Queen Square, London WC1N 3AR, United Kingdom

Abstract

It has been shown that the receptive fields of simple cells in V1 can be explained by assuming optimal encoding, provided that an extra constraint of sparseness is added. This finding suggests that there is a reason, independent of optimal representation, for sparseness. However, this work used an ad hoc model for the noise. Here I show that, if a biologically more plausible noise model, describing neurons as Poisson processes, is used, sparseness does not have to be added as a constraint. Thus I conclude that sparseness is not a feature that evolution has striven for, but is simply the result of the evolutionary pressure towards an optimal representation.

1 Introduction

Recently there has been a resurgence of interest in using optimal coding strategies to 'explain' the response properties of neurons in the primary sensory areas [1]. Notably, this approach was used by Olshausen and Field [2] to infer the receptive fields of simple cells in the primary visual cortex. To arrive at the correct results, however, they had to add sparseness of activity as an extra constraint. Others have shown that similar results are obtained if one assumes that the neurons represent independent components of natural stimuli [3]. The fact that these studies need to impose an extra constraint suggests strongly that the subsequent processing of the stimuli uses either sparseness or independence of the neuronal activity. It is therefore highly important to determine whether these constraints are really necessary. Here it will be argued that the necessity of the sparseness constraint in these models is due to modeling the noise in the system incorrectly.
Modeling the noise in a biologically more plausible way leads to a representation of the input in which the sparseness of the activity naturally follows from the optimality of the representation.

2 Gaussian Noise

Several approaches have been used to find an output that represents the input optimally, for example, minimizing the square difference between the input and its reconstruction. In this paper I will concentrate on a different definition of optimality: I require that the mutual information between the input and output is maximized. If the number of output units is at least equal to the dimensionality of the input space, a perfect reconstruction of the input is possible, unless there is noise in the system. So for an (over-)complete representation, optimal encoding only makes sense in the presence of noise. It is important to note that the optimal solution depends on the model of noise that is taken, even if one takes the limit where the noise goes to zero. Thus it is important to have an adequate noise model. Most optimal coding schemes describe the neuronal output by an input-dependent mean to which Gaussian noise is added. This is, roughly speaking, also the implicit assumption in an optimization procedure in which the mean square reconstruction error is minimized, but it is also often used explicitly when the mutual information is maximized. It is instructive to see, in the latter case, why one needs to impose extra constraints to obtain unambiguous results. Assume the input s has dimension N_i and is drawn from a distribution p(s). There are N_o ≥ N_i output neurons whose rates r satisfy

r = Ws + σε,  (1)

where ε is an N_o-dimensional univariate Gaussian with zero mean, p_ε(ε) = (2π)^{−N_o/2} exp(−ε^T ε/2) (the superscript T denotes the transpose). The task is to find the N_o × N_i matrix W_m that maximizes the mutual information I_M between r and s, defined by [4]

I_M(r, s) = ∫dr ∫ds p(r, s) { log[p(r|s)] − log[∫ds' p(r, s')] }.
(2)

Here p(r, s) is the joint probability distribution of r and s, and p(r|s) is the conditional probability of r given s. It is immediately clear that replacing W by cW with c > 1 increases the mutual information by effectively reducing the noise by a factor 1/c. Thus maximal mutual information is obtained as the rates become infinite. Thus, to get sensible results, a constraint has to be placed on the rates. A natural constraint in this framework is a constraint on the average square rates, ⟨r^T r⟩ = N_o R_0². Here I have used ⟨...⟩ to denote the average over the noise and inputs, and R_0² > σ² is the mean square rate. Under this constraint, however, the optimal solution is still vastly degenerate. Namely, if W_m is a matrix that gives the maximum mutual information, then for any unitary matrix U (U^T U = 1), UW_m will also maximize I_M. This is straightforward to show. For r = W_m s + σε the mutual information is given by

I_M(r, s; W_m) = ∫dr ∫ds p(s) p_ε((r − W_m s)/σ) { log[p_ε((r − W_m s)/σ)] − log[∫ds' p(s') p_ε((r − W_m s')/σ)] },  (3)

where I have used I_M(r, s; W) to denote the mutual information when the matrix W is used. In the case where r satisfies r = UW_m s + σε, the mutual information is given by equation 3, with W_m replaced by UW_m. Changing variables from r to r' = U^T r and using |det(U)| = 1, this can be rewritten as

∫dr' ∫ds p(s) p_ε(U(r' − W_m s)/σ) { log[p_ε(U(r' − W_m s)/σ)] − log[∫ds' p(s') p_ε(U(r' − W_m s')/σ)] }.  (4)

Because p_ε(ε) is a function of ε^T ε only, p_ε(Uε) = p_ε(ε), and therefore I_M(r, s; UW_m) = I_M(r, s; W_m). In other words, because we have assumed a model in which the noise is described by independent Gaussians, or generally that the distribution of the noise ε is a function of ε^T ε only, the mutual information is invariant to unitary transformations of the output. Clearly, this degeneracy is a result of the particular choice of the noise statistics and unlikely to survive when we try to account for biologically observed noise more accurately.
In the latter case it may well happen that the degeneracy is broken in such a way that maximizing the mutual information with a constraint on the average rates is itself sufficient to obtain a sparse representation.

3 Poisson Noise

To obtain robust insight into this issue, it is important that the system can be treated analytically. The desire for biological plausibility of the system should therefore be balanced by the desire to keep it simple. Ubiquitous features found in electrophysiological experiments likely to be of importance are (see for example [5]): i) Neurons transmit information through spikes. ii) Consecutive inter-spike intervals are at best weakly correlated. iii) With repeated presentation of the stimulus, the variance in the number of spikes a neuron emits over a given period varies nearly linearly with the mean number of emitted spikes. A simple model that captures all these features of the biological system is the Poisson process [6]. I will thus consider a system in which the neurons are described by such a process. The general model is as follows: The inputs are given by an N_i-dimensional vector s drawn from a distribution p(s). These give rise to N_o inputs u into the cells, which satisfy u = Ws, where W is the coupling matrix. The inputs u are transformed into rates through a transfer function g, r_i = g(u_i). The output of the network is observed for a time T. Optimal encoding of the input is defined by the maximal mutual information between the spikes the neurons emit and the stimulus. Let n_i be the total number of spikes for neuron i and n the N_o-dimensional array of spike counts; then p(n|r) = ∏_i (r_i T)^{n_i} exp(−r_i T)/n_i!. Optimal coding is achieved by determining W_m such that

W_m = argmax_W (I_M(n, s; W)).  (5)

As before, there is need for a constraint on W so that solutions with infinite rates are excluded.
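The spike-count likelihood p(n|r) = ∏_i (r_i T)^{n_i} e^{−r_i T}/n_i! is easy to simulate. The sketch below (toy sizes, coupling matrix and stimulus are invented for illustration) draws repeated spike counts for a small linear-threshold network and checks the Poisson signature behind feature iii): for a fixed stimulus, the count variance equals the count mean.

```python
import numpy as np

rng = np.random.default_rng(0)
Ni, No, T, trials = 2, 3, 1.0, 200_000

W = rng.normal(size=(No, Ni))        # toy coupling matrix
s = rng.normal(size=Ni)              # one fixed stimulus
u = W @ s
r = np.maximum(u, 0.0)               # threshold-linear transfer g

# Repeated presentations of the same stimulus: n_i ~ Poisson(r_i * T).
n = rng.poisson(r * T, size=(trials, No))

mean_n = n.mean(axis=0)
var_n = n.var(axis=0)
```

Up to sampling error, mean_n ≈ r·T and var_n ≈ mean_n; for a Poisson process the variance-vs-mean relation of feature iii) is exactly linear with slope one.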
Whereas with Gaussian channels fixing the mean square rates is the most natural choice for the constraint, for Poissonian neurons it is more natural to fix the mean number of emitted spikes, Σ_i ⟨r_i⟩ = N_o R_0. By rescaling time we can, without loss of generality, assume that R_0 = 1.

4 A Simple Example

Figure 1: p_m, the value of p that maximizes the mutual information, as a function of the measuring time-window T.

The simplest example in which we can consider whether such systems will lead to a sparse representation is a system with a single neuron and a 1-dimensional input, which is uniformly distributed between 0 and 1. Assume that the unit has output rate 0 when the input satisfies s < 1 − p, and rate 1/p if s > 1 − p. Because the neuron is either 'on' or 'off', maximal information about its state can be obtained by checking whether it fired one or more spikes, or did not fire at all, in the time-window over which the neuron was observed. If the neuron is 'on', the probability that it does not spike in a time T is exp(−T/p); otherwise it is 1. Thus the probability distribution is

p(0, s) = 1 − (1 − e^{−T/p}) Θ(s − 1 + p),  p(1+, s) = (1 − e^{−T/p}) Θ(s − 1 + p),  (6)
For time intervals that are smaller than or on the order of the mean 1: inter-spike interval the former dominates and leads to an optimal solution in which the neuron is, with a high probability, quiet, or, with a low probability, fires vigorously. Thus in this system the neurons fire sparsely if the measuring time is sufficiently short. 5 A More Interesting Example Somewhat closer to the problem of optimal encoding in VI, but still tractable, is the following example. A two-dimensional input 8 is drawn from a distribution p( 8) given by p(Sl, S2) = ~ (8(sl)e-ls21/2 + e-ISll/28(S2)) . (8) This input is encoded by four neurons, the inputs into these neurons are given by ( U1 ) _ 1 (cos(¢) cS10'ns((~)) ( SS21 ) , (9) U2 I cos(¢)I + I sin(¢)I - sin(¢) 'I' U3 = -U1, and U4 = -U2. The rates Ti satisfy Ti = (Ui)+ == (Ui + IUil)/2, the threshold linear function. Due to the symmetry of the problem, rotation by a multiple of 7l' /2 leads to the same rates, up to a permutation. Thus we can restrict ourselves to 0 ::::; ¢ < 7l' /2. It is straightforward to show that Li < ni >= 4, and that sparseness of the activity, here defined by Li« nf > - < ni >2)/ < ni >2, has its minimum for ¢ = 7l'/4, and o.7.------,.-----r-----r-------, 0.6 0.5 0.2 0.1 00~----~0~.4,----r0~.8,--~--~1r..2~----r1.6 Figure 2: Mutual information 1M as function of </J, for T = 1 (solid line), for T = 2 (dashed line), and T = 3 (dotted line). maximum for </J = O. Some straightforward algebra shows that the mutual information is given by IM(n, 8; </J) = ,/,T + log(l + T) - ~ C: T) n [log C: T) + 1 ( I COS(</J) I )nl (1 ISin(</J) In) l+T I COS(</J) I + I sin(</J) I og + COS(</J) + 1 ~ T (I COS(~;~~~)s~n(</J)I) n log (1 + I :~:~:? In) ] (10) Figure 2 shows the 1M as a function of </J for different values of T. For all values of T the mutual information is maximal when </J = 0 and minimal for 7f I 4. For large T the angular dependent part of 1M scales as liT. 
So this dependence becomes negligible if the output is observed for a long time. Yet, as in the previous example, for relatively short time-windows optimal coding automatically leads to sparse coding. 6 Optimal Coding in V1 Finally we turn to optimal encoding of images in the striate cortex. To study this I consider a system with a large number, $K$, of natural images. The intensity of pixel $j$ ($1 \le j \le N_i$) of image $\kappa$ is given by $s_j(\kappa)$. These images induce an input $u_i$ into neuron $i$ ($1 \le i \le N_o$) given by $u_i(\kappa) = \sum_j W_{ij} s_j(\kappa)$, (11) which leads to firing rates $r_i(\kappa)$ which satisfy $r_i(\kappa) = \beta^{-1}\log(1 + e^{\beta u_i(\kappa)})$. Here I have used a smoothed version of the threshold linear function (which is recovered in the limit $\beta \to \infty$) to ensure that its derivative with respect to $W_{ij}$ is continuous. The neurons fire in a Poissonian manner, so that for image $\kappa$ the probability $p(n, \kappa)$ of observing $n_i$ spikes for neuron $i$ is given by $p(n, \kappa) = \prod_i \frac{(r_i(\kappa)T)^{n_i}}{n_i!}\, e^{-r_i(\kappa)T}$. (12) We want to choose the matrix $W$ such that the mutual information $I_M$ between the image $\kappa$ and the spike counts of the different neurons, given by $I_M(n, \kappa; W) = \frac{1}{K}\sum_n \sum_\kappa p(n, \kappa; W)\left[\log p(n, \kappa; W) - \log p(n; W)\right]$, (13) is maximized. Obviously an analytic solution is out of the question, but one may want to try to approach the optimal solution by gradient ascent, using the derivative of the mutual information. Here the derivative of $p(n, \kappa)$ is given by $\partial p(n, \kappa)/\partial W_{ij} = s_j(\kappa)\left(1 - e^{-\beta r_i(\kappa)}\right)\left(n_i/r_i(\kappa) - T\right) p(n, \kappa)$. (14) The constraint on the rates, $K^{-1}\sum_\kappa \sum_i r_i(\kappa) = N_o$, is incorporated by adding it with a Lagrange multiplier to the objective function. Unfortunately the gradient ascent approach is impractical, since the summation over $n$ scales exponentially with $N_o$. In any case, one may want to use stochastic gradient ascent to avoid getting trapped in local minima. But to do stochastic gradient ascent it is sufficient to obtain an unbiased estimate of $\partial I_M/\partial W_{ij}$.
Denoting this derivative by $\partial I_M/\partial W_{ij} = \sum_n F_{ij}(n)$, where $F_{ij}$ has the obvious meaning, one can rewrite the derivative of the mutual information as $\frac{\partial I_M}{\partial W_{ij}} = \sum_n \bar{p}(n)\,\frac{F_{ij}(n)}{\bar{p}(n)}$, (15) provided that $\bar{p}(n)$ is non-zero for every $n$ for which $p(n; W) \ne 0$. An unbiased estimate of $\partial I_M/\partial W_{ij}$ (denoted by $\widehat{\partial I_M}/\partial W_{ij}$) is obtained by taking $\frac{\widehat{\partial I_M}}{\partial W_{ij}} = \frac{1}{L}\sum_{\ell=1}^{L} \frac{F_{ij}(n^{(\ell)})}{\bar{p}(n^{(\ell)})}$, (16) where the $L$ vectors $n^{(\ell)}$ are drawn independently from the distribution $\bar{p}(n)$. Conjecturing that $F_{ij}(n)$ is roughly proportional to $p(n; W)$, I set $\bar{p}(n) = p(n; W)$ to obtain the best estimate of $\partial I_M/\partial W_{ij}$ for fixed $L$. Drawing from $p(n; W)$ can be done in a computationally cheap way by first randomly picking $\kappa$ and then drawing from $p(n, \kappa; W)$, which factorizes. In the simulation of the system I have used the natural image collection from [7]. From each of the approximately 4000 images a 10 x 10 patch was taken. These were preprocessed by subtracting the mean and whitening the image. To reduce the effects of the corners a circular region was then extracted from the images. This resulted in an input which has 80 pixel intensities per image. These pixel intensities were encoded by 160 neurons. The coupling matrix was initialized by drawing its components independently from a Gaussian distribution and rescaling the matrix to normalize $\langle r_i \rangle$. The time-window $T$ was chosen to be $T = 0.5$, and $\beta$ was gradually increased from its initial value of $\beta = 1$ to $\beta = 10$. The coupling matrix was updated using a learning rate $\epsilon = 10^{-4}$ and $L = 10$. Figure 3 shows some of the receptive fields that were obtained after the system had approached the fixed point, i.e., the running average of the mutual information no longer increased. These receptive fields look rather similar to those obtained from simple cells in the striate cortex. However a more thorough analysis of these receptive fields and the sparseness of the rate distribution still has to be undertaken.
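The estimator in eqs. (15)-(16) is ordinary importance sampling of a sum. A toy numerical check, with a made-up scalar $F(n)$ on a small state space standing in for $F_{ij}$, and a uniform sampling distribution standing in for $\bar{p}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for F_ij(n) on a 10-element space of spike-count vectors.
F = 0.1 * np.sin(np.arange(10)) + 0.2
p_bar = np.full(10, 0.1)            # sampling distribution \bar{p}(n), here uniform

true_sum = F.sum()                  # the target quantity:  sum_n F(n)

# Eq. (16): draw n^(l) ~ \bar{p} and average F(n^(l)) / \bar{p}(n^(l)).
L = 200_000
idx = rng.choice(10, size=L, p=p_bar)
estimate = np.mean(F[idx] / p_bar[idx])
```

The estimator is unbiased for any valid $\bar{p}$; choosing $\bar{p}$ roughly proportional to the summand, as the text proposes, only reduces its variance.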
Figure 3: Forty examples of receptive fields that show clear Gabor-like structure. 7 Discussion I have shown why optimal encoding using Gaussian channels naturally leads to highly degenerate solutions necessitating extra constraints. Using the biologically more plausible noise model which describes the neurons by Poisson processes naturally leads to a sparse representation, when optimal encoding is enforced, for two analytically tractable models. For a model of the striate cortex Poisson statistics also leads to a network in which the receptive fields of the neurons mimic those of V1 simple cells, without the need to impose sparseness. This leads to the conclusion that sparseness is not an independent constraint that is imposed by evolutionary pressure, but rather is a consequence of optimal encoding. References [1] Baddeley, R., Hancock, P., and Foldiak, P. (eds.) (2000) Information Theory and the Brain. (Cambridge University Press, Cambridge). [2] Olshausen, B.A. and Field, D.J. (1996) Nature 381:607; (1998) Vision Research 37:3311. [3] Bell, A.J. and Sejnowski, T.J. (1997) Vision Res. 37:3327; van Hateren, J.H. and van der Schaaf, A. (1998) Proc. R. Soc. Lond. B 265:359. [4] Cover, T.M. and Thomas, J.A. (1991) Information Theory (Wiley and Sons, New York). [5] Richmond, B.J., Optican, L.M., and Spitzer, H. (1990) J. Neurophysiol. 64:351; Rolls, E.T., Critchley, H.D., and Treves, A. (1996) J. Neurophysiol. 75:1982; Dean, A.F. (1981) Exp. Brain Res. 44:437. [6] Smith, W.L. (1951) Biometrika 46:1. [7] van Hateren, J.H. and van der Schaaf, A. (1998) Proc. R. Soc. Lond. B 265:359-366.
| 2000 | 135 | 1,793 |
Periodic Component Analysis: An Eigenvalue Method for Representing Periodic Structure in Speech Lawrence K. Saul and Jont B. Allen {lsaul,jba}@research.att.com AT&T Labs, 180 Park Ave, Florham Park, NJ 07932 Abstract An eigenvalue method is developed for analyzing periodic structure in speech. Signals are analyzed by a matrix diagonalization reminiscent of methods for principal component analysis (PCA) and independent component analysis (ICA). Our method-called periodic component analysis (πCA)-uses constructive interference to enhance periodic components of the frequency spectrum and destructive interference to cancel noise. The front end emulates important aspects of auditory processing, such as cochlear filtering, nonlinear compression, and insensitivity to phase, with the aim of approaching the robustness of human listeners. The method avoids the inefficiencies of autocorrelation at the pitch period: it does not require long delay lines, and it correlates signals at a clock rate on the order of the actual pitch, as opposed to the original sampling rate. We derive its cost function and present some experimental results. 1 Introduction Periodic structure in the time waveform conveys important cues for recognizing and understanding speech[1]. At the end of an English sentence, for example, rising versus falling pitch indicates the asking of a question; in tonal languages, such as Chinese, it carries linguistic information. In fact, early in the speech chain-prior to the recognition of words or the assignment of meaning-the auditory system divides the frequency spectrum into periodic and non-periodic components. This division is geared to the recognition of phonetic features[2]. Thus, a voiced fricative might be identified by the presence of periodicity in the lower part of the spectrum, but not the upper part. In complicated auditory scenes, periodic components of the spectrum are further segregated by their fundamental frequency[3].
This enables listeners to separate simultaneous speakers and explains the relative ease of separating male versus female speakers, as opposed to two recordings of the same voice[4]. The pitch and voicing of speech signals have been extensively studied[5]. The simplest method to analyze periodicity is to compute the autocorrelation function on sliding windows of the speech waveform. The peaks in the autocorrelation function provide estimates of the pitch and the degree of voicing. In clean wideband speech, the pitch of a speaker can be tracked by combining a peak-picking procedure on the autocorrelation function with some form of smoothing[6], such as dynamic programming. This method, however, does not approach the robustness of human listeners in noise, and at best, it provides an extremely gross picture of the periodic structure in speech. It cannot serve as a basis for attacking harder problems in computational auditory scene analysis, such as speaker separation[7], which require decomposing the frequency spectrum into its periodic and non-periodic components. The correlogram is a more powerful method for analyzing periodic structure in speech. It looks for periodicity in narrow frequency bands. Slaney and Lyon[8] proposed a perceptual pitch detector that autocorrelates multichannel output from a model of the auditory periphery. The auditory model includes a cochlear filterbank and periodicity-enhancing nonlinearities. The information in the correlogram is summed over channels to produce an estimate of the pitch. This method has two compelling features: (i) by measuring autocorrelation, it produces pitch estimates that are insensitive to phase changes across channels; (ii) by working in narrow frequency bands, it produces estimates that are robust to noise. This method, however, also has its drawbacks. Computing multiple autocorrelation functions is expensive. 
To avoid aliasing in upper frequency bands, signals must be correlated at clock rates much higher than the actual pitch. From a theoretical point of view, it is unsatisfying that the combination of information across channels is not derived from some principle of optimality. Finally, in the absence of conclusive evidence for long delay lines (~10 ms) in the peripheral auditory system, it seems worthwhile-for both scientists and engineers-to study ways of detecting periodicity that do not depend on autocorrelation. In this paper, we develop an eigenvalue method for analyzing periodic structure in speech. Our method emulates important aspects of auditory processing but avoids the inefficiencies of autocorrelation at the pitch period. At the same time, it is highly robust to narrowband noise and insensitive to phase changes across channels. Note that while certain aspects of the method are biologically inspired, its details are not intended to be biologically realistic. 2 Method We develop the method in four stages. These stages are designed to convey the main technical ideas of the paper: (i) an eigenvalue method for combining and enhancing weakly periodic signals; (ii) the use of Hilbert transforms to compensate for phase changes across channels; (iii) the measurement of periodicity by efficient sinusoidal fits; and (iv) the hierarchical analysis of information across different frequency bands. 2.1 Cross-correlation of critical bands Consider the multichannel output of a cochlear filterbank. If the input to this filterbank consists of noisy voiced speech, the output will consist of weakly periodic signals from different critical bands. Can we combine these signals to enhance the periodic signature of the speaker's pitch? We begin by studying a mathematical idealization of the problem.
Given $n$ real-valued signals, $\{x_i(t)\}_{i=1}^{n}$, what linear combination $s(t) = \sum_i w_i x_i(t)$ maximizes the periodic structure at some fundamental frequency $f_0$, or equivalently, at some pitch period $\tau = 1/f_0$? Ideally, the linear combination should use constructive interference to enhance periodic components of the spectrum and destructive interference to cancel noise. We measure the periodicity of the combined signal by the cost function: $c(w, \tau) = \frac{\sum_t |s(t+\tau) - s(t)|^2}{\sum_t |s(t)|^2}$ with $s(t) = \sum_i w_i x_i(t)$. (1) Here, for simplicity, we have assumed that the signals are discretely sampled and that the period $\tau$ is an integer multiple of the sampling interval. The cost function $c(w, \tau)$ measures the normalized prediction error, with the period $\tau$ acting as a prediction lag. Expanding the right hand side in terms of the weights $w_i$ gives: $c(w, \tau) = \frac{\sum_{ij} w_i w_j A_{ij}(\tau)}{\sum_{ij} w_i w_j B_{ij}}$, (2) where the matrix elements $A_{ij}(\tau)$ are determined by the cross-correlations, $A_{ij}(\tau) = \sum_t \left[x_i(t)x_j(t) + x_i(t+\tau)x_j(t+\tau) - x_i(t)x_j(t+\tau) - x_i(t+\tau)x_j(t)\right]$, and the matrix elements $B_{ij}$ are the equal-time cross-correlations, $B_{ij} = \sum_t x_i(t)x_j(t)$. Note that the denominator and numerator of eq. (2) are both quadratic forms in the weights $w_i$. By the Rayleigh-Ritz theorem of linear algebra, the weights $w_i$ minimizing eq. (2) are given by the eigenvector of the matrix $B^{-1}A(\tau)$ with the smallest eigenvalue. For fixed $\tau$, this solution corresponds to the global minimum of the cost function $c(w, \tau)$. Thus, matrix diagonalization (or simply computing the bottom eigenvector, which is often cheaper) provides a definitive answer to the above problem. The matrix diagonalization which optimizes eq. (2) is reminiscent of methods for principal component analysis (PCA) and independent component analysis (ICA)[9]. Our method-which by analogy we call periodic component analysis (πCA)-uses an eigenvalue principle to combine periodicity cues from different parts of the frequency spectrum.
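A minimal numerical sketch of this eigenvalue computation, with synthetic signals and numpy standing in for an explicit diagonalization routine (none of the constants below are from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
tau, T = 20, 2000                       # pitch period (samples) and signal length
t = np.arange(T)

# Two noisy channels sharing a period-20 component, plus one pure-noise channel.
x1 = np.sin(2 * np.pi * t / tau) + 0.3 * rng.standard_normal(T)
x2 = np.cos(2 * np.pi * t / tau) + 0.3 * rng.standard_normal(T)
x3 = rng.standard_normal(T)
X = np.stack([x1, x2, x3], axis=1)      # shape (T, n)

# A_ij(tau) and B_ij from eq. (2), summing over t = 0 .. T - tau - 1.
D = X[tau:] - X[:-tau]
A = D.T @ D
B = X[:-tau].T @ X[:-tau]

# Weights = eigenvector of B^{-1} A(tau) with the smallest eigenvalue.
vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
w = np.real(vecs[:, np.argmin(vals.real)])

def cost(w):
    s = X @ w
    return np.sum((s[tau:] - s[:-tau]) ** 2) / np.sum(s[:-tau] ** 2)
```

By the Rayleigh-Ritz argument in the text, the combined stream can never score worse than any single channel, and here it also cancels much of the independent noise.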
2.2 Insensitivity to phase The eigenvalue method in the previous section has one obvious shortcoming: it cannot compensate for phase changes across channels. In particular, the real-valued linear combination $s(t) = \sum_i w_i x_i(t)$ cannot align the peaks of signals that are (say) $\pi/2$ radians out of phase, even though such an alignment-prior to combining the signals-would significantly reduce the normalized prediction error in eq. (1). A simple extension of the method overcomes this shortcoming. Given real-valued signals, $\{x_i(t)\}$, we consider the analytic signals, $\{\hat{x}_i(t)\}$, whose imaginary components are computed by Hilbert transforms[10]. The Fourier series of these signals are related by: $x_i(t) = \sum_k \alpha_k \cos(\omega_k t + \phi_k) \iff \hat{x}_i(t) = \sum_k \alpha_k e^{i(\omega_k t + \phi_k)}$. (3) We now reconsider the problem of the previous section, looking for the linear combination of analytic signals, $\hat{s}(t) = \sum_i w_i \hat{x}_i(t)$, that minimizes the cost function in eq. (1). In this setting, moreover, we allow the weights $w_i$ to be complex so that they can compensate for phase changes across channels. Eq. (2) generalizes in a straightforward way to: $c(w, \tau) = \frac{\sum_{ij} w_i^* w_j A_{ij}(\tau)}{\sum_{ij} w_i^* w_j B_{ij}}$, (4) where $A(\tau)$ and $B$ are Hermitian matrices with matrix elements $A_{ij}(\tau) = \sum_t \left[\hat{x}_i^*(t)\hat{x}_j(t) + \hat{x}_i^*(t+\tau)\hat{x}_j(t+\tau) - \hat{x}_i^*(t)\hat{x}_j(t+\tau) - \hat{x}_i^*(t+\tau)\hat{x}_j(t)\right]$ and $B_{ij} = \sum_t \hat{x}_i^*(t)\hat{x}_j(t)$. Again, the optimal weights $w_i$ are given by the eigenvector corresponding to the smallest eigenvalue of the matrix $B^{-1}A(\tau)$. (Note that all the eigenvalues of this matrix are real because the matrices are Hermitian.) Our analysis so far suggests a simple-minded approach to investigating periodic structure in speech. In particular, consider the following algorithm for pitch tracking. The first step of the algorithm is to pass speech through a cochlear filterbank and compute analytic signals, $\hat{x}_i(t)$, via Hilbert transforms. The next step is to diagonalize the matrices $B^{-1}A(\tau)$ on sliding windows of $\hat{x}_i(t)$ over a range of pitch periods, $\tau \in [\tau_{min}, \tau_{max}]$.
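The analytic signals can be computed with the standard FFT construction of the Hilbert transform (zero the negative frequencies, double the positive ones). This helper is a sketch equivalent in spirit to `scipy.signal.hilbert`, written here with numpy only:

```python
import numpy as np

def analytic(x):
    # Analytic signal: real part is x, imaginary part its Hilbert transform.
    N = len(x)                   # assumes even N for simplicity
    Xf = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = h[N // 2] = 1.0       # DC and Nyquist bins kept once
    h[1:N // 2] = 2.0            # positive frequencies doubled
    return np.fft.ifft(Xf * h)   # negative frequencies are zeroed
```

For a pure cosine with an integer number of cycles per window, this returns exactly the corresponding complex exponential, which is the correspondence stated in eq. (3).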
The final step is to estimate the pitch periods by the values of $\tau$ that minimize the cost function, eq. (1), for each sliding window. One might expect such an algorithm to be relatively robust to noise (because it can zero the weights of corrupted channels), as well as insensitive to phase changes across channels (because it can absorb them with complex weights). Despite these attractive features, the above algorithm has serious deficiencies. Its worst shortcoming is the amount of computation needed to estimate the pitch period, $\tau$. Note that the analysis step requires computing $n^2$ cross-correlation functions, $\sum_t \hat{x}_i^*(t)\hat{x}_j(t+\tau)$, and diagonalizing the $n \times n$ matrix, $B^{-1}A(\tau)$. This step is unwieldy for three reasons: (i) the burden of recomputing cross-correlations for different values of $\tau$, (ii) the high sampling rates required to avoid aliasing in upper frequency bands, and (iii) the poor scaling with the number of channels, $n$. We address these concerns in the following sections. 2.3 Extracting the fundamental Further signal processing is required to create multichannel output whose periodic structure can be analyzed more efficiently. Our front end, shown in Fig. 1, is designed to analyze voiced speech with fundamental frequencies in the range $f_0 \in [f_{min}, f_{max}]$, where $f_{max} < 2f_{min}$. The one-octave restriction on $f_0$ can be lifted by considering parallel, overlapping implementations of our front end for different frequency octaves. The stages in our front end are inspired by important aspects of auditory processing[10]. Cochlear filtering is modeled by a Bark scale filterbank with contiguous passbands. Next, we compute narrowband envelopes by passing the outputs of these filters through two nonlinearities: half-wave rectification and cube-root compression. These operations are commonly used to model the compressive unidirectional response of inner hair cells to movement along the basilar membrane.
Evidence for comparison of envelopes in the peripheral auditory system comes from experiments on comodulation masking release[11]. Thus, the next stage of our front end creates a multichannel array of signals by pairwise multiplying envelopes from nearby parts of the frequency spectrum. Allowed pairs consist of any two envelopes, including an envelope with itself, that might in principle contain energy at two consecutive harmonics of the fundamental. Multiplying these harmonics-just like multiplying two sine waves-produces intermodulation distortion with energy at the sum and difference frequencies. The energy at the difference frequency creates a signature of "residue" pitch at $f_0$. The energy at the sum frequency is removed by bandpass filtering to frequencies $[f_{min}, f_{max}]$ and aggressively downsampling to a sampling rate $f_s = 4f_{min}$. Finally, we use Hilbert transforms to compute the analytic signal in each channel, which we call $\hat{x}_i(t)$. In sum, the stages of the front end create an array of bandlimited analytic signals, $\hat{x}_i(t)$, that-while derived from different parts of the frequency spectrum-have energy concentrated at the fundamental frequency, $f_0$. Note that the bandlimiting of these channels to frequencies $[f_{min}, f_{max}]$ where $f_{max} < 2f_{min}$ removes the possibility that a channel contains periodic energy at any harmonic other than the fundamental. In voiced speech, this has the effect that periodic channels contain noisy sine waves with frequency $f_0$. Figure 1: Signal processing in the front end: speech waveform; cochlear filterbank; half-wave rectification and cube-root compression; pairwise multiplication; bandlimiting and downsampling; computation of analytic signals. How can we combine these "baseband" signals to enhance the periodic signature of a speaker's pitch? The nature of these signals leads to an important simplification of the problem. As opposed to measuring the autocorrelation at lag $\tau$, as in eq.
(1), here we can measure the periodicity of the combined signal by a simple sinusoidal fit. Let $\Delta = 2\pi f_0/f_s$ denote the phase accumulated per sample by a sine wave with frequency $f_0$ at sampling rate $f_s$, and let $\hat{s}(t) = \sum_i w_i \hat{x}_i(t)$ denote the combined signal. We measure the periodicity of the combined signal by $c(w, \Delta) = \frac{\sum_t |\hat{s}(t+1) - \hat{s}(t)e^{i\Delta}|^2}{\sum_t |\hat{s}(t)|^2} = \frac{\sum_{ij} w_i^* w_j A_{ij}(\Delta)}{\sum_{ij} w_i^* w_j B_{ij}}$, (5) where the matrix $B$ is again formed by computing equal-time cross-correlations, and the matrix $A(\Delta)$ has elements $A_{ij}(\Delta) = \sum_t \left[\hat{x}_i^*(t)\hat{x}_j(t) + \hat{x}_i^*(t+1)\hat{x}_j(t+1) - e^{-i\Delta}\hat{x}_i^*(t)\hat{x}_j(t+1) - e^{i\Delta}\hat{x}_i^*(t+1)\hat{x}_j(t)\right]$. For fixed $\Delta$, the optimal weights $w_i$ are given by the eigenvector corresponding to the smallest eigenvalue of the matrix $B^{-1}A(\Delta)$. Note that optimizing the cost function in eq. (5) over the phase, $\Delta$, is equivalent to optimizing over the fundamental frequency, $f_0$, or the pitch period, $\tau$. The structure of this cost function makes it much easier to optimize than the earlier measure of periodicity in eq. (1). For instance, the matrix elements $A_{ij}(\Delta)$ depend only on the equal-time and one-sample-lagged cross-correlations, which do not need to be recomputed for different values of $\Delta$. Also, the channels $\hat{x}_i(t)$ appearing in this cost function are sampled at a clock rate on the order of $f_0$, as opposed to the original sampling rate of the speech. Thus, the few cross-correlations that are required can be computed with many fewer operations. These properties lead to a more efficient algorithm than the one in the previous section. The improved algorithm, working with baseband signals, estimates the pitch by optimizing eq. (5) over $w$ and $\Delta$ for sliding windows of $\hat{x}_i(t)$. One problem still remains, however-the need to invert and diagonalize large numbers of $n \times n$ matrices, where the number of channels, $n$, may be prohibitively large. This final obstacle is removed in the next section.
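A sketch of this baseband search, with synthetic analytic channels; the grid search over $\Delta$ and all constants are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, T = 400.0, 2000                    # sampling rate ~4*f_min, as in the front end
f_true = 100.0                          # underlying fundamental
t = np.arange(T)
d_true = 2 * np.pi * f_true / fs        # phase advance per sample

# Two baseband analytic channels: same tone, different phases, independent noise.
noise = lambda: 0.3 * (rng.standard_normal(T) + 1j * rng.standard_normal(T))
X = np.stack([np.exp(1j * d_true * t),
              np.exp(1j * (d_true * t + 1.1))], axis=1) \
    + np.stack([noise(), noise()], axis=1)

# Equal-time and one-sample-lag statistics; A(Delta) needs nothing else (eq. 5).
X0, X1 = X[:-1], X[1:]
C0, C0b = X0.conj().T @ X0, X1.conj().T @ X1
C1 = X0.conj().T @ X1
B = C0

def min_cost(delta):
    A = C0 + C0b - np.exp(-1j * delta) * C1 - np.exp(1j * delta) * C1.conj().T
    return np.linalg.eigvals(np.linalg.solve(B, A)).real.min()

grid = 2 * np.pi * np.linspace(80, 140, 601) / fs
f_est = grid[np.argmin([min_cost(d) for d in grid])] * fs / (2 * np.pi)
```

The cross-correlations are computed once; only the cheap 2 x 2 eigenvalue problem is re-solved for each candidate $\Delta$, and the minimizing $\Delta$ recovers the fundamental.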
2.4 Hierarchical analysis We have developed a fast recursive algorithm to locate a good approximation to the minimum of eq. (5). The recursive algorithm works by constructing and diagonalizing 2 x 2 matrices, as opposed to the n x n matrices required for an exact solution. Our approximate algorithm also provides a hierarchical analysis of the frequency spectrum that is interesting in its own right. A sketch of the algorithm is given below. The base step of the recursion estimates a value $\Delta_i$ for each individual channel by minimizing the error of a sinusoidal fit: $c_i(\Delta_i) = \frac{\sum_t |\hat{x}_i(t+1) - \hat{x}_i(t)e^{i\Delta_i}|^2}{\sum_t |\hat{x}_i(t)|^2}$. (6) The minimum of the right hand side can be computed by setting its derivative to zero and solving a quadratic equation in the variable $e^{i\Delta_i}$. If this minimum does not correspond to a legitimate value of $f_0 \in [f_{min}, f_{max}]$, the $i$th channel is discarded from future analysis, effectively setting its weight $w_i$ to zero. Otherwise, the algorithm passes three arguments to a higher level of the recursion: the values of $\Delta_i$ and $c_i(\Delta_i)$, and the channel $\hat{x}_i(t)$ itself. The recursive step of the algorithm takes as input two auditory "substreams", $\hat{s}_l(t)$ and $\hat{s}_u(t)$, derived from "lower" and "upper" parts of the frequency spectrum, and returns as output a single combined stream, $\hat{s}(t) = w_l \hat{s}_l(t) + w_u \hat{s}_u(t)$. Figure 2: Measures of pitch ($f_0$) and periodicity ($c_i$) in nested regions of the frequency spectrum. The nodes in this tree describe periodic structure in the vowel /u/ from 400-1080 Hz. The nodes in the first (bottom) layer describe periodicity cues in individual channels; the nodes in the $k$th layer measure cues integrated across $2^{k-1}$ channels. In the first step of the recursion, the substreams correspond to individual channels $\hat{x}_i(t)$, while in the $k$th step, they correspond to weighted combinations of $2^{k-1}$ channels. Associated with the substreams are phases, $\Delta_l$ and $\Delta_u$, corresponding to estimates of $f_0$ from different parts of the frequency spectrum.
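For a single channel the minimization in eq. (6) has a simple solution: the optimal $e^{i\Delta_i}$ is the phase of the one-sample-lag autocorrelation (this is what the quadratic condition mentioned above yields). The helper below is an illustrative sketch, not code from the paper, including the discard rule for out-of-band channels:

```python
import numpy as np

def base_step(x, fs, f_min, f_max):
    # Optimal phase advance per sample for one analytic channel:
    # Delta_i = angle( sum_t x*(t) x(t+1) ), which minimizes eq. (6) over Delta.
    delta = np.angle(np.sum(np.conj(x[:-1]) * x[1:]))
    f_hat = delta * fs / (2 * np.pi)
    if not (f_min <= f_hat <= f_max):
        return None                      # channel discarded (weight set to zero)
    return f_hat

fs = 400.0
t = np.arange(2000)
tone = np.exp(1j * 2 * np.pi * 110.0 / fs * t)   # channel carrying f0 = 110 Hz
```

A channel whose best-fit frequency falls outside $[f_{min}, f_{max}]$ is dropped, while a channel carrying a legitimate tone passes its $\Delta_i$ up the recursion.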
The combined stream is formed by optimizing eq. (5) over the two-component weight vector, $w = [w_l, w_u]$. Note that the eigenvalue problem in this case involves only a 2 x 2 matrix, as opposed to an n x n matrix. The value of $\Delta$ determines the period of the combined stream; in practice, we optimize it over the interval defined by $\Delta_l$ and $\Delta_u$. Conveniently, this interval tends to shrink at each level of the recursion. The algorithm works in a bottom-up fashion. Channels are combined pairwise to form streams, which are in turn combined pairwise to form new streams. Each stream has a pitch period and a measure of periodicity computed by optimizing eq. (5). We order the channels so that streams are derived from contiguous (or nearly contiguous) parts of the frequency spectrum. Fig. 2 shows partial output of this recursive procedure for a windowed segment of the vowel /u/. Note how as one ascends the tree, the combined streams have greater periodicity and less variance in their pitch estimates. This shows explicitly how the algorithm integrates information across narrow frequency bands of speech. The recursive output also suggests a useful representation for studying problems, such as speaker separation, that depend on grouping different parts of the spectrum by their estimates of $f_0$. 3 Experiments We investigated the performance of our algorithm in simple experiments on synthesized vowels. Fig. 3 shows results from experiments on the vowel /u/. The pitch contours in these plots were computed by the recursive algorithm in the previous section, with $f_{min}$ = 80 Hz, $f_{max}$ = 140 Hz, and 60 ms windows shifted in 10 ms intervals. The solid curves show the estimated pitch contour for the clean wideband waveform, sampled at 8 kHz. The left panel shows results for filtered versions of the vowel, bandlimited to four different frequency octaves. These plots show that the algorithm can extract the pitch from different parts of the frequency spectrum.
The right panel shows the estimated pitch contours for the vowel in 0 dB white noise and four types of -20 dB bandlimited noise. The signal-to-noise ratios were computed from the ratio of (wideband) speech energy to noise energy. The white noise at 0 dB presents the most difficulty; by contrast, the bandlimited noise leads to relatively few failures, even at -20 dB. Overall, the algorithm is quite robust to noise and filtering. (Note that the particular frequency octaves used in these experiments had no special relation to the filters in our front end.) The pitch contours could be further improved by some form of smoothing, but this was not done for the plots shown. Figure 3: Tracking the pitch of the vowel /u/ in corrupted speech. Left panel: bandlimited speech (wideband; 250-500 Hz; 500-1000 Hz; 1000-2000 Hz; 2000-4000 Hz). Right panel: noisy speech (clean; 0 dB white noise; -20 dB noise in each of the same four bands). Both panels plot pitch versus time (sec). 4 Discussion Many aspects of this work need refinement. Perhaps the most important is the initial filtering into narrow frequency bands. While narrow filters have the ability to resolve individual harmonics, overly narrow filters-which reduce all speech input to sine waves-do not adequately differentiate periodic versus noisy excitation. We hope to replace the Bark scale filterbank in Fig. 1 by one that optimizes this tradeoff. We also want to incorporate adaptation and gain control into the front end, so as to improve the performance in nonstationary listening conditions. Finally, beyond the problem of pitch tracking, we intend to develop the hierarchical representation shown in Fig. 2 for harder problems in phoneme recognition and speaker separation[7].
These harder problems seem to require a method, like ours, that decomposes the frequency spectrum into its periodic and non-periodic components. References [1] Stevens, K. N. 1999. Acoustic Phonetics. MIT Press: Cambridge, MA. [2] Miller, G. A. and Nicely, P. E. 1955. An analysis of perceptual confusions among some English consonants. Journal of the Acoustical Society of America 27, 338-352. [3] Bregman, A. S. 1994. Auditory Scene Analysis: the Perceptual Organization of Sound. MIT Press: Cambridge, MA. [4] Brokx, J. P. L. and Noteboom, S. G. 1982. Intonation and the perceptual separation of simultaneous voices. J. Phonetics 10, 23-26. [5] Hess, W. 1983. Pitch Determination of Speech Signals: Algorithms and Devices. Springer-Verlag. [6] Talkin, D. 1995. A Robust Algorithm for Pitch Tracking (RAPT). In Kleijn, W. B. and Paliwal, K. K. (Eds.), Speech Coding and Synthesis, 497-518. Elsevier Science. [7] Roweis, S. 2000. One microphone source separation. In Tresp, V., Dietterich, T., and Leen, T. (Eds.), Advances in Neural Information Processing Systems 13. MIT Press: Cambridge, MA. [8] Slaney, M. and Lyon, R. F. 1990. A perceptual pitch detector. In Proc. ICASSP-90, 1, 357-360. [9] Molgedey, L. and Schuster, H. G. 1994. Separation of a mixture of independent signals using time delayed correlations. Phys. Rev. Lett. 72(23), 3634-3637. [10] Hartmann, W. A. 1997. Signals, Sound, and Sensation. Springer-Verlag. [11] Hall, J. W., Haggard, M. P., and Fernandes, M. A. 1984. Detection in noise by spectro-temporal pattern analysis. J. Acoust. Soc. Am. 76, 50-56.
| 2000 | 136 | 1,794 |
Partially Observable SDE Models for Image Sequence Recognition Tasks Javier R. Movellan Institute for Neural Computation University of California San Diego Paul Mineiro Department of Cognitive Science University of California San Diego R. J. Williams Department of Mathematics University of California San Diego Abstract This paper explores a framework for recognition of image sequences using partially observable stochastic differential equation (SDE) models. Monte-Carlo importance sampling techniques are used for efficient estimation of sequence likelihoods and sequence likelihood gradients. Once the network dynamics are learned, we apply the SDE models to sequence recognition tasks in a manner similar to the way Hidden Markov models (HMMs) are commonly applied. The potential advantage of SDEs over HMMs is the use of continuous state dynamics. We present encouraging results for a video sequence recognition task in which SDE models provided excellent performance when compared to hidden Markov models. 1 Introduction This paper explores a framework for recognition of image sequences using partially observable stochastic differential equations (SDEs). In particular we use SDE models of low-power non-linear RC circuits with a significant thermal noise component. We call them diffusion networks. A diffusion network consists of a set of $n$ nodes coupled via a vector of adaptive impedance parameters $\lambda$ which are tuned to optimize the network's behavior. The temporal evolution of the $n$ nodes defines a continuous stochastic process $X$ that satisfies the following Ito SDE: $dX(t) = \mu(X(t), \lambda)\,dt + \sigma\, dB(t)$, (1) $X(0) \sim \nu$, (2) where $\nu$ represents the (stochastic) initial conditions and $B$ is standard Brownian motion. The drift is defined by a non-linear RC charging equation $\mu_j(X(t), \lambda) = \frac{1}{\kappa_j}\left(\xi_j + x_j(t) - \frac{1}{\rho_j} X_j(t)\right)$, for $j = 1, \cdots, n$, (3) where $\mu_j$ is the drift of unit $j$, i.e., the $j$th component of $\mu$.
Here $X_j$ is the internal potential at node $j$, $\kappa_j > 0$ is the input capacitance, $\rho_j$ the node resistance, $\xi_j$ a constant input current to the unit, and $x_j$ the net electrical current input to the node, $x_j(t) = \sum_{m=1}^{n} w_{j,m}\, \varphi(X_m(t))$, for $j = 1, \cdots, n$, (4) $\varphi(x) = \frac{1}{1 + e^{-x}}$, for all $x \in \mathbb{R}$, (5) where $\varphi$ is the input-output characteristic of the amplification, and $1/w_{j,m}$ is the impedance between the output $\varphi(X_m)$ and the node $j$. Figure 1: An illustration of the differences between stochastic differential equation models (SDE), ordinary differential equation models (ODE) and Hidden Markov Models (HMM). In ODEs the state dynamics are continuous and deterministic. In SDEs the state dynamics are continuous and stochastic. In HMMs the state dynamics are discrete and probabilistic. Intuition for equation (3) can be achieved by thinking of it as the limit of a discrete time stochastic difference equation, $X(t + \Delta t) = X(t) + \mu(X(t), \lambda)\Delta t + \sigma\sqrt{\Delta t}\, Z(t)$, (6) where $Z(t)$ is an $n$-dimensional vector of independent standard Gaussian random variables. For a fixed state at time $t$ there are two forces controlling the change in activation: the drift, which is deterministic, and the dispersion, which is stochastic (see Figure 1). This results in a distribution of states at time $t + \Delta t$. As $\Delta t$ goes to zero, the solution to the difference equation (6) converges to the diffusion process defined in (1). Figures 1 and 2 show the relationship between SDE models and other approaches in the neural network and the stochastic filtering literature. The main difference between ODE models, like standard recurrent neural networks, and SDE models is that the first has deterministic dynamics while the second has probabilistic dynamics. The two approaches are similar in that the states are continuous. The main difference between HMMs and SDEs is that the first have discrete state dynamics while the second have continuous state dynamics. The main similarity is that both are probabilistic. Kalman filters are linear SDE models. If the impedance matrix is symmetric and the network is given enough time to approximate stochastic equilibrium, diffusion networks behave like continuous Boltzmann machines (Ackley, Hinton & Sejnowski, 1985). If the network is discretized in state and time it becomes a standard HMM. Finally, if the dispersion constant is set to zero the network behaves like a deterministic recurrent neural network. In order to use SDE models we need a method for finding the likelihood and the likelihood gradient of observed sequences. Figure 2: Relationship between diffusion filters and other approaches in the neural network and stochastic filtering literature: Kalman-Bucy filters (linear dynamics), Boltzmann machines (stochastic equilibrium), recurrent neural networks (zero noise), and hidden Markov models (discrete space and time).
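Equation (6) is the Euler-Maruyama discretization, so a diffusion network can be simulated directly. The sketch below uses made-up parameters (two units, unit capacitances and resistances, a small symmetric coupling); none of these constants are from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 2
kappa = np.ones(n)                  # input capacitances kappa_j
rho = np.ones(n)                    # node resistances rho_j
xi = np.zeros(n)                    # constant input currents xi_j
W = np.array([[0.0, 0.5],           # couplings w_{j,m} (illustrative values)
              [0.5, 0.0]])

phi = lambda x: 1.0 / (1.0 + np.exp(-x))         # eq. (5)

def drift(X):
    # Eqs. (3)-(4): mu_j = (xi_j + sum_m w_{jm} phi(X_m) - X_j / rho_j) / kappa_j
    return (xi + W @ phi(X) - X / rho) / kappa

def simulate(X0, sigma, dt=0.01, steps=5000):
    X = X0.copy()
    for _ in range(steps):
        X = X + drift(X) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    return X

X_det = simulate(np.array([2.0, -2.0]), sigma=0.0)   # zero dispersion: ODE limit
X_sto = simulate(np.array([2.0, -2.0]), sigma=0.5)
```

With the dispersion set to zero the simulation reduces to a deterministic recurrent network and relaxes to the fixed point of the RC charging equation; with nonzero dispersion it produces a distribution of trajectories around it.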
The main similarity is that both are probabilistic. Kalman filters are linear SDE models. If the impedance matrix is symmetric and the network is given enough time to approximate stochastic equilibrium, diffusion networks behave like continuous Boltzmann machines (Ackley, Hinton & Sejnowski, 1985). If the network is discretized in state and time it becomes a standard HMM. Finally, if the dispersion constant is set to zero the network behaves like a deterministic recurrent neural network. In order to use SDE models we need a method for finding the likelihood and the likelihood gradient of observed sequences.

Figure 2: Relationship between diffusion filters and other approaches in the neural network and stochastic filtering literature. (Kalman-Bucy filters: linear dynamics; Boltzmann machines: stochastic equilibrium; recurrent neural networks: zero noise; hidden Markov models: discrete space and time.)

2 Observed sequence likelihoods

We regard the first d components of an SDE model as observable and denote them by O. The last n − d components are denoted by H and named unobservable or hidden. Hidden components are included for modeling non-Markovian dependencies in the observable components. Let Ω_o, Ω_h be the outcome spaces for the observable and hidden processes, and let Ω = Ω_o × Ω_h be the joint outcome space. Here each outcome ω is a continuous path ω : [0, T] → ℝⁿ. For each ω ∈ Ω, we write ω = (ω_o, ω_h), where ω_o represents the observable dimensions of the path and ω_h the hidden dimensions. Let Q^λ(A) represent the probability that a network with parameter λ generates paths in the set A, Q_o^λ(A_o) the probability that the observable components generate paths in A_o, and Q_h^λ(A_h) the probability that the hidden components generate paths in A_h.
To apply the familiar techniques of maximum likelihood and Bayesian estimation we use as reference the probability distribution of a diffusion network with zero drift, i.e., the paths generated by this network are Brownian motion scaled by σ. We denote such reference distribution as R, and its observable and hidden components as R_o, R_h. Using Girsanov's theorem (Karatzas & Shreve, 1991, p. 303) we have that

    L_o^λ(ω_o) = (dQ_o^λ / dR_o)(ω_o) = ∫_{Ω_h} L_{o,h}^λ(ω_o, ω_h) dR_h(ω_h),  ω_o ∈ Ω_o,   (7)

where

    L_{o,h}^λ(ω) = (dQ^λ / dR)(ω) = exp{ (1/σ²) ∫_0^T μ(ω(t), λ) · dω(t) − (1/2σ²) ∫_0^T |μ(ω(t), λ)|² dt }.   (8)

The first integral in (8) is an Ito stochastic integral, the second is a standard Lebesgue integral. The term L_o^λ is a Radon-Nikodym derivative that represents the probability density of Q_o^λ with respect to R_o. For a fixed path ω_o the term L_o^λ(ω_o) is a likelihood function of λ that can be used for maximum likelihood estimation. To obtain the likelihood gradient, we differentiate (7), which yields

    ∇_λ log L_o^λ(ω_o) = ∫_{Ω_h} L_{h|o}^λ(ω_h | ω_o) ∇_λ log L_{o,h}^λ(ω_o, ω_h) dR_h(ω_h),   (9)

where L_{h|o}^λ(ω_h | ω_o) = L_{o,h}^λ(ω_o, ω_h) / L_o^λ(ω_o), and where I^λ is the joint innovation process

    I^λ(t, ω) = ω(t) − ω(0) − ∫_0^t μ(ω(u), λ) du.

2.1 Importance sampling

The likelihood of observed paths (7) and the gradient of the likelihood (9) require averaging with respect to the distribution of hidden paths R_h. We estimate these averages using importance sampling in the space of sample paths. Instead of sampling from R_h we sample from a distribution that weights more heavily regions where L_{o,h}^λ is large. Each sample is then weighted by the density of the sampling distribution with respect to R_h. This weighting function is commonly known as the importance function in the Monte-Carlo literature (Fishman, 1996, p. 257). In particular, for each observable path ω_o we let the sampling distribution S_{ω_o}^λ be the probability distribution generated by a diffusion network with parameter λ which has been forced to exhibit the path ω_o over the observable units.
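On a discretized path, the log of the likelihood ratio in (8) can be approximated by replacing the Ito and Lebesgue integrals with sums over time steps. A minimal sketch follows; the function name and the toy path are illustrative assumptions, not from the paper:

```python
import numpy as np

def girsanov_log_likelihood(path, mu, sigma, dt):
    """Discretized log of the likelihood ratio in Eq. (8) along one path.

    path : (T+1, n) array of sampled states omega(t_k)
    mu   : function mapping a state to its drift vector
    """
    drifts = np.array([mu(x) for x in path[:-1]])
    increments = np.diff(path, axis=0)          # d omega(t), Ito sum uses
    ito_term = np.sum(drifts * increments) / sigma**2   # left-endpoint drifts
    energy_term = np.sum(drifts**2) * dt / (2.0 * sigma**2)
    return ito_term - energy_term

# Sanity checks on a tiny 1-d path: under zero drift the likelihood
# ratio is exactly 1 (log = 0); a constant drift gives a finite value.
path = np.array([[0.0], [0.3], [0.1]])
ll_zero = girsanov_log_likelihood(path, lambda x: np.zeros_like(x), 1.0, 0.1)
ll_const = girsanov_log_likelihood(path, lambda x: 0.5 * np.ones_like(x), 1.0, 0.1)
```

Evaluating the Ito sum with drifts at the left endpoint of each interval mirrors the non-anticipating definition of the stochastic integral.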
The approach is reminiscent of the technique of teacher forcing from deterministic neural networks. In practice, we generate i.i.d. sampled hidden paths {h^(i)}_{i=1}^m from S_{ω_o}^λ by numerically simulating a diffusion network with the observable units forced to exhibit the path ω_o; these hidden paths are then weighted by the density of S_{ω_o}^λ with respect to R_h, which acts as a Monte-Carlo importance function. In practice we have obtained good results with m on the order of 20, i.e., we sample 20 hidden sequences per observed sequence. One interesting property of this approach is that the sampling distributions S_{ω_o}^λ change as learning progresses, since they depend on λ. Figure 3 shows results of a computer simulation in which a 2 unit network was trained to oscillate. We tried an oscillation pattern because of its relevance for the application we explore in a later section, which involves recognizing sequences of lip movements. The figure shows the "training" path and a couple of sample paths, one obtained with the σ parameter set to 0, and one with the parameter set to 0.5.

Figure 3: Training a 2 unit network to maximize the likelihood of a sinusoidal path. The top graph shows the training path. It consists of two sinusoids out of phase, each representing the activation of the two units in the network. The center graph shows a sample path obtained after training the network and setting σ = 0, i.e., no noise. The bottom graph shows a sample path obtained with σ = 0.5.

3 Recognizing video sequences

In this section we illustrate the use of SDE models on a sequence classification task of reasonable difficulty with a body of realistic data. We chose this task since we know of SDE models used for tracking problems but know of no SDE models used for sequence recognition tasks.
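The reweighting idea (sample from a convenient distribution, then correct with the density ratio) can be illustrated in one dimension in place of path space. Here the target expectation is under a standard Gaussian R and the proposal S is a shifted Gaussian; the analytic weights play the role of the importance function. All specifics are illustrative assumptions, not the paper's path-space sampler:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_normal_pdf(x, mean, var):
    return -0.5 * np.log(2 * np.pi * var) - (x - mean) ** 2 / (2 * var)

def f(h):
    # Integrand concentrated away from the mode of R = N(0, 1)
    return np.exp(-(h - 2.0) ** 2)

m = 20000
# Proposal S = N(2, 1) concentrates samples where f is large, mimicking
# sampling hidden paths from the teacher-forced network instead of from
# scaled Brownian motion.
h = rng.normal(2.0, 1.0, size=m)
w = np.exp(log_normal_pdf(h, 0.0, 1.0) - log_normal_pdf(h, 2.0, 1.0))
estimate = np.mean(w * f(h))        # importance-sampling estimate of E_R[f]

# Plain Monte Carlo directly from R, for comparison
h0 = rng.normal(0.0, 1.0, size=m)
plain = np.mean(f(h0))
```

For this integrand the exact value is e^(−4/3)/√3 ≈ 0.152, and both estimators converge to it; the benefit of the proposal shows up as lower variance per sample.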
The potential advantage of SDEs over more established approaches such as HMMs is that they enforce continuity constraints, an aspect that may be beneficial when the actual signals are better described using continuous state dynamics. We compared a diffusion network approach with classic hidden Markov model approaches. We used Tulips1 (Movellan, 1995), a database consisting of 96 movies of 9 male and 3 female undergraduate students from the Cognitive Science Department at the University of California, San Diego. For each student two sample utterances were taken for each of the digits "one" through "four". The database is available at http://cogsci.ucsd.edu. We compared the performance of diffusion networks and HMMs using two different image processing techniques (contours and contours plus intensity) in combination with two different recognition engines (HMMs and diffusion networks). The image processing was performed by Luettin and colleagues (Luettin, 1997). They employ point density models, where each lip contour is represented by a set of points; in this case both the inner and outer lip contour are represented, corresponding to Luettin's double contour model. The dimensionality of the representation of the contours is reduced using principal component analysis. For the work presented here 10 principal components were used to approximate the contour, along with a scale parameter which measured the pixel distance between the mouth corners; associated with each of these 11 parameters was a corresponding "delta component", the left-hand temporal difference of the component (defined to be zero for the first frame). In this manner a total of 22 parameters were used to represent lip contour information for each still frame. These 22 parameters were represented using diffusion networks with 22 observation units, one per parameter value. We also tested the performance of a representation that used intensity information in addition to contour shape information.
This approach used 62 parameters, which were represented using diffusion networks with 62 observation units.

Table 1: Average generalization performance on the Tulips1 database. Shown in order are the performance of the best performing HMM from (Luettin et al., 1996), which uses only shape information, the best diffusion network obtained using only shape information, the performance of untrained human subjects (Movellan, 1995), the HMM from Luettin's thesis (Luettin, 1997) which uses both shape and intensity information, the best diffusion network obtained using both shape and intensity information, and the performance of trained human lipreaders (Movellan, 1995).

  Approach                                                   Correct Generalization
  Best HMM, shape information only                                   82.3%
  Best diffusion network, shape information only                     85.4%
  Untrained human subjects                                           89.9%
  Best HMM, shape and intensity information                          90.6%
  Best diffusion network, shape and intensity information            91.7%
  Trained human subjects                                             95.5%

We independently trained 4 diffusion networks to approximate the distributions of lip-contour trajectories of each of the four words to be recognized, i.e., the first network was trained with examples of the word "one", and the last network with examples of the word "four". Each network had the same number of nodes, and the drift of each network was given by (3) with κ_j = 1 and 1/ρ_j = 0 for all units, and with ξ_j being part of the adaptive vector λ. Thus, λ = (ξ_1, …, ξ_n, w_{1,1}, w_{1,2}, …, w_{n,n})ᵀ. The number of hidden units was varied from one to five. We obtained optimal results with 4 hidden units. The initial state of the hidden units was set to (1, …, 1) with probability 1, and σ was set to 1 for all networks. The diffusion network dynamics were simulated using a forward-Euler technique, i.e., equation (1) is approximated in discrete time using (6). In our simulations we set Δt = 1/30 seconds, the time between video frame samples.
Each diffusion network was trained with examples of one of the 4 digits using the cost function

    Φ(λ) = Σ_i log L_o^λ(y^(i)) − α |λ|²,   (16)

where {y^(i)} are samples from the desired empirical distribution P_o and α is the strength of a Gaussian prior on the network parameters. Best results were obtained with diffusion networks with 4 hidden units. The log-likelihood gradients were estimated using the importance sampling approach with m = 20, i.e., we generated 20 hidden sample paths per observed path. With this number of samples training took about 10 times longer with diffusion networks than with HMMs. At test time, computation of the likelihood estimates was very fast and could have been done in real time using a fast Pentium II. The generalization performance was estimated using a jackknife (leave-one-out) technique: we trained on all subjects but one, which was used for testing. The process was repeated leaving a different subject out every time. Results are shown in Table 1. The table includes HMM results reported by Luettin (1997), who tried a variety of HMM architectures and reported the best results obtained with them. The only difference between Luettin's approach and our approach is the recognition engine, which was a bank of HMMs in his case and a bank of diffusion networks in our case. If anything we were at a disadvantage since the image representations mentioned above were optimized by Luettin to work best with HMMs. In all cases the best diffusion networks outperformed the best HMMs reported in the literature using exactly the same visual preprocessing. The difference in performance was not large; however, obtaining even a 1% increment in performance on this database is very difficult.

4 Discussion

While we presented results for a video sequence recognition task, the same framework can be used for tasks such as sequence recognition, object tracking and sequence generation.
Our work was inspired by the rich literature on continuous stochastic filtering and stochastic neural networks. The idea was to combine the versatility of recurrent neural networks and the well known advantages of stochastic modeling approaches. The continuous-time nature of the networks is convenient for data with dropouts or variable sample rates, since the models we use define all the finite dimensional distributions. The continuous-state representation is well suited to problems involving inference about continuous unobservable quantities, as in visual tracking tasks. Since these networks enforce continuity constraints in the observable paths they may not have the well known problems encountered when HMMs are used as generative models of continuous sequences. We have presented encouraging results on a realistic sequence recognition task. However more work needs to be done, since the database we used is relatively small. At this point the main disadvantage of diffusion networks relative to conventional hidden Markov models is training speed. The diffusion networks used here were approximately 10 times slower to train than HMMs. Fortunately the Monte Carlo approximations employed herein, which represent the bulk of the computational burden, lend themselves to parallel and hardware implementations. Moreover, once a network is trained, the computation of the density functions needed in recognition tasks can be done in real time. We are exploring applications of diffusion networks to stochastic filtering problems (e.g., contour tracking) and sequence generation problems, not just sequence recognition problems. Our work shows that diffusion networks may be a feasible alternative to HMMs for problems in which state continuity is advantageous. The results obtained for the visual speech recognition task are encouraging, and reinforce the possibility that diffusion networks may become a versatile tool for a very wide variety of continuous signal processing tasks. 
References

Ackley, D. H., Hinton, G. E., & Sejnowski, T. (1985). A Learning Algorithm for Boltzmann Machines. Cognitive Science, 9(2), 147-169.
Fishman, G. S. (1996). Monte Carlo Sampling: Concepts, Algorithms and Applications. New York: Springer-Verlag.
Karatzas, I. & Shreve, S. (1991). Brownian Motion and Stochastic Calculus. Springer.
Luettin, J. (1997). Visual Speech and Speaker Recognition. PhD thesis, University of Sheffield.
Movellan, J. (1995). Visual Speech Recognition with Stochastic Neural Networks. In G. Tesauro, D. Touretzky, & T. Leen (Eds.), Advances in Neural Information Processing Systems, volume 7. MIT Press.
Oksendal, B. (1992). Stochastic Differential Equations. Berlin: Springer-Verlag.
Combining ICA and top-down attention for robust speech recognition Un-Min Bae and Soo-Young Lee Department of Electrical Engineering and Computer Science and Brain Science Research Center Korea Advanced Institute of Science and Technology 373-1 Kusong-dong, Yusong-gu, Taejon, 305-701, Korea bum@neuron.kaist.ac.kr, sylee@ee.kaist.ac.kr

Abstract

We present an algorithm which compensates for the mismatches between the characteristics of real-world problems and the assumptions of the independent component analysis algorithm. To provide additional information to the ICA network, we incorporate top-down selective attention. An MLP classifier is added to the separated signal channel and the error of the classifier is backpropagated to the ICA network. This backpropagation process results in estimation of the expected ICA output signal for the top-down attention. Then, the unmixing matrix is retrained according to a new cost function representing the backpropagated error as well as independence. It modifies the density of recovered signals to the density appropriate for classification. For noisy speech signals recorded in real environments, the algorithm improved the recognition performance and showed robustness against parametric changes.

1 Introduction

Independent Component Analysis (ICA) is a method for blind signal separation. ICA linearly transforms data to be as statistically independent of each other as possible [1,2,5]. ICA depends on several assumptions, such as linear mixing and source independence, which may not be satisfied in many real-world applications. In order to apply ICA to most real-world problems, it is necessary either to relax all the assumptions or to compensate for the mismatches with another method. In this paper, we present a complementary approach to compensate for the mismatches. The top-down selective attention from a classifier to the ICA network provides additional information about the signal-mixing environment.
A new cost function is defined to retrain the unmixing matrix of the ICA network considering the propagated information. Under a stationary mixing environment, the averaged adaptation by iterative feedback operations can adjust the feature space to be more helpful to classification performance. This process can be regarded as a selective attention model in which input patterns are adapted according to top-down information. The proposed algorithm was applied to noisy speech recognition in real environments and showed the effectiveness of the feedback operations.

2 The proposed algorithm

2.1 Feedback operations based on selective attention

As previously mentioned, ICA supposes several assumptions. For example, one assumption is a linear mixing condition, but in general there is inevitable nonlinearity in the microphones used to record input signals. Such mismatches between the assumptions of ICA and real mixing conditions cause unsuccessful separation of sources. To overcome this problem, a method to supply valuable information to the ICA network was proposed. In the learning phase of ICA, the unmixing matrix is subject to the signal-mixing matrix, not the input patterns. Under a stationary mixing environment where the mixing matrix is fixed, iteratively providing additional information about the mixing matrix can contribute to improving blind signal separation performance. The algorithm performs feedback operations from a classifier to the ICA network in the test phase, which adapts the unmixing matrices of ICA according to a newly defined measure considering both independence and classification error. This can result in adaptation of the input space of the classifier and so improve recognition performance. This process is inspired by the selective attention model [9,10] which calculates expected input signals according to top-down information.
In the test phase, as shown in Figure 1, ICA separates signal and noise, and Mel-frequency cepstral coefficients (MFCCs) extracted as a feature vector are delivered to a classifier, a multi-layer perceptron (MLP). After classification, the error function of the classifier is defined as

    E_mlp = (1/2) Σ_i (t_mlp,i − y_mlp,i)²,   (1)

where t_mlp,i is the target value of the output neuron y_mlp,i. In general, the target values are not known and should be determined from the outputs y_mlp. Only the target value of the highest output is set to 1, and the others are set to −1, when the nonlinear function of the classifier is the bipolar sigmoid function. The algorithm performs gradient-descent calculation by error backpropagation. To reduce the error, it computes the required changes of the input values of the classifier and finally those of the unmixed signals of the ICA network. Then, the learning rule of the ICA algorithm should be changed considering these variations. The newly defined cost function of the ICA network includes the error-backpropagated term as well as the joint entropy H(y_ica) of the outputs y_ica:

    E_ica = −H(y_ica) + γ · (1/2)(u_target − u)^H (u_target − u)
          = −H(y_ica) + γ · (1/2) Δu^H Δu,   (2)

where u are the estimated recovered sources and γ is a coefficient which represents the relative importance of the two terms. The learning rule derived using gradient descent on the cost function in Eq.(2) is

    ΔW ∝ [I − φ(u) u^H] W + γ · x Δu,   (3)

where x are the input signals of the ICA network. The first term in Eq.(3) is the learning rule of ICA, which is applicable to complex-valued data in the frequency domain [8,11].

Figure 1: Real-world speech recognition with feedback operations from a classifier to the ICA network. (Forward path: input speech → blocking into frames → Hamming window → Fourier transform → ICA → linear-to-Mel frequency filter bank conversion → frame normalization → MLP classification. Feedback path: MLP backpropagation → frame re-normalization → Mel-to-linear frequency filter bank conversion → ICA retraining.)
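One retraining step implied by Eqs. (1)-(3) can be sketched as follows: build bipolar targets from the MLP outputs, form the required change Δu of the unmixed signals, score the outputs with the complex score function (given later in the text as Eq. (4)), and update W. Writing the feedback term as the outer product Δu x^H is our reading of the "x Δu" term, and all numeric values are illustrative assumptions:

```python
import numpy as np

def score(z):
    # Complex score function for frequency-domain ICA (Eq. 4):
    # tanh applied to real and imaginary parts separately
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def mlp_targets(y):
    # Eq. (1) target construction: +1 for the highest output,
    # -1 elsewhere, matching bipolar sigmoid outputs
    t = -np.ones_like(y)
    t[np.argmax(y)] = 1.0
    return t

def feedback_update(W, x, delta_u, gamma=0.5, lr=0.01):
    """One retraining step of the unmixing matrix (a sketch of Eq. 3).

    The first term is the infomax ICA rule [I - phi(u) u^H] W; the
    second adds the backpropagated classifier error. The outer-product
    form of the feedback term is an assumption about the intended shape.
    """
    u = W @ x
    n = len(x)
    infomax = (np.eye(n) - np.outer(score(u), u.conj())) @ W
    feedback = gamma * np.outer(delta_u, x.conj())
    return W + lr * (infomax + feedback)

rng = np.random.default_rng(0)
W = np.eye(2, dtype=complex)
x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
u_target = np.array([1.0 + 0j, 0.0 + 0j])
delta_u = u_target - W @ x          # required change of the unmixed signals
W_new = feedback_update(W, x, delta_u)
```

In practice the update would be averaged over all frames and frequency bins of the input pattern before it is applied, consistent with the averaged adaptation described above.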
In real environments where substantial time delays occur, the observed input signals are convolved mixtures of the sources, not linear mixtures, and the mixing model is no longer a matrix. In this case, blind signal separation using ICA can be achieved in the frequency domain. The complex score function is

    φ(z) = tanh(Re{z}) + j · tanh(Im{z}).   (4)

The procedure in the test phase is summarized as follows.

1. For a test input, perform the forward operation and classify the pattern.
2. Define the error function of the classifier in Eq.(1) and perform error backpropagation to find the required changes of the unmixed signals of ICA.
3. Define the cost function of the ICA network in Eq.(2) and update the unmixing matrix with the learning rule in Eq.(3). Then, go to step 1.

The newly defined error function of the classifier in Eq.(1) does not cause overfitting problems because it is used for updating the unmixing matrix of ICA only once. If classification performance is good, the averaged changes of the unmixing matrix over the total input patterns can contribute to improving recognition performance.

2.2 Considering the assumptions of ICA

The assumptions of ICA [3,4,5] are summarized as follows.

1. The sources are linearly mixed.
2. The sources are mutually independent.
3. At most, one source is normally distributed.
4. The number of sensors is equal to or greater than the number of sources.
5. No sensor noise or only low additive noise signals are permitted.

Assumptions 4 and 5 can be relaxed if there are enough sensors. Assumption 3 is also negligible because the source distribution is usually approximated as a super-Gaussian or Laplacian distribution in the speech recognition problem. As for speech recognition in real mixing environments, the nonlinearity of microphones is an inevitable problem.

Figure 2: A nonlinear mixing model due to the distortions of microphones.

Figure 2 shows a nonlinear mixing model; the nonlinear functions g(·)
and h(·) denote the distortions of microphones, s are the original sources, x are the observed signals, and u are the estimates of the recovered sources. If the sources s₁ and s₂ are mutually independent, the random variables g(s₁) and g(s₂) are still independent of each other, and so are v₀₀ and v₁₀. The density of z₁ = v₀₀ + v₁₀ equals the convolution of the densities of v₀₀ and v₁₀ [7]:

    p(z₁) = ∫ p_v₀₀(z₁ − v₁₀) p_v₁₀(v₁₀) dv₁₀.   (5)

After all, the observed signal x₁ is not a linear mixture of two independent components due to the nonlinear distortion h(·). The assumption of source independence is violated. In this situation, it is hard to expect what the ICA solution would be and to assert that the solution is reliable. Even if x₁ has two independent components, which is the case of linear distortion of microphones, there is a conflict between independence and source density approximation because the densities of the independent components of the observed signals differ from those of the original sources through g(·) and h(·), and may be far from the density approximated by f(·). The proposed algorithm can be a solution to this problem. In the training phase, a classifier learns noiseless data and the density of x₁ used for the learning is

    p(x₁) = p(s₁) / (a₀₀ g′ h′).   (6)

The second backpropagated term in the cost function Eq.(2) changes the unmixing matrix W to adapt the density of the unmixed signals to the density that the classifier learned.

Table 1: The recognition rates of noisy speech recorded with F-16 fighter noise (%)

                            Training data            Test data
  SNR                   Clean 15dB 10dB  5dB    Clean 15dB 10dB  5dB
  MLP                    99.9 93.3 73.5 42.8     96.1 84.8 63.0 36.7
  ICA                    99.7 97.0 91.9 78.7     93.9 90.6 85.6 68.9
  Proposed algorithm     99.9 99.3 94.5 80.6     96.1 93.5 86.3 71.1

This can be a clue to what the ICA solution should be. Iterative operations over the total data induce the averaged change of the unmixing matrix to become roughly a function of the nonlinearity g(·)
and h(·), not a certain density p(s₁) subject to every pattern.

3 Noisy Speech Recognition in Real Environments

The proposed algorithm was applied to isolated-word speech recognition. The input data are convolved mixtures of speech and noise recorded in real environments. The speech data set consists of 75 Korean words spoken by 48 speakers, and F-16 fighter noise and speech babbling noise were used as noise sources. Each ICA network has two inputs and two outputs for the signal and noise sources. Tables 1 and 2 show the recognition results for the three methods: MLP only, MLP with standard ICA, and the proposed algorithm. 'Training data' means the data used for learning of the classifier, and 'Test data' are the rest. ICA improves classification performance compared to MLP only in the heavy-noise cases, but in the cases of clean data, ICA does not contribute to recognition and the recognition rates are lower than those of MLP only. The proposed algorithm shows better recognition performance than standard ICA for both training and test data. Especially for the clean data, the proposed algorithm improves the recognition rates to be the same as those of MLP only in most cases. The algorithm reduces the false recognition rates by about 30% to 80% in comparison with standard ICA when signal-to-noise ratios (SNRs) are 15dB or higher. With such low noise, the classification performance of MLP is relatively reliable, and MLP can provide the ICA network with helpful information. However, with heavy noise, the recognition rates of MLP sharply decrease, and the error backpropagation can hardly provide valuable information to the ICA network. The overall improvement for the training data is higher than that for the test data. This is because the recognition performance of MLP is better for the training data.
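The fact used in Eq. (5) above, that the density of a sum of two independent variables is the convolution of their densities, can be checked numerically with discrete distributions; the probability vectors here are arbitrary illustrations:

```python
import numpy as np

# Two independent discrete random variables supported on {0, 1, 2}
p_v00 = np.array([0.2, 0.5, 0.3])
p_v10 = np.array([0.6, 0.1, 0.3])

# Eq. (5): the density of z1 = v00 + v10 is the convolution of the
# two densities (support {0, ..., 4})
p_z = np.convolve(p_v00, p_v10)

# Brute-force check by enumerating all value pairs
p_check = np.zeros(5)
for i, a in enumerate(p_v00):
    for j, b in enumerate(p_v10):
        p_check[i + j] += a * b
```

The same identity fails once a nonlinearity is applied after the sum, which is exactly the violation of the linear-mixing assumption discussed above.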
As shown in Figure 3, iterative feedback operations decrease the false recognition rates, and variation of the unknown parameter γ in Eq.(2) does not affect the final recognition performance. Variation of the learning rate for updating the unmixing matrix also does not affect the final performance; it only influences the convergence time needed to reach the final recognition rates. The learning rate was fixed regardless of SNR in all of the experiments.

Table 2: The recognition rates of noisy speech recorded with speech babbling noise (%)

                            Training data            Test data
  SNR                   Clean 15dB 10dB  5dB    Clean 15dB 10dB  5dB
  MLP                    99.7 88.6 61.5 32.6     96.8 82.9 64.5 38.5
  ICA                    98.5 95.2 91.9 76.5     91.7 88.6 85.1 73.2
  Proposed algorithm     99.7 97.7 92.5 76.7     97.2 93.1 87.4 73.4

4 Discussion

The proposed algorithm is an approach to complement ICA by providing additional information based on top-down selective attention with a pre-trained MLP classifier. The error backpropagation operations adapt the density of recovered signals according to the new cost function of ICA. This can help ICA find the solution proper for classification under the nonlinearity and independence violations, but this needs the stationarity condition. For nonstationary environments, a mixture model like the ICA mixture model [6] can be considered. The ICA mixture model can assign class membership to each environment category and separate independent sources in each class. To completely settle the nonlinearity problem in real environments, it is necessary to introduce a scheme which models the nonlinearity, such as the distortions of microphones. Multi-layered ICA can be an approach to model nonlinearity. In the noisy recognition problem, the proposed algorithm improved recognition performance compared to ICA alone. Especially in moderate noise cases, the algorithm remarkably reduced the false recognition rates. This is due to the high classification performance of the pre-trained MLP.
In the case of heavy noise the expected ICA output estimated from the top-down attention may not be accurate, and the selective attention does not help much. It is natural that we only pay attention to familiar subjects. Therefore more robust classifiers may be needed for signals with heavy noise.

Figure 3: The false recognition rates by iteration of the total data set and the value of the γ parameter. (a) Clean speech; (b) SNR=15 dB; (c) SNR=10 dB; (d) SNR=5 dB.

Acknowledgments

This work was supported as a Brain Science & Engineering Research Program sponsored by the Korean Ministry of Science and Technology.

References

[1] Amari, S., Cichocki, A., and Yang, H. (1996) A new learning algorithm for blind signal separation, In Advances in Neural Information Processing Systems 8, pp. 757-763.
[2] Bell, A. J. and Sejnowski, T. J. (1995) An information-maximization approach to blind separation and blind deconvolution, Neural Computation, 7:1129-1159.
[3] Cardoso, J.-F. and Laheld, B. (1996) Equivariant adaptive source separation, IEEE Trans. on S.P., 45(2):434-444.
[4] Comon, P. (1994) Independent component analysis - a new concept?, Signal Processing, 36(3):287-314.
[5] Lee, T.-W. (1998) Independent component analysis - theory and applications, Kluwer Academic Publishers, Boston.
[6] Lee, T.-W., Lewicki, M. S., and Sejnowski, T. J. (1999) ICA mixture models for unsupervised classification of non-Gaussian sources and automatic context switching in blind signal separation, IEEE Trans. on Pattern Analysis and Machine Intelligence, in press.
[7] Papoulis, A. (1991) Probability, random variables, and stochastic processes, McGraw-Hill, Inc.
[8] Park, H.-M., Jung, H.-Y., Lee, T.-W., and Lee, S.-Y. (1999) Subband-based blind signal separation for noisy speech recognition, Electronics Letters, 35(23):2011-2012.
[9] Park, K.-Y. and Lee, S.-Y. (1999) Selective attention for robust speech recognition in noisy environments, In Proc. of IJCNN, paper no. 829.
[10] Park, K.-Y. and Lee, S.-Y. (2000) Out-of-vocabulary rejection based on selective attention model, Neural Processing Letters, 12:41-48.
[11] Smaragdis, P. (1997) Information theoretic approaches to source separation, Masters Thesis, MIT Media Arts and Science Dept.
A comparison of Image Processing Techniques for Visual Speech Recognition Applications Michael S. Gray Computational Neurobiology Laboratory The Salk Institute San Diego, CA 92186-5800 Terrence J. Sejnowski Computational Neurobiology Laboratory The Salk Institute San Diego, CA 92186-5800 Javier R. Movellan* Department of Cognitive Science Institute for Neural Computation University of California San Diego Abstract We examine eight different techniques for developing visual representations in machine vision tasks. In particular we compare different versions of principal component and independent component analysis in combination with stepwise regression methods for variable selection. We found that local methods, based on the statistics of image patches, consistently outperformed global methods based on the statistics of entire images. This result is consistent with previous work on emotion and facial expression recognition. In addition, the use of a stepwise regression technique for selecting variables and regions of interest substantially boosted performance. 1 Introduction We study the performance of eight different methods for developing image representations based on the statistical properties of the images at hand. These methods are compared on their performance on a visual speech recognition task. While the representations developed are specific to visual speech recognition, the methods themselves are general purpose and applicable to other tasks. Our focus is on low-level data-driven methods based on the statistical properties of relatively untouched images, as opposed to approaches that work with contours or highly processed versions of the image. Padgett [8] and Bartlett [1] systematically studied statistical methods for developing representations on expression recognition tasks. They found that local wavelet-like representations consistently outperformed global representations, like eigenfaces. 
In this paper we also compare local versus global representations. The main differences between our work and that in [8] and [1] are: (1) We use image sequences while they used static images; (2) Our work involves images of the mouth region while their work involves images of the entire face; (3) Our recognition engine is a bank of hidden Markov models while theirs is a backpropagation network [8] and a nearest neighbor classifier [1]. In addition to the comparison of local and global representations, we propose an unsupervised method for automatically selecting regions and variables of interest.

Figure 1: The normalization procedure. In each panel, the "+" indicates the center of the lips, and the "o" indicates the center of the image. The location of the lips was automatically determined using Luettin et al.'s point density model for lip tracking: (1) Original image; (2) The center of the lips was translated to the center of the image; (3) The image was rotated in the plane to horizontal; (4) The lips were scaled to a constant reference width; (5) The image was symmetrized relative to the vertical midline; (6) The intensity was normalized using a logistic gain control procedure.

2 Preprocessing and Recognition Engine

The task was recognition of the words "one", "two", "three" and "four" from the Tulips1 [7] database. The database consists of movies of 12 subjects each uttering the digits in English twice. While the number of words is limited, the database is challenging due to differences in illumination conditions, ethnicity and gender of the subjects. Image preprocessing consisted of the following steps: First the contour of the outer lips was tracked using point distribution models, a data-driven technique based on analysis of the gray-level statistics around lip contours [5]. The lip images were then normalized for translation and rotation.
This was accomplished by first padding the image on all sides with 25 rows or columns of zeros, and modulating the images in the spatial frequency domain. The images were symmetrized with respect to the vertical axis going through the center of the lips. This makes the final representation more robust to horizontal changes in illumination. The images were cropped to 65 pixels vertically x 87 pixels horizontally (see Figure 1) and their intensity was normalized using logistic gain control [7]. Eight different techniques were used on the normalized database, each of which developed a different image basis. For each of these techniques the following steps were followed: (1) Projection: For each image in the database we compute the coordinates x(t) of the image with respect to the image bases developed using each of the eight techniques; (2) Temporal differentiation: For each time step we compute the vectors δ(t) = x(t) - x(t-1), where x(t) represents the coordinate vector of the image presented at time t; (3) Gain control: Each component of x(t) and δ(t) is independently scaled using a logistic gain control function matched to the mean and variance of each component across an entire movie [7]. This results in a form of soft histogram equalization; (4) Recognition: The scaled x(t) and δ(t) coefficients are fed to the HMM recognition engine.

Figure 2: Global decompositions for the normalized image dataset. Row 1: Global kernels of principal component analysis ordered with first eigenimage on left. Row 2: Log magnitude spectrum of eigenimages. Row 3: Global pixel space independent component kernels ordered according to projected variance. Row 4: Log magnitude spectrum of global independent components.

3 Global Methods

We first evaluated the performance of techniques based on the statistics of the entire lip images as opposed to portions of it.
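The per-movie coordinate processing of Section 2 (temporal differencing followed by logistic gain control) can be sketched as below. This is a minimal NumPy sketch; the exact form in which the logistic is matched to the mean and variance of each component, and the function names, are our own assumptions rather than the paper's implementation.

```python
import numpy as np

def logistic_gain_control(x, mean, std):
    """Squash each coefficient through a logistic matched to its per-component
    mean and spread; output lies in (0, 1) (a soft histogram equalization)."""
    return 1.0 / (1.0 + np.exp(-(x - mean) / (std + 1e-8)))

def preprocess_movie(coords):
    """coords: (T, d) array of per-frame image coordinates x(t).

    Returns gain-controlled x(t) and gain-controlled temporal differences
    delta(t) = x(t) - x(t-1), each scaled using statistics of the whole movie."""
    delta = np.diff(coords, axis=0)  # delta(t) = x(t) - x(t-1)
    x_scaled = logistic_gain_control(coords, coords.mean(axis=0), coords.std(axis=0))
    d_scaled = logistic_gain_control(delta, delta.mean(axis=0), delta.std(axis=0))
    return x_scaled, d_scaled
```

The two returned arrays correspond to the x(t) and δ(t) coefficient streams that are fed to the HMM recognition engine.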
This global approach has been shown to provide good performance on face recognition [9], expression recognition [2], and gender recognition tasks [4]. In particular we compared the performance of principal component analysis (PCA) and two different versions of independent component analysis (ICA).

3.1 Global PCA: We tried image bases that consisted of the first 50, 100 and 150 eigenvectors of the pixelwise covariance matrix. Best results were obtained with the first 50 principal components (which accounted for 94.6% of the variance) and are the only ones reported here. The top row of Figure 2 shows the first 5 eigenvectors displayed as images; their magnitude spectrum is shown in the second row. These eigenimages have most of their energy localized in low and horizontal spatial frequencies and are typically non-local in the spatial domain (i.e., have non-zero energy distributed over the whole image).

3.2 Global ICA: The goal of Infomax ICA is to transform an input random vector such that the entropy of the output vector is maximized [3]. The main differences between ICA and PCA are: (1) ICA maximizes the joint entropy of the outputs, while PCA maximizes the sum of their variances; (2) PCA provides orthogonal basis vectors, while ICA basis vectors need not be orthogonal; (3) PCA outputs are always uncorrelated, but may not be statistically independent; ICA attempts to extract independent outputs, not just uncorrelated ones. We tried two different ICA approaches: ICA I: This method results in a non-orthogonal transformation of the bases developed via PCA. While such transformations do not change the underlying space of the representation, they may facilitate the job of the recognition engine by decreasing the statistical dependency amongst the coordinates. First each image in the database was projected onto the space spanned by the first 50 eigenvectors of the pixelwise covariance matrix. Then ICA was performed on the 50 PCA coordinate variables to obtain a new 50-dimensional non-orthogonal basis. ICA II: A different approach to ICA was explored in [1] for face recognition tasks and by [6] for fMRI images. While in ICA I the goal is to develop independent image coordinates, in ICA II the goal is for the image bases themselves to be independent. Here independence of images is defined with respect to a probability space in which pixels are seen as outcomes and images as random vectors of such outcomes. The approach, which is described in detail in [6], resulted in a set of 50 images which were a non-orthogonal linear transformation of the first 50 eigenvectors of the pixelwise covariance matrix. The first 5 images (accounting for the largest amounts of projected variance) obtained via this approach to ICA are shown in the third row of Figure 2. The fourth row shows their magnitude spectrum. As reported in [1], the images obtained using this method are more local than those obtained via PCA.

Figure 3: Upper left: Lip patches (12 pixels x 12 pixels) from randomly chosen locations used to develop local PCA and local ICA kernels. Lower left: Four orthogonal images generated from a single local PCA kernel. Right: Top 10 local PCA and ICA kernels ordered according to projected variance (highest at top left). Note how the ICA vectors tend to be more local and consistent with the receptive fields found in V1.

4 Local Methods

Padgett et al. [8] reported surprisingly good results on an emotion recognition task using PCA on random patches of the face instead of the entire face. Recent theoretical work also places emphasis on spatially localized, wavelet-like image bases. One potential advantage of spatially localized image bases is that they provide explicit information about where things are happening, not just about what is happening. This facilitates the work of recognition engines on some tasks but the theoretical reasons for this are unclear at this point.
Local PCA and ICA kernels were developed based on a database of 18680 small patches (12 pixel x 12 pixel) chosen from random locations in the Tulips1 database. A sample of these random patches (superimposed on a lip image) is shown in the top panel of Figure 3. Hereafter we refer to the 12 pixel x 12 pixel images obtained via PCA or ICA as "kernels". Image bases were generated by centering a local PCA or ICA kernel onto different locations and padding the rest of the matrix with zeros, as displayed in Figure 3 (lower left panel). This results in basis images which are local in space (the energy is localized about a single patch) and shifted versions of each other. The process of obtaining image coordinates can be seen as a filtering operation followed by subsampling: First the images are filtered using a bank of filters whose impulse responses are the kernels obtained via PCA (or ICA). The relevant coordinates are obtained by subsampling at 300 uniformly distributed locations (15 locations vertically by 20 locations horizontally). We explored four different filtering approaches: (1) Single linear shift invariant filter (LSI); (2) Single linear shift variant filter (LSV); (3) Bank of LSI filters with blocked selection; (4) Bank of LSI filters combined with unblocked selection.

Figure 4: Kernel-location combinations chosen using unblocked variable selection. Top of each quadrant: Local ICA or PCA kernel. Bottom of each quadrant: Lip image convolved with corresponding local kernel, then downsampled. The numbers on the lip image indicate the order in which variables were chosen for the multiple regression procedure. There are no numbers on the right side of the lip images because only half of each lip image was used for the representation (since the images are symmetrized).
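The local-kernel pipeline described above (random 12x12 patches, PCA on the patches to obtain kernels, then filtering and subsampling on a 15 x 20 grid) can be sketched in NumPy as follows. The function names, the plain-Python valid-mode correlation, and the uniform-grid sampling details are our own assumptions, not the paper's code.

```python
import numpy as np

def local_pca_kernels(images, n_patches=1000, patch=12, n_kernels=10, seed=0):
    """Develop local PCA kernels from random image patches (18680 in the paper).

    images: (N, H, W) array. Returns (n_kernels, patch, patch) kernels,
    the leading eigenvectors of the patchwise covariance matrix."""
    rng = np.random.default_rng(seed)
    N, H, W = images.shape
    patches = np.empty((n_patches, patch * patch))
    for i in range(n_patches):
        n = rng.integers(N)
        r = rng.integers(H - patch + 1)
        c = rng.integers(W - patch + 1)
        patches[i] = images[n, r:r + patch, c:c + patch].ravel()
    patches -= patches.mean(axis=0)
    # Eigenvectors of the patchwise covariance, largest variance first.
    vals, vecs = np.linalg.eigh(patches.T @ patches / (n_patches - 1))
    order = np.argsort(vals)[::-1][:n_kernels]
    return vecs[:, order].T.reshape(n_kernels, patch, patch)

def filter_and_subsample(image, kernel, grid=(15, 20)):
    """Valid-mode correlation with one local kernel, then subsampling on a
    uniform grid (15 x 20 = 300 locations in the paper)."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    rows = np.linspace(0, out.shape[0] - 1, grid[0]).astype(int)
    cols = np.linspace(0, out.shape[1] - 1, grid[1]).astype(int)
    return out[np.ix_(rows, cols)].ravel()  # coordinate vector, length 300
```

A bank-of-filters representation is then the concatenation of `filter_and_subsample` outputs over the 10 local kernels.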
For the single-filter LSI approach, the images were convolved with a single local ICA kernel or a local PCA kernel. The top 5 local PCA and ICA kernels were each tested separately and the results obtained with the best of the 5 kernels were reported. For the single LSV-filtering approach, different local PCA kernels were derived for a total of 117 non-overlapping regions, each of which occupied 5 x 5 pixels. Each region of the 934 images was projected onto the first principal component corresponding to that location. This effectively resulted in an LSV filtering operation.

4.1 Automatic Selection of Focal Points

Padgett's [8] most successful method was based on outputs of local filters at manually selected focal regions. Their task was emotion recognition and the focal regions were the eyes and mouth. In visual speech recognition, once the lips are chosen it is unclear which regions would be most informative. Thus we developed a method for automatic selection of focal regions. First 10 filters were developed via local ICA (or PCA). Each image was filtered using the 10-filter bank and the outputs were subsampled at 150 locations for a 1500-dimensional representation (10 filters x 150 locations) of each of the images in the dataset. Regions and variables of interest were then selected using a stepwise forward multiple regression procedure. First we choose the variable that, when averaging across the entire database, best reconstructed the original images.

Table 1: Best generalization performance (% correct) ± standard error of the mean for all image representations.

  Global Methods:
    Global PCA                   79.2 ± 4.7
    Global ICA I                 61.5 ± 4.5
    Global ICA II                74.0 ± 5.4
  Local Methods:
    Single-Filter LSI PCA        90.6 ± 3.1
    Single-Filter LSI ICA        89.6 ± 3.0
    Blocked Filter Bank PCA      85.4 ± 3.7
    Blocked Filter Bank ICA      85.4 ± 3.0
    Unblocked Filter Bank PCA    91.7 ± 2.8
    Unblocked Filter Bank ICA    91.7 ± 3.2
Here best reconstruction is defined in terms of least squares using a multiple regression model. Once a variable is selected, it is "tenured" and we search for the variable which, in combination with the tenured ones, best reconstructs the image database. The procedure is stopped when the number of tenured variables reaches a criterion point. We compared performance using 50, 100, and 150 tenured variables and report results with the best of those three numbers. We tested two different selection procedures, one blocked by location and one in which location was not blocked. In the first method the selection was done in blocks of 10 variables, where each block contained the outputs of all the filters at a specific location. If a location was chosen, the outputs of the 10 filters in that location were automatically included in the final image representation. In the second method selection of variables was not blocked by location. Figure 4 shows, for 2 local PCA and 2 local ICA kernels, the first 10 variables chosen for each particular kernel using the forward selection multiple regression procedure. The numbers on the lip images in this figure indicate the order in which particular kernel/location variables were chosen using the sequential regression procedure: "1" indicates the first variable chosen, "2" the second, etc.

5 Results and Conclusions

Table 1 shows the best generalization performance (out of the 9 HMM architectures tested) for each of the eight image representation methods. The local decompositions significantly outperformed the global ones (t(106) = 4.10, p < 0.001). The improved performance of local representations is consistent with current ideas on the importance of localized wavelet-like representations. However, it is unclear why local decompositions work better. One possibility is that these results apply only to this particular recognition engine and the problem at hand (i.e., hidden Markov models for speechreading).
Yet similar results with local representations were reported in [8] on an emotion classification task with a 3-layer backpropagation network and in [1] on an expression classification task with a nearest neighbor classifier. Another possible explanation for the advantage of local representations is that global unsupervised decompositions emphasize subject identity while local decompositions tend to hide it. We found some evidence consistent with this idea by testing global and local representations on a subject identification task (i.e., recognizing which person the lip images belong to). For this task the global representations outperformed the local ones. However, this result is inconsistent with [8], which found local representations were better on emotion classification and on subject identification tasks. Another possibility is that local representations make more explicit information about where things are happening, not just what is happening, and such information turns out to be important for the task at hand. The image representations obtained using the bank-of-filters methods with unblocked selection yielded the best results. The stepwise regression technique used to select kernels and regions of interest led to substantial gains in recognition performance. In fact the highest generalization performance reported here (91.7% with the bank of filters using unblocked variable selection) surpassed the best published performance on this dataset [5].

References

[1] M.S. Bartlett. Face Image Analysis by Unsupervised Learning and Redundancy Reduction. PhD thesis, University of California, San Diego, 1998.
[2] M.S. Bartlett, P.A. Viola, T.J. Sejnowski, J. Larsen, J. Hager, and P. Ekman. Classifying facial action. In D. Touretzky, M. Mozer, and M. Hasselmo, editors, Advances in Neural Information Processing Systems, volume 8, pages 823-829. Morgan Kaufmann, San Mateo, CA, 1996.
[3] A.J. Bell and T.J. Sejnowski.
An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7(6):1129-1159, 1995.
[4] G. Cottrell and J. Metcalfe. Face, gender and emotion recognition using holons. In D. Touretzky, editor, Advances in Neural Information Processing Systems, volume 3, pages 564-571. Morgan Kaufmann, San Mateo, CA, 1991.
[5] Juergen Luettin. Visual Speech and Speaker Recognition. PhD thesis, University of Sheffield, 1997.
[6] M.J. McKeown, S. Makeig, G.G. Brown, T.-P. Jung, S.S. Kindermann, A.J. Bell, and T.J. Sejnowski. Analysis of fMRI data by decomposition into independent components. Proc. Nat. Acad. Sci., in press.
[7] J.R. Movellan. Visual speech recognition with stochastic networks. In G. Tesauro, D.S. Touretzky, and T. Leen, editors, Advances in Neural Information Processing Systems, volume 7, pages 851-858. MIT Press, Cambridge, MA, 1995.
[8] C. Padgett and G. Cottrell. Representing face images for emotion classification. In M. Mozer, M. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems, volume 9. MIT Press, Cambridge, MA, 1997.
[9] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71-86, 1991.
|
2000
|
139
|
1,797
|
Programmable Reinforcement Learning Agents David Andre and Stuart J. Russell Computer Science Division, UC Berkeley, CA 94702 {dandre,russell}@cs.berkeley.edu Abstract We present an expressive agent design language for reinforcement learning that allows the user to constrain the policies considered by the learning process. The language includes standard features such as parameterized subroutines, temporary interrupts, aborts, and memory variables, but also allows for unspecified choices in the agent program. For learning that which isn't specified, we present provably convergent learning algorithms. We demonstrate by example that agent programs written in the language are concise as well as modular. This facilitates state abstraction and the transferability of learned skills.

1 Introduction

The field of reinforcement learning has recently adopted the idea that the application of prior knowledge may allow much faster learning and may indeed be essential if real-world environments are to be addressed. For learning behaviors, the most obvious form of prior knowledge provides a partial description of desired behaviors. Several languages for partial descriptions have been proposed, including Hierarchical Abstract Machines (HAMs) [8], semi-Markov options [12], and the MAXQ framework [4]. This paper describes extensions to the HAM language that substantially increase its expressive power, using constructs borrowed from programming languages. Obviously, increasing expressiveness makes it easier for the user to supply whatever prior knowledge is available, and to do so more concisely. (Consider, for example, the difference between wiring up Boolean circuits and writing Java programs.) More importantly, the availability of an expressive language allows the agent to learn and generalize behavioral abstractions that would be far more difficult to learn in a less expressive language.
For example, the ability to specify parameterized behaviors allows multiple behaviors such as WalkEast, WalkNorth, WalkWest, and WalkSouth to be combined into a single behavior Walk(d), where d is a direction parameter. Furthermore, if a behavior is appropriately parameterized, decisions within the behavior can be made independently of the "calling context" (the hierarchy of tasks within which the behavior is being executed). This is crucial in allowing behaviors to be learned and reused as general skills. Our extended language includes parameters, interrupts, aborts (i.e., interrupts without resumption), and local state variables. Interrupts and aborts in particular are very important in physical behaviors (more so than in computation) and are crucial in allowing for modularity in behavioral descriptions. These features are all common in robot programming languages [2, 3, 5]; the key element of our approach is that behaviors need only be partially described; reinforcement learning does the rest. To tie our extended language to existing reinforcement learning algorithms, we utilize Parr and Russell's [8] notion of the joint semi-Markov decision process (SMDP) created when a HAM is composed with an environment (modeled as an MDP). The joint SMDP state space consists of the cross-product of the machine states in the HAM and the states in the original MDP; the dynamics are created by the application of the HAM in the MDP. Parr and Russell showed that an optimal solution to the joint SMDP is both learnable and yields an optimal solution to the original MDP in the class of policies expressed by the HAM (so-called hierarchical optimality). Furthermore, Parr and Russell show that the joint SMDP can be reduced to an equivalent SMDP with a state space consisting only of the states where the HAM does not specify an action, which reduces the complexity of the SMDP problem that must be solved. We show that these results hold for our extended language of Programmable HAMs (PHAMs).
To demonstrate the usefulness of the new language, we show a small, complete program for a complex environment that would require a much larger program in previous formalisms. We also show experimental results verifying the convergence of the learning process for our language.

2 Background

An MDP is a 4-tuple (S, A, T, R), where S is a set of states, A is a set of actions, T is a probabilistic transition function mapping S × A × S → [0, 1], and R is a reward function mapping S × A × S to the reals. In this paper, we focus on infinite-horizon MDPs with a discount factor β. A solution to an MDP is an optimal policy π* that maps from S → A and achieves maximum expected discounted reward for the agent. An SMDP (semi-Markov decision process) allows for actions that take more than one time step. T is modified to be a mapping from S × A × S × N → [0, 1], where N is the natural numbers; i.e., it specifies a distribution over both output states and action durations. R is then a mapping from S × A × S × N to the reals. The discount factor β is generalized to be a function, β(s, a), that represents the expected discount factor when action a is taken in state s. Our definitions follow those common in the literature [9, 6, 4]. The HAM language [8] provides for partial specification of agent programs. A HAM program consists of a set of partially specified Moore machines. Transitions in each machine may depend stochastically on (features of) the environment state, and the outputs of each machine are primitive actions or nonrecursive invocations of other machines. The states in each machine can be of four types: {start, stop, action, choice}. Each machine has a single distinguished start state and may have one or more distinguished stop states. When a machine is invoked, control starts at the start state; stop states return control back to the calling machine. An action state executes an action. A call state invokes another machine as a subroutine.
A choice state may have several possible next states; after learning, the choice is reduced to a single next state.

3 Programmable HAMs

Consider the problem of creating a HAM program for the Deliver-Patrol domain presented in Figure 1, which has 38,400 states. In addition to delivering mail and picking up occasional additional rewards while patrolling (both of which require efficient navigation and safe maneuvering), the robot must keep its battery charged (lest it be stranded) and its camera lens clean (lest it crash). It must also decide whether to move quickly (incurring collision risk) or slowly (delaying reward), depending on circumstances. Because all the 5 x 5 "rooms" are similar, one can write a "traverse the room" HAM routine that works in all rooms, but a different routine is needed for each direction (north-south, south-north, east-west, etc.). Such redundancy suggests the need for a "traverse the room" routine that is parameterized by the desired direction. Consider also the fact that the robot should clean its camera lens whenever it gets dirty.

Figure 1: (a) The Deliver-Patrol world. Mail appears at M and must be delivered to the appropriate location. Additional rewards appear sporadically at A, B, C, and D. The robot's battery may be recharged at R. The robot is penalized for colliding with walls and "furniture" (small circles). (b) Three of the PHAMs in the partial specification for the Deliver-Patrol world. Right-facing half-circles are start states, left-facing half-circles are stop states, hexagons are call states, ovals are primitive actions, and squares are choice points. z1 and z2 are memory variables. When arguments to call states are in braces, then the choice is over the arguments to pass to the subroutine.
The Root() PHAM specifies an interrupt to clean the camera lens whenever it gets dirty; the Work() PHAM interrupts its patrolling whenever there is mail to be delivered.

Figure 2: (a) A room in the Deliver-Patrol domain. The arrows in the drawing of the room indicate the behavior specified by the ρ() transition function in ToDoor(dest,sp). Two arrows indicate a "fast" move (jN, jS, jE, jW), whereas a single arrow indicates a slow move (N, S, E, W). (b) The ToDoor(dest,sp) and Move(dir) PHAMs.

Figure 3: The remainder of the PHAMs for the Deliver-Patrol domain. Nav(dest,sp) leaves route choices to be learned through experience. Similarly, Patrol() does not specify the sequence of locations to check.

In the HAM language, this conditional action must be inserted after every state in every HAM. An interrupt mechanism with appropriate scoping would obviate the need for such widespread mutilation. The PHAM language has these additional characteristics. We provide here an informal summary of the language features that enable concise agent programs to be written. The 9 PHAMs for the Deliver-Patrol domain are presented in Figure 1(b), Figure 2(b), and Figure 3. The corresponding HAM program requires 63 machines, many of which have significantly more states than their PHAM counterparts. The PHAM language adds several structured programming constructs to the HAM language. To enable this, we introduce two additional types of states in the PHAM: internal states, which execute an internal computational action (such as setting memory variables to a function of the current state), and null states, which have no direct effect and are used for computational convenience. Parameterization is key for expressing concise agent specifications, as can be seen in the Deliver-Patrol task. Subroutines take a number of parameters θ1, θ2, ...,
θk, the values of which must be filled in by the calling subroutine (and can depend on any function of the machine, parameter, memory, and environment state). In Figure 2(b), the subroutine Move(dir) is shown. The dir parameter is supplied by the NavRoom subroutine. The ToDoor(dest,speed) subroutine is for navigating a single room of the agent's building. ρ() is a transition function that stores a parameterized policy for getting to each door. The policy for (N, f) (representing the North door, going fast) is shown in Figure 2(a). Note that by using parameters, the control for navigating a room is quite modular, and is written once, instead of once for each direction and speed. Aborts and interrupts allow for modular agent specification. As well as the camera-lens interrupt described earlier, the robot needs to abort its current activity if the battery is low and should interrupt its patrolling activity if mail arrives for delivery. The PHAM language allows abort conditions to be specified at the point where a subroutine is invoked within a calling routine; those conditions are in force until the subroutine exits. For each abort condition, an "abort handler" state is specified within the calling routine, to which control returns if the condition becomes true. (For interrupts, normal execution is resumed once the handler completes.) Graphically, aborts are depicted as labelled dotted lines (e.g., in the DoAll() PHAM in Figure 3), and interrupts are shown as labelled dashed lines with arrows on both ends (e.g., in the Work() PHAM in Figure 1(b)). Memory variables are a feature of nearly every programming language. Some previous research has been done on using memory variables in reinforcement learning in partially observable domains [10]. For an example of memory use in our language, examine the DoDelivery subroutine in Figure 1(b), where z2 is set to another memory value (set in Nav(dest,sp)). z2 is then passed as a variable to the Nav subroutine.
Computational functions such as dest in the Nav(dest,sp) subroutine are restricted to be recursive functions taking effectively zero time. A PHAM is assumed to have a finite number of memory variables z1, ..., zn, which can be combined to yield the memory state Z. Each memory variable has a finite domain D(zi). The agent can set memory variables by using internal states, which are computational action states with actions of the following form: (set zi ψ(m, θ, x, Z)), where ψ(m, θ, x, Z) is a function taking the machine, parameter, environment, and memory state as parameters. The transition function, parameter-setting functions, and choice functions take the memory state into account as well.

4 Theoretical Results

Our results mirror those obtained in [9]. In summary (see also Figure 4): The composition H ∘ M of a PHAM H with the underlying MDP M is defined using the cross product of states in H and M. This composition is in fact an SMDP. Furthermore, solutions to H ∘ M yield optimal policies for the original MDP, among those policies expressed by the PHAM. Finally, H ∘ M may be reduced to an equivalent SMDP whose states are just the choice points, i.e., the joint states where the machine state is a choice state. See [1] for the proofs.
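To make the composition and reduction concrete, here is a toy enumeration of the reachable joint states and choice points of H ∘ M. This is our own illustrative construction, loosely following the Figure 4 example (an action d that always leads to MDP state 1 or 4); the machine layout, dictionaries, and function names are all hypothetical.

```python
from collections import deque

# Hypothetical PHAM fragment: start -> action d -> choice c -> stop.
# Each entry: machine state -> (state type, successor machine states).
MACHINE = {
    'start': ('null',   ['d']),
    'd':     ('action', ['c']),
    'c':     ('choice', ['stop']),
    'stop':  ('stop',   []),
}

# Toy MDP with 4 states; action d from any state leads to state 1 or 4.
MDP_NEXT = {s: [1, 4] for s in [1, 2, 3, 4]}

def reachable_joint_states(start_env=1):
    """Breadth-first enumeration of reachable (machine, env) pairs of H o M.

    Only action-type machine states advance the environment; all other
    machine transitions leave the MDP state unchanged."""
    seen = set()
    frontier = deque([('start', start_env)])
    while frontier:
        m, s = frontier.popleft()
        if (m, s) in seen:
            continue
        seen.add((m, s))
        kind, succs = MACHINE[m]
        env_succs = MDP_NEXT[s] if kind == 'action' else [s]
        for m2 in succs:
            for s2 in env_succs:
                frontier.append((m2, s2))
    return seen

joint = reachable_joint_states()
# The reduced SMDP keeps only the reachable choice points.
choice_points = {ms for ms in joint if MACHINE[ms[0]][0] == 'choice'}
```

Here only 2 of the 4 possible (c, s) pairs are reachable, matching the intuition that the complexity of the induced SMDP is determined by the reachable choice points, not the full cross-product.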
Definition 1 (Programmable Hierarchical Abstract Machines: PHAMs) A PHAM is a tuple H = (μ, Θ, δ, ρ, χ, I, μI, A, μA, Z, Ψ), where μ is the set of machine states in H, Θ is the space of possible parameter settings, δ is the transition function, mapping μ × Θ × Z × X × μ to [0, 1], ρ is a mapping from μ × Θ × Z × X × Θ to [0, 1] and expresses the parameter choice function, χ maps from μ × Θ × Z × X to subsets of μ and expresses the allowed choices at choice states, I(m) returns the interrupt condition at a call state, μI(m) specifies the handler of an interrupt, A(m) returns the abort condition at a call state, μA(m) specifies the handler of an abort, Z is the set of possible memory configurations, and Ψ(m) is a complex function expressing which computational internal function is used at internal states, and to which memory variable the result is assigned.

Theorem 1 For any MDP M and any PHAM H, the operation of H in M induces a joint SMDP, called H ∘ M. If π is an optimal solution for H ∘ M, then the primitive actions specified by π constitute an optimal policy for M among those consistent with H.

The state space of H ∘ M may be enormous. As is illustrated in Figure 4, however, we can obtain significant further savings, just as in [9]. First, not all pairs of PHAM and MDP states will be reachable from the initial state; second, the complexity of the induced SMDP is solely determined by the number of reachable choice points.

Theorem 2 For any MDP M and PHAM H, let C be the set of choice points in H ∘ M. There exists an SMDP called reduce(H ∘ M) with states C such that the optimal policy for reduce(H ∘ M) corresponds to an optimal policy for M among those consistent with H.

The reduced SMDP can be solved by offline, model-based techniques using the method given in [9] for constructing the reduced model.
Alternatively, and much more simply, we can solve it using online, model-free HAMQ-learning [8], which learns directly in the reduced state space of choice points. Starting from a choice state ω where the agent takes action a, the agent keeps track of the reward r_tot and discount β_tot accumulated on the way to the next choice point, ω′. On each step, the agent encounters reward r_i and discount β_i (note that β_i is 0 exactly when the agent transitions only in the PHAM and not in the MDP), and updates the totals as follows: r_tot ← r_tot + β_tot r_i; β_tot ← β_tot β_i. The agent maintains a Q-table, Q(ω, a), indexed by choice state and action. When the agent gets to the next choice state, ω′, it updates the Q-table as follows: Q(ω, a) ← (1 − α) Q(ω, a) + α [r_tot + β_tot max_u Q(ω′, u)]. We have the following theorem.

Theorem 3 For a PHAM H and an MDP M, HAMQ-learning will converge to an optimal policy for reduce(H ∘ M), with probability 1, with appropriate restrictions on the learning rate.

5 Expressiveness of the PHAM language

As shown by Parr [9], the HAM language is at least as expressive as some existing action languages, including options [12] and full-β models [11]. The PHAM language is substantially more expressive than HAMs. As mentioned earlier, the Deliver-Patrol PHAM program has 9 machines whereas the HAM program requires 63. In general, the additional number of states required to express a PHAM as a pure HAM is |D(Z) × C × Θ|, where D(Z) is the memory state space, C is the set of possible abort/interrupt contexts, and Θ is the total parameter space. We also developed a PHAM program for the 3,700-state maze world used by Parr and Russell [8]. The HAM used in their experiments had 37 machines; the PHAM program requires only 7.

Figure 4: A schematic illustration of the formal results. (1) The top two diagrams are of a PHAM fragment with 1 choice state and 3 action states (of which one, labelled d, is the start state).
The MDP has 4 states, and action d always leads to state 1 or 4. The composition, H ∘ M, is shown in (2). Note that there are no incoming arcs to the states ⟨c, 2⟩ or ⟨c, 3⟩. In (3), reduce(H ∘ M) is shown. There are only 2 states in the reduced SMDP because there are no incoming arcs to the states ⟨c, 2⟩ or ⟨c, 3⟩.

Figure 5: Learning curves for the Deliver/Patrol domain, averaged over 25 runs. X-axis: number of primitive steps, in 10,000s. Y-axis: value of the policy measured by ten 5,000 step trials. PHAM-hard refers to the PHAMs given in this paper. PHAM-easy refers to a more complete PHAM, leaving unspecified only the speed of travel for each activity.

With respect to the induced choice points, the Deliver-Patrol PHAM induces 7,816 choice points in the joint SMDP, compared with 38,400 in the original MDP. Furthermore, only 15,800 Q-values must be learned, compared with 307,200 for flat Q-learning. Figure 5 shows empirical results for the Deliver-Patrol problem, indicating that Q-learning with a suitable PHAM program is far faster than flat Q-learning. (Parr and Russell observed similar results for the maze world, where HAMQ-learning finds a good policy in 270,000 iterations compared to 9,000,000 for flat Q-learning.) Note that equivalent HAM and PHAM programs yield identical reductions in the number of choice points and identical speedups in Q-learning. Thus, one might argue that PHAMs do not offer any advantage over HAMs, as they can express the same set of behaviors. However, this would be akin to arguing that the Java programming language offers nothing over Boolean circuits. Ease of expression and the ability to utilize greater modularity can greatly ease the task of coding reinforcement learning agents that take advantage of prior knowledge.
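The HAMQ-learning accumulation and backup rules described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the choice-state names, action set, learning rate, and the two-step trajectory are our own illustrative choices.

```python
from collections import defaultdict

# Q-table over (choice state, action) pairs, as in HAMQ-learning.
Q = defaultdict(float)

def accumulate(rewards, discounts):
    """Accumulate reward and discount between two choice points:
    r_tot <- r_tot + beta_tot * r_i ;  beta_tot <- beta_tot * beta_i."""
    r_tot, beta_tot = 0.0, 1.0
    for r_i, beta_i in zip(rewards, discounts):
        r_tot += beta_tot * r_i
        beta_tot *= beta_i
    return r_tot, beta_tot

def hamq_backup(Q, w, a, r_tot, beta_tot, w_next, actions, alpha=0.5):
    """Q(w,a) <- (1 - alpha) Q(w,a) + alpha [r_tot + beta_tot max_u Q(w',u)]."""
    target = r_tot + beta_tot * max(Q[(w_next, u)] for u in actions)
    Q[(w, a)] = (1 - alpha) * Q[(w, a)] + alpha * target
    return Q[(w, a)]

# Two primitive steps between choice points, each with reward 1, discount 0.9.
r_tot, beta_tot = accumulate([1.0, 1.0], [0.9, 0.9])
print(hamq_backup(Q, 'w0', 'go', r_tot, beta_tot, 'w1', ['go', 'stay']))  # → ≈ 0.95
```

Note that discounting between choice points is handled entirely by the accumulated β_tot, which is what lets the backup treat the interval between choice points as a single SMDP transition.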
An interesting feature of PHAMs was observed in the Deliver-Patrol domain. The initial PHAM program was constructed on the assumption that the agent should patrol among A, B, C, D unless there is mail to be delivered. However, the specific rewards are such that the optimal behavior is to loiter in the mail room until mail arrives, thereby avoiding costly delays in mail delivery. The PHAM-Q learning agents learned this optimal behavior by "retargeting" the Nav routine to stay in the mail room rather than go to the specified destination. This example demonstrates the difference between constraining behavior through structure and constraining behavior through subgoals: the former method may give the agent greater flexibility but may yield "surprising" results. In another experiment, we constrained the PHAM further to prevent loitering. As expected, the agent learned a suboptimal policy in which Nav had the intended meaning of travelling to a specified destination. This experience suggests a natural debugging cycle in which the agent designer may examine learned behaviors and adjust the PHAM program accordingly. The additional features of the PHAM language allow direct expression of programs from other formalisms that are not easily expressed using HAMs. For example, programs in Dietterich's MAXQ language [4] are written easily as PHAMs, but not as HAMs, because the MAXQ language allows parameters. The language of teleo-reactive (TR) programs [7, 2] relies on a prioritized set of condition-action rules to achieve a goal. Each action can itself be another TR program. The TR architecture can be implemented directly in PHAMs using the abort mechanism [1].

6 Future work

Our long-term goal in this project is to enable true cross-task learning of skilled behavior. This requires state abstraction in order to learn choices within PHAMs that are applicable in large classes of circumstances rather than just to each invocation instance separately.
Dietterich [4] has derived conditions under which state abstraction can be done within his MAXQ framework without sacrificing recursive optimality (a weaker form of optimality than hierarchical optimality). We have developed a similar set of conditions, based on a new form of value function decomposition, such that PHAM learning maintains hierarchical optimality. This decomposition critically depends on the modularity of the programs introduced by the language extensions presented in this paper. Recently, we have added recursion and complex data structures to the PHAM language, incorporating it into a standard programming language (Lisp). This provides the PHAM programmer with a very powerful set of tools for creating adaptive agents.

References

[1] D. Andre. Programmable HAMs. www.cs.berkeley.edu/~dandre/pham.ps, 2000.
[2] S. Benson and N. Nilsson. Reacting, planning and learning in an autonomous agent. In K. Furukawa, D. Michie, and S. Muggleton, editors, Machine Intelligence 14. 1995.
[3] G. Berry and G. Gonthier. The Esterel synchronous programming language: Design, semantics, implementation. Science of Computer Programming, 19(2):87-152, 1992.
[4] T. G. Dietterich. State abstraction in MAXQ hierarchical RL. In NIPS 12, 2000.
[5] R. J. Firby. Modularity issues in reactive planning. In AIPS 96, pages 78-85. AAAI Press, 1996.
[6] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. JAIR, 4:237-285, 1996.
[7] N. J. Nilsson. Teleo-reactive programs for agent control. JAIR, 1:139-158, 1994.
[8] R. Parr and S. J. Russell. Reinforcement learning with hierarchies of machines. In NIPS 10, 1998.
[9] R. Parr. Hierarchical Control and Learning for MDPs. PhD thesis, UC Berkeley, 1998.
[10] L. Peshkin, N. Meuleau, and L. Kaelbling. Learning policies with external memory. In ICML, 1999.
[11] R. Sutton. Temporal abstraction in reinforcement learning. In ICML, 1995.
[12] R. Sutton, D. Precup, and S. Singh.
Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181-211, February 1999.
|
2000
|
14
|
1,798
|
Learning continuous distributions: Simulations with field theoretic priors

Ilya Nemenman 1,2 and William Bialek 2
1 Department of Physics, Princeton University, Princeton, New Jersey 08544
2 NEC Research Institute, 4 Independence Way, Princeton, New Jersey 08540
nemenman@research.nj.nec.com, bialek@research.nj.nec.com

Abstract

Learning of a smooth but nonparametric probability density can be regularized using methods of Quantum Field Theory. We implement a field theoretic prior numerically, test its efficacy, and show that the free parameter of the theory ('smoothness scale') can be determined self-consistently by the data; this forms an infinite dimensional generalization of the MDL principle. Finally, we study the implications of one's choice of the prior and the parameterization and conclude that the smoothness scale determination makes density estimation very weakly sensitive to the choice of the prior, and that even wrong choices can be advantageous for small data sets.

One of the central problems in learning is to balance 'goodness of fit' criteria against the complexity of models. An important development in the Bayesian approach was thus the realization that there does not need to be any extra penalty for model complexity: if we compute the total probability that data are generated by a model, there is a factor from the volume in parameter space (the 'Occam factor') that discriminates against models with more parameters [1, 2]. This works remarkably well for systems with a finite number of parameters and creates a complexity 'razor' (after 'Occam's razor') that is almost equivalent to the celebrated Minimal Description Length (MDL) principle [3]. In addition, if the a priori distributions involved are strictly Gaussian, the ideas have also been proven to apply to some infinite-dimensional (nonparametric) problems [4].
It is not clear, however, what happens if we leave the finite dimensional setting to consider nonparametric problems which are not Gaussian, such as the estimation of a smooth probability density. A possible route to progress on the nonparametric problem was opened by noticing [5] that a Bayesian prior for density estimation is equivalent to a quantum field theory (QFT). In particular, there are field theoretic methods for computing the infinite dimensional analog of the Occam factor, at least asymptotically for large numbers of examples. These observations have led to a number of papers [6, 7, 8, 9] exploring alternative formulations and their implications for the speed of learning. Here we return to the original formulation of Ref. [5] and use numerical methods to address some of the questions left open by the analytic work [10]: What is the result of balancing the infinite dimensional Occam factor against the goodness of fit? Is the QFT inference optimal in using all of the information relevant for learning [11]? What happens if our learning problem is strongly atypical of the prior distribution?

Following Ref. [5], if N i.i.d. samples {x_i}, i = 1 ... N, are observed, then the probability that a particular density Q(x) gave rise to these data is given by

P[Q(x)|{x_i}] = P[Q(x)] ∏_{i=1}^N Q(x_i) / ∫ [dQ(x)] P[Q(x)] ∏_{i=1}^N Q(x_i) ,   (1)

where P[Q(x)] encodes our a priori expectations of Q. Specifying this prior on a space of functions defines a QFT, and the optimal least square estimator is then

Q_est(x|{x_i}) = ⟨Q(x) Q(x_1) Q(x_2) ⋯ Q(x_N)⟩^(0) / ⟨Q(x_1) Q(x_2) ⋯ Q(x_N)⟩^(0) ,   (2)

where ⟨⋯⟩^(0) means averaging with respect to the prior. Since Q(x) ≥ 0, it is convenient to define an unconstrained field φ(x), Q(x) = (1/l_0) exp[−φ(x)]. Other definitions are also possible [6], but we think that most of our results do not depend on this choice. The next step is to select a prior that regularizes the infinite number of degrees of freedom and allows learning.
We want the prior P[φ] to make sense as a continuous theory, independent of discretization of x on small scales. We also require that when we estimate the distribution Q(x) the answer must be everywhere finite. These conditions imply that our field theory must be convergent at small length scales. For x in one dimension, a minimal choice is

P[φ(x)] = (1/Z) exp[ −(ℓ^{2η−1}/2) ∫ dx (∂^η φ/∂x^η)^2 ] δ[ (1/l_0) ∫ dx e^{−φ(x)} − 1 ] ,   (3)

where η > 1/2, Z is the normalization constant, and the δ-function enforces normalization of Q. We refer to ℓ and η as the smoothness scale and the exponent, respectively. In [5] this theory was solved for large N and η = 1:

⟨∏_{i=1}^N Q(x_i)⟩^(0) ≈ exp[−S_eff(φ_cl)] ,   (4)

S_eff = (ℓ/2) ∫ dx (∂_x φ_cl)^2 + ∑_{i=1}^N φ_cl(x_i) + (1/2) ∫ dx √( N Q_cl(x)/ℓ ) ,   (5)

ℓ ∂_x^2 φ_cl(x) + (N/l_0) e^{−φ_cl(x)} = ∑_{i=1}^N δ(x − x_i) ,   (6)

where φ_cl is the 'classical' (maximum likelihood, saddle point) solution. In the effective action [Eq. (5)], it is the square root term that arises from integrating over fluctuations around the classical solution (Occam factors). It was shown that Eq. (4) is nonsingular even at finite N, that the mean value of φ_cl converges to the negative logarithm of the target distribution P(x) very quickly, and that the variance of fluctuations ψ(x) = φ(x) − [−log l_0 P(x)] falls off as ∼ 1/√(ℓ N P(x)). Finally, it was speculated that if the actual ℓ is unknown one may average over it and hope that, much as in Bayesian model selection [2], the competition between the data and the fluctuations will select the optimal smoothness scale ℓ*.

At first glance the theory looks almost exactly like a Gaussian Process [4]. This impression is produced by the Gaussian form of the smoothness penalty in Eq. (3), and by the fluctuation determinant that plays against the goodness of fit in the smoothness scale (model) selection. However, both similarities are incomplete. The Gaussian penalty in the prior is amended by the normalization constraint, which gives rise to the exponential term in Eq.
(6), and violates many familiar results that hold for Gaussian Processes, the representer theorem [12] being just one of them. In the semi-classical limit of large N, Gaussianity is restored approximately, but the classical solution is extremely non-trivial, and the fluctuation determinant is only the leading term of the Occam's razor, not the complete razor as it is for a Gaussian Process. In addition, it has no data dependence and is thus remarkably different from the usual determinants arising in the literature.

The algorithm to implement the discussed density estimation procedure numerically is rather simple. First, to make the problem well posed [10, 11] we confine x to a box 0 ≤ x ≤ L with periodic boundary conditions. The boundary value problem Eq. (6) is then solved by a standard 'relaxation' (or Newton) method of iterative improvements to a guessed solution [13] (the target precision is always 10^{-5}). The independent variable x ∈ [0, 1] is discretized in equal steps [10^4 for Figs. (1.a-2.b), and 10^5 for Figs. (3.a, 3.b)]. We use an equally spaced grid to ensure stability of the method, while small step sizes are needed since the scale for variation of φ_cl(x) is [5]

δx ∼ √( ℓ / (N P(x)) ) ,   (7)

which can be rather small for large N or small ℓ. Since the theory is short scale insensitive, we can generate random probability densities chosen from the prior by replacing φ with its Fourier series and truncating the latter at some sufficiently high wavenumber k_c [k_c = 1000 for Figs. (1.a-2.b), and 5000 for Figs. (3.a, 3.b)]. Then Eq. (3) enforces the amplitude of the k'th mode to be distributed a priori normally with the standard deviation

σ_k = 2^{1/2} ℓ^{1/2−η} ( L/(2πk) )^η .   (8)

Coded in such a way, the simulations are extremely computationally intensive. Therefore, Monte Carlo averagings given here are only over 500 runs, fluctuation determinants are calculated according to Eq.
(5), not using numerical path integration, and Q_cl = (1/l_0) exp[−φ_cl] is always used as an approximation to Q_est.

As an example of the algorithm's performance, Fig. (1.a) shows one particular learning run for η = 1 and ℓ = 0.2. We see that singularities and overfitting are absent even for N as low as 10. Moreover, the approach of Q_cl(x) to the actual distribution P(x) is remarkably fast: for N = 10, they are similar; for N = 1000, very close; for N = 100000, one needs to look carefully to see the difference between the two.

To quantify this similarity of distributions, we compute the Kullback-Leibler divergence D_KL(P‖Q_est) between the true distribution P(x) and its estimate Q_est(x), and then average over the realizations of the data points and the true distribution. As discussed in [11], this learning curve Λ(N) measures the (average) excess cost incurred in coding the (N+1)'st data point because of the finiteness of the data sample, and thus can be called the "universal learning curve". If the inference algorithm uses all of the information contained in the data that is relevant for learning ("predictive information" [11]), then [5, 9, 11, 10]

Λ(N) ∼ (L/ℓ)^{1/2η} N^{1/2η − 1} .   (9)

We test this prediction against the learning curves in the actual simulations. For η = 1 and ℓ = 0.4, 0.2, 0.05, these are shown on Fig. (1.b). One sees that the exponents are extremely close to the expected 1/2, and the ratios of the prefactors are within the errors from the predicted scaling ∼ 1/√ℓ. All of this means that the proposed algorithm for finding densities not only works, but is at most a constant factor away from being optimal in using the predictive information of the sample set. Next we investigate how one's choice of the prior influences learning. We first stress that there is no such thing as a wrong prior.
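To give a feel for how target densities are drawn from the smoothness prior, Eq. (3), via a truncated Fourier series with mode amplitudes as in Eq. (8), here is a minimal sketch. It is our illustration, not the paper's code: the grid size, the cutoff k_c, the random seed, and the choice l_0 = 1 are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_density(eta=1.0, ell=0.2, L=1.0, kc=100, grid=1000):
    """Draw phi(x) from the truncated-Fourier version of the prior, Eq. (3):
    each mode k gets std sigma_k = sqrt(2) * ell**(0.5-eta) * (L/(2*pi*k))**eta,
    then Q(x) = exp(-phi(x)) is normalized on [0, L]."""
    x = np.linspace(0.0, L, grid, endpoint=False)
    phi = np.zeros(grid)
    for k in range(1, kc + 1):
        sigma_k = np.sqrt(2.0) * ell ** (0.5 - eta) * (L / (2 * np.pi * k)) ** eta
        a, b = rng.normal(scale=sigma_k, size=2)
        phi += a * np.cos(2 * np.pi * k * x / L) + b * np.sin(2 * np.pi * k * x / L)
    Q = np.exp(-phi)
    Q /= Q.sum() * (L / grid)   # enforce the delta-function normalization of Eq. (3)
    return x, Q
```

Because the mode variance decays as k^(-2η), the draws are almost surely continuous for η > 1/2, which is the same UV-convergence condition stated for the prior.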
Figure 1: (a) Q_cl found for different N at ℓ = 0.2. (b) Λ as a function of N and ℓ. The best fits are: for ℓ = 0.4, Λ = (0.54 ± 0.07) N^{−0.483±0.014}; for ℓ = 0.2, Λ = (0.83 ± 0.08) N^{−0.493±0.009}; for ℓ = 0.05, Λ = (1.64 ± 0.16) N^{−0.507±0.009}.

If one admits a possibility of it being wrong, then it does not encode all of the a priori knowledge! It does make sense, however, to ask what happens if the distribution we are trying to learn is an extreme outlier in the prior P[φ]. One way to generate such an example is to choose a typical function from a different prior P'[φ], and this is what we mean by 'learning with a wrong prior.' If the prior is wrong in this sense, and learning is described by Eqs. (2-6), then we still expect the asymptotic behavior, Eq. (9), to hold; only the prefactors of Λ should change, and those must increase since there is an obvious advantage in having the right prior; we illustrate this in Figs. (2.a, 2.b). For Fig. (2.a), both P'[φ] and P[φ] are given by Eq. (3), but P' has the 'actual' smoothness scale ℓ_a = 0.4, 0.05, and for P the 'learning' smoothness scale is ℓ = 0.2 (we show the case ℓ_a = ℓ = 0.2 again as a reference). The Λ ∼ 1/√N behavior is seen unmistakably. The prefactors are a bit larger (unfortunately, insignificantly) than the corresponding ones from Fig. (1.b), so we may expect that the 'right' ℓ, indeed, provides better learning (see later for a detailed discussion). Further, Fig. (2.b) illustrates learning when not only ℓ, but also η is 'wrong' in the sense defined above. We illustrate this for η_a = 2, 0.8, 0.6, 0 (remember that only η_a > 0.5 removes UV divergences).
Again, the inverse square root decay of Λ should be observed, and this is evident for η_a = 2. The η_a = 0.8, 0.6, 0 cases are different: even for N as high as 10^5 the estimate of the distribution is far from the target, thus the asymptotic regime is not reached. This is a crucial observation for our subsequent analysis of the smoothness scale determination from the data. Remarkably, Λ (both averaged and in the single runs shown) is monotonic, so even in the cases of qualitatively less smooth distributions there still is no overfitting. On the other hand, Λ is well above the asymptote for η_a = 2 and small N, which means that initially too many details are expected and wrongfully introduced into the estimate, but then they are almost immediately (N ∼ 300) eliminated by the data.

Following the argument suggested in [5], we now view P[φ], Eq. (3), as being a part of some wider model that involves a prior over ℓ. The details of the prior are irrelevant, however, if S_eff(ℓ), Eq. (5), has a minimum that becomes more prominent as N grows. We explicitly note that this mechanism is not tuning of the prior's parameters, but Bayesian inference at work: ℓ* emerges in a competition between the smoothness, the data, and the Occam terms to make S_eff smaller, and thus the total probability of the data larger.

Figure 2: (a) Λ as a function of N and ℓ_a. Best fits are: for ℓ_a = 0.4, Λ = (0.56 ± 0.08) N^{−0.477±0.015}; for ℓ_a = 0.05, Λ = (1.90 ± 0.16) N^{−0.502±0.008}. Learning is always with ℓ = 0.2. (b) Λ as a function of N, η_a and ℓ_a. Best fits: for η_a = 2, ℓ_a = 0.1, Λ = (0.40 ± 0.05) N^{−0.493±0.013}; for η_a = 0.8, ℓ_a = 0.1, Λ = (1.06 ± 0.08) N^{−0.355±0.008}. ℓ = 0.2 for all graphs, but the one with η_a = 0, for which ℓ = 0.1.

In its turn, larger probability means shorter total code length. The data term, on average, is equal to N D_KL(P‖Q_cl), and, for very regular P(x) (an implicit assumption in [5]), it is small. Thus only the kinetic and the Occam terms matter, and ℓ* ∼ N^{1/3} [5]. For less regular distributions P(x), this is not true [cf. Fig. (2.b)]. For η = 1, Q_cl(x) approximates large-scale features of P(x) very well, but details at scales smaller than ∼ √(ℓL/N) are averaged out. If P(x) is taken from the prior, Eq. (3), with some η_a, then these details fall off with the wave number k as ∼ k^{−η_a}. Thus the data term is ∼ N^{1.5−η_a} ℓ^{η_a−0.5} and is not necessarily small. For η_a < 1.5 this dominates the kinetic term and competes with the fluctuations to set

ℓ* ∼ N^{(η_a−1)/η_a} ,   η_a < 1.5.   (10)

There are two remarkable things about Eq. (10). First, for η_a = 1, ℓ* stabilizes at some constant value, which we expect to be equal to ℓ_a. Second, even for η ≠ η_a, Eqs. (9, 10) ensure that Λ scales as ∼ N^{1/2η_a − 1}, which is at worst a constant factor away from the best scaling, Eq. (9), achievable with the 'right' prior, η = η_a. So, by allowing ℓ* to vary with N we can correctly capture the structure of models that are qualitatively different from our expectations (η ≠ η_a) and produce estimates of Q that are extremely robust to the choice of the prior. To our knowledge, this feature has not been noted before in a reference to a nonparametric problem. We present simulations relevant to these predictions in Figs. (3.a, 3.b). Unlike on the previous Figures, the results are not averaged due to extreme computational costs, so all our further claims have to be taken cautiously.
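The learning curves above are built from Kullback-Leibler divergences between the target density and its estimate; on a uniform grid this reduces to a one-line quadrature. The sketch below is ours, not the paper's code, and the Gaussian test density and grid size are illustrative choices.

```python
import numpy as np

def kl_divergence(P, Q, dx):
    """D_KL(P || Q) = integral of P(x) log[P(x)/Q(x)] dx on a uniform grid."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    return float(np.sum(P * np.log(P / Q)) * dx)

# Sanity checks: D_KL(P || P) = 0 and D_KL(P || uniform) > 0.
dx = 1.0 / 1000
x = np.arange(0.0, 1.0, dx)
P = np.exp(-((x - 0.5) ** 2) / 0.02)
P /= P.sum() * dx                 # normalize on [0, 1]
U = np.ones_like(x)               # uniform density on [0, 1]
print(kl_divergence(P, P, dx), kl_divergence(P, U, dx) > 0)  # → 0.0 True
```

Averaging this quantity over data realizations and over target draws from the prior is what produces the Λ(N) curves plotted in the Figures.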
On the other hand, selecting ℓ* in single runs has some practical advantages: we are able to ensure the best possible learning for any realization of the data. Fig. (3.a) shows single learning runs for various η_a and ℓ_a. In addition, to keep the Figure readable, we do not show runs with η_a = 0.6, 0.7, 1.2, 1.5, 3, and η_a → ∞, which is a finitely parameterizable distribution. All of these display a good agreement with the predicted scalings: Eq. (10) for η_a < 1.5, and ℓ* ∼ N^{1/3} otherwise. Next we calculate the KL divergence between the target and the estimate at ℓ = ℓ*; the average of this divergence over the samples and the prior is the learning curve [cf. Eq. (9)]. For η_a = 0.8, 2 we plot the divergencies on Fig. (3.b) side by side with their fixed ℓ = 0.2 analogues. Again, the predictions clearly are fulfilled. Note that for η_a ≠ η there is a qualitative advantage in using the data induced smoothness scale.

Figure 3: (a) Comparison of learning speed for the same data sets with different a priori assumptions. (b) Smoothness scale selection by the data. The lines that go off the axis for small N symbolize that S_eff monotonically decreases as ℓ → ∞.

The last four Figures have illustrated some aspects of learning with 'wrong' priors. However, all of our results may be considered as belonging to the 'wrong prior' class. Indeed, the actual probability distributions we used were not nonparametric continuous functions with smoothness constraints, but were composed of k_c Fourier modes, thus had 2k_c parameters. For finite parameterization, asymptotic properties of learning usually do not depend on the priors (cf.
[3, 11]), and priorless theories can be considered [14]. In such theories it would take well over 2k_c samples to even start to close down on the actual value of the parameters, and yet a lot more to get accurate results. However, using the wrong continuous parameterization [φ(x)] we were able to obtain good fits for as low as 1000 samples [cf. Fig. (1.a)] with the help of the prior Eq. (3). Moreover, learning happened continuously and monotonically, without the huge chaotic jumps of overfitting that necessarily accompany any brute force parameter estimation method at low N. So, for some cases, a seemingly more complex model is actually easier to learn!

Thus our claim: when data are scarce and the parameters are abundant, one gains even by using the regularizing powers of wrong priors. The priors select some large scale features that are the most important to learn first and fill in the details as more data become available (see [11] on the relation of this to the Structural Risk Minimization theory). If the global features are dominant (arguably, this is generic), one actually wins in the learning speed [cf. Figs. (1.b, 2.a, 3.b)]. If, however, small scale details are as important, then one at least is guaranteed to avoid overfitting [cf. Fig. (2.b)]. One can summarize this in an Occam-like fashion [11]: if two models provide equally good fits to data, a simpler one should always be used. In particular, the predictive information, which quantifies complexity [11], and of which Λ is the derivative, in a QFT model is ∼ N^{1/2η}, and it is ∼ k_c log N in the parametric case. So, for k_c > N^{1/2η}, one should prefer a 'wrong' QFT formulation to the correct finite parameter model. These results are very much in the spirit of our whole program: not only is the value of ℓ* selected that simplifies the description of the data, but the continuous parameterization itself serves the same purpose.
This is an unexpectedly neat generalization of the MDL principle [3] to nonparametric cases.

Summary: The field theoretic approach to density estimation not only regularizes the learning process but also allows the self-consistent selection of smoothness criteria through an infinite dimensional version of the Occam factors. We have shown numerically that this works, even more clearly than was conjectured: for η_a < 1.5, the learning curve truly becomes a property of the data, and not of the Bayesian prior! If we can extend these results to other η_a and combine this work with the reparameterization invariant formulation of [7, 8], this should give a complete theory of Bayesian learning for one dimensional distributions, and this theory has no arbitrary parameters. In addition, if this theory properly treats the limit η_a → ∞, we should be able to see how the well-studied finite dimensional Occam factors and the MDL principle arise from a more general nonparametric formulation.

References

[1] D. MacKay, Neural Comp. 4, 415-448 (1992).
[2] V. Balasubramanian, Neural Comp. 9, 349-368 (1997), http://xxx.lanl.gov/abs/adap-org/9601001.
[3] J. Rissanen. Stochastic Complexity and Statistical Inquiry. World Scientific, Singapore (1989).
[4] D. MacKay, NIPS, Tutorial Lecture Notes (1997), ftp://wol.ra.phy.cam.ac.uk/pub/mackay/gp.ps.gz.
[5] W. Bialek, C. Callan, and S. Strong, Phys. Rev. Lett. 77, 4693-4697 (1996), http://xxx.lanl.gov/abs/cond-mat/9607180.
[6] T. Holy, Phys. Rev. Lett. 79, 3545-3548 (1997), http://xxx.lanl.gov/abs/physics/9706015.
[7] V. Periwal, Phys. Rev. Lett. 78, 4671-4674 (1997), http://xxx.lanl.gov/hep-th/9703135.
[8] V. Periwal, Nucl. Phys. B, 554 [FS], 719-730 (1999), http://xxx.lanl.gov/adap-org/9801001.
[9] T. Aida, Phys. Rev. Lett. 83, 3554-3557 (1999), http://xxx.lanl.gov/cond-mat/9911474.
[10] A more detailed version of our current analysis may be found in: I. Nemenman, Ph.D.
Thesis, Princeton, (2000), http://xxx.lanl.gov/abs/physics/0009032.
[11] W. Bialek, I. Nemenman, N. Tishby. Preprint http://xxx.lanl.gov/abs/physics/0007070.
[12] G. Wahba. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, eds., Advances in Kernel Methods - Support Vector Learning, pp. 69-88. MIT Press, Cambridge, MA (1999), ftp://ftp.stat.wisc.edu/pub/wahba/nips97rr.ps.
[13] W. Press et al. Numerical Recipes in C. Cambridge UP, Cambridge (1988).
[14] V. Vapnik. Statistical Learning Theory. John Wiley & Sons, New York (1998).
|
2000
|
140
|
1,799
|
Algebraic Information Geometry for Learning Machines with Singularities

Sumio Watanabe
Precision and Intelligence Laboratory, Tokyo Institute of Technology
4259 Nagatsuta, Midori-ku, Yokohama, 226-8503 Japan
swatanab@pi.titech.ac.jp

Abstract

Algebraic geometry is essential to learning theory. In hierarchical learning machines such as layered neural networks and gaussian mixtures, the asymptotic normality does not hold, since Fisher information matrices are singular. In this paper, the rigorous asymptotic form of the stochastic complexity is clarified based on resolution of singularities, and two different problems are studied. (1) If the prior is positive, then the stochastic complexity is far smaller than BIC, resulting in a smaller generalization error than regular statistical models, even when the true distribution is not contained in the parametric model. (2) If Jeffreys' prior, which is coordinate free and equal to zero at singularities, is employed, then the stochastic complexity has the same form as BIC. It is useful for model selection, but not for generalization.

1 Introduction

The Fisher information matrix determines a metric of the set of all parameters of a learning machine [2]. If it is positive definite, then a learning machine can be understood as a Riemannian manifold. However, almost all learning machines such as layered neural networks, gaussian mixtures, and Boltzmann machines have singular Fisher metrics. For example, in a three-layer perceptron, the Fisher information matrix J(w) for a parameter w is singular (det J(w) = 0) if and only if w represents a small model which can be realized with fewer hidden units than the learning model. Therefore, when the learning machine is in an almost redundant state, any method in statistics and physics that uses a quadratic approximation of the loss function cannot be applied. In fact, the maximum likelihood estimator is not subject to the asymptotic normal distribution [4].
The Bayesian posterior probability converges to a distribution which is quite different from the normal one [8]. To construct a mathematical foundation for such learning machines, we clarified the essential relation between algebraic geometry and Bayesian statistics [9, 10]. In this paper, we show that the asymptotic form of the Bayesian stochastic complexity is rigorously obtained by resolution of singularities. The Bayesian method gives powerful tools for both generalization and model selection; however, the appropriate prior for each purpose is quite different.

2 Stochastic Complexity

Let p(x|w) be a learning machine, where x is a pair of an input and an output, and w ∈ R^d is a parameter. We prepare a prior distribution φ(w) on R^d. Training samples x^n = (x_1, x_2, ..., x_n) are independently taken from the true distribution q(x), which is not contained in p(x|w) in general. The stochastic complexity F(x^n) and its average F(n) are defined by

F(x^n) = − log ∫ ∏_{i=1}^n p(x_i|w) φ(w) dw

and F(n) = E_{x^n}{F(x^n)}, respectively, where E_{x^n}{·} denotes the expectation value over all training sets. The stochastic complexity plays a central role in Bayesian statistics. Firstly, F(n+1) − F(n) − S, where S = −∫ q(x) log q(x) dx, is equal to the average Kullback distance from q(x) to the Bayes predictive distribution p(x|x^n), which is called the generalization error, denoted by G(n). Secondly, exp(−F(x^n)) is in proportion to the posterior probability of the model, hence the best model is selected by minimization of F(x^n) [7]. And lastly, if the prior distribution has a hyperparameter θ, that is to say, φ(w) = φ(w|θ), then it is optimized by minimization of F(x^n) [1]. We define a function F_0(n) using the Kullback distance H(w),

F_0(n) = − log ∫ exp(−n H(w)) φ(w) dw ,   H(w) = ∫ q(x) log [ q(x) / p(x|w) ] dx .

Then by Jensen's inequality, F(n) − Sn ≤ F_0(n).
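As a numerical illustration of these definitions (ours, not the paper's), take a one-dimensional toy model with Kullback distance H(w) = w^2 and the uniform prior φ(w) = 1/2 on W = [-1, 1]. For such a regular model F_0(n) should grow like (d/2) log n with d = 1, i.e. the coefficient λ = 1/2:

```python
import math

def F0(n, grid=200001):
    """F0(n) = -log ∫ exp(-n H(w)) phi(w) dw with H(w) = w**2 and the
    uniform prior phi(w) = 1/2 on [-1, 1], by the trapezoid rule."""
    a, b = -1.0, 1.0
    h = (b - a) / (grid - 1)
    total = 0.0
    for i in range(grid):
        w = a + i * h
        weight = 0.5 if i in (0, grid - 1) else 1.0
        total += weight * math.exp(-n * w * w) * 0.5
    return -math.log(total * h)

# Doubling n should increase F0 by about (1/2) log 2 ≈ 0.3466,
# consistent with F0(n) = lambda log n + O(1) and lambda = d/2 = 1/2.
print(F0(4000) - F0(2000))  # → ≈ 0.3466
```

The singular case is exactly where this Gaussian-like behavior fails and λ can drop below d/2, which is the subject of the following sections.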
Moreover, we assume that L(x, w) ≡ log q(x) − log p(x|w) is an analytic function from w to the Hilbert space of all square integrable functions with the measure q(x)dx, and that the support of the prior W = supp φ is compact. Then H(w) is an analytic function on W, and there exists a constant c_1 > 0 such that, for an arbitrary n,

F_0(n/2) − c_1 ≤ F(n) − Sn ≤ F_0(n).   (1)

3 General Learning Machines

In this section, we study a case when the true distribution is contained in the parametric model, that is to say, there exists a parameter w_0 ∈ W such that q(x) = p(x|w_0). Let us introduce a zeta function J(z) (z ∈ C) of H(w) and a state density function v(t) by

J(z) = ∫ H(w)^z φ(w) dw ,   v(t) = ∫ δ(t − H(w)) φ(w) dw .

Then J(z) and F_0(n) are represented by the Mellin and the Laplace transforms of v(t), respectively,

J(z) = ∫_0^h t^z v(t) dt ,   F_0(n) = − log ∫_0^h exp(−nt) v(t) dt ,

where h = max_{w∈W} H(w). Therefore F_0(n), v(t), and J(z) are mathematically connected. It is obvious that J(z) is a holomorphic function in Re(z) > 0. Moreover, by using the existence of Sato-Bernstein's b-function [6], it can be analytically continued to a meromorphic function on the entire complex plane, whose poles are real, negative, and rational numbers. Let −λ_1 > −λ_2 > −λ_3 > ⋯ be the poles of J(z) and m_k be the order of −λ_k. Then, by using the inverse Mellin transform, it follows that v(t) has an asymptotic expansion with coefficients {c_km},

v(t) ≅ ∑_{k=1}^∞ ∑_{m=1}^{m_k} c_km t^{λ_k − 1} (− log t)^{m−1}   (t → +0).

Therefore F_0(n) also has an asymptotic expansion; putting λ = λ_1 and m = m_1,

F_0(n) = λ log n − (m − 1) log log n + O(1),

which ensures the asymptotic expansion of F(n) by eq. (1),

F(n) = Sn + λ log n − (m − 1) log log n + O(1).

The Kullback distance H(w) depends on the analytic set W_0 = {w ∈ W ; H(w) = 0}, so that both λ and m depend on W_0. Note that, if the Bayes generalization error G(n) = F(n+1) − F(n) − S has an asymptotic expansion, it should be λ/n − (m − 1)/(n log n).
The following lemma is proven using the definition of F_0(n) and its asymptotic expansion.

Lemma 1
(1) Let (λ_i, m_i) (i = 1, 2) be the constants corresponding to (H_i(w), φ_i(w)) (i = 1, 2). If H_1(w) ≤ H_2(w) and φ_1(w) ≥ φ_2(w), then 'λ_1 < λ_2' or 'λ_1 = λ_2 and m_1 ≥ m_2'.
(2) Let (λ_i, m_i) (i = 1, 2) be the constants corresponding to (H_i(w_i), φ_i(w_i)) (i = 1, 2). Let w = (w_1, w_2), H(w) = H_1(w_1) + H_2(w_2), and φ(w) = φ_1(w_1) φ_2(w_2). Then the constants of (H(w), φ(w)) are λ = λ_1 + λ_2 and m = m_1 + m_2 - 1.

The concrete values of λ and m can be algorithmically obtained by the following theorem. Let W° be the open kernel of W (the maximal open set contained in W).

Theorem 1 (Resolution of Singularities, Hironaka [5]) Let H(w) ≥ 0 be a real analytic function on W°. Then there exist both a real d-dimensional manifold U and a real analytic function g : U → W° such that, in a neighborhood of an arbitrary u ∈ U,

H(g(u)) = a(u) u_1^{2s_1} u_2^{2s_2} ... u_d^{2s_d},   (2)

where a(u) > 0 is an analytic function and {s_i} are non-negative integers. Moreover, for an arbitrary compact set K ⊂ W, g^{-1}(K) ⊂ U is a compact set. Such a function g(u) can be found by finite blowing-ups.

Remark. By applying eq.(2) to the definition of J(z), one can see that the integral in J(z) is decomposed into a direct product of integrals in each variable [3]. Applications to learning theory are shown in [9,10]. In general it is not so easy to find the g(u) that gives the complete resolution of singularities; however, in this paper, we show that even a partial resolution mapping gives an upper bound on λ.

Definition. We introduce two different priors.
(1) The prior distribution φ(w) is called positive if φ(w) > 0 for an arbitrary w ∈ W° (W = supp φ(w)).
(2) The prior distribution ψ(w) is called Jeffreys' one if

ψ(w) = (1/Z) √(det I(w)),   I_ij(w) = ∫ (∂L/∂w_i)(∂L/∂w_j) p(x|w) dx,

where Z is a normalizing constant and I(w) is the Fisher information matrix.
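The degeneracy of Jeffreys' prior on singular parameters can be seen in a toy example. The sketch below is illustrative and not from the paper: for the one-hidden-unit regression model y = a·tanh(bx) + noise with x ~ N(0,1) and unit noise variance, the Fisher matrix is I(w) = E[∇f ∇fᵀ] with f(x; a, b) = a·tanh(bx), estimated here by Monte Carlo. det I(w) vanishes exactly on {a = 0} ∪ {b = 0}, the parameters that represent the smaller (zero-unit) model, so ψ(w) ∝ √(det I(w)) is not positive there.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100000)

def fisher(a, b):
    # Monte Carlo estimate of I(w) = E[ grad f  grad f^T ] for
    # f(x; a, b) = a * tanh(b * x), x ~ N(0, 1), unit noise variance.
    g = np.stack([np.tanh(b * x),                 # ∂f/∂a
                  a * x / np.cosh(b * x) ** 2])   # ∂f/∂b
    return g @ g.T / x.size

print(np.linalg.det(fisher(1.0, 1.0)))   # > 0: regular point
print(np.linalg.det(fisher(0.0, 1.0)))   # = 0: singular set a = 0
print(np.linalg.det(fisher(1.0, 0.0)))   # = 0: singular set b = 0
```

At a = 0 the row ∂f/∂b is identically zero, and at b = 0 the row ∂f/∂a is, so the determinant vanishes exactly, not merely to numerical precision.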
In neural networks and gaussian mixtures, Jeffreys' prior is not positive, since det I(w) = 0 on the parameters which represent the smaller models.

Theorem 2 Assume that there exists a parameter w_0 ∈ W° such that q(x) = p(x|w_0). Then the following hold.
(1) If the prior is positive, then 0 < λ ≤ d/2 and 1 ≤ m ≤ d. If p(x|w) satisfies the condition of asymptotic normality, then λ = d/2 and m = 1.
(2) If Jeffreys' prior is applied, then 'λ > d/2' or 'λ = d/2 and m = 1'.

(Outline of the Proof) (1) In order to examine the poles of J(z), we can divide the parameter space into a sum of neighborhoods. Since H(w) is an analytic function, in an arbitrary neighborhood of a w_0 that satisfies H(w_0) = 0, we can find a positive definite quadratic form that bounds H(w). The positive definite quadratic form satisfies λ = d/2 and m = 1. By using Lemma 1 (1), we obtain the first half. (2) Because Jeffreys' prior is coordinate free, we can study the problem on the parameter space U instead of W° in eq.(2). Hence, there exists an analytic function t(x, u) such that, in each local coordinate,

L(x, u) = L(x, g(u)) = t(x, u) u_1^{s_1} ... u_d^{s_d}.

For simplicity, we assume that s_i > 0 (i = 1, 2, ..., d). Then

∂L/∂u_i = ((∂t/∂u_i) u_i + s_i t) u_1^{s_1} ... u_i^{s_i - 1} ... u_d^{s_d}.

By using the blowing-ups u_i = v_1 v_2 ... v_i (i = 1, 2, ..., d) and the notation σ_p = s_p + s_{p+1} + ... + s_d, it is easy to show that

det I(v) ≤ ∏_{p=1}^{d} v_p^{2dσ_p + p - d - 2},   du = (∏_{p=1}^{d} |v_p|^{d-p}) dv.   (3)

By using H(g(u))^z = a(u)^z ∏_{p=1}^{d} v_p^{2σ_p z} and Lemma 1 (1), in order to prove the latter half of the theorem, it is sufficient to prove that the corresponding integral J(z) has a pole at z = -d/2 of order m = 1. Direct calculation of the integrals in J(z) completes the theorem. (Q.E.D.)

4 Three-Layer Perceptron

In this section, we study the cases when the learner is a three-layer perceptron and the true distribution is contained and not contained. We define the three-layer perceptron p(x, v|w) with M
input units, K hidden units, and N output units, where x is an input, v is an output, and w is a parameter:

p(x, v|w) = r(x) (1/(2πσ²)^{N/2}) exp(-(1/(2σ²)) ||v - f_K(x, w)||²),

f_K(x, w) = Σ_{k=1}^{K} a_k σ(b_k · x + c_k),

where w = {(a_k, b_k, c_k) ; a_k ∈ R^N, b_k ∈ R^M, c_k ∈ R^1}, r(x) is the probability density on the input, and σ² is the variance of the output (neither r(x) nor σ is estimated).

Theorem 3 If the true distribution is represented by the three-layer perceptron with K_0 ≤ K hidden units, and if a positive prior is employed, then

λ ≤ (1/2) {K_0 (M + N + 1) + (K - K_0) min(M + 1, N)}.   (4)

(Outline of Proof) Firstly, we consider the case when the true regression function g(x) = 0. Then

H(w) = (1/(2σ²)) ∫ ||f_K(x, w)||² r(x) dx.   (5)

Let a_k = (a_k1, ..., a_kN) and b_k = (b_k1, ..., b_kM). Let us consider a blowing-up

a_11 = α,   a_kj = α a'_kj ((k, j) ≠ (1, 1)),   b_k = b'_k,   c_k = c'_k.

Then da db dc = α^{KN-1} dα da' db' dc', and there exists an analytic function H_1(a', b', c') such that H(a, b, c) = α² H_1(a', b', c'). Therefore J(z) has a pole at z = -KN/2. Also, by using another blowing-up, da db dc = α^{(M+1)K-1} dα da'' db'' dc'' and there exists an analytic function H_2(a'', b'', c'') such that H(a, b, c) = α² H_2(a'', b'', c''), which shows that J(z) has a pole at z = -K(M + 1)/2. By combining both results, we obtain λ ≤ (K/2) min(M + 1, N). Secondly, we prove the general case, 0 < K_0 ≤ K: near the true parameter, H(w) is bounded by the sum of a regular part for the K_0 true units and a singular part for the K - K_0 redundant units.   (6)

By combining Lemma 1 (2) and the above result, we obtain the theorem. (Q.E.D.)

If the true regression function g(x) is not contained in the learning model, we assume that, for each 0 ≤ k ≤ K, there exists a parameter w_0^{(k)} ∈ W that minimizes the square error. We use the notations E(k) for the minimum square error attained by w_0^{(k)}, and

λ(k) = (1/2) {k (M + N + 1) + (K - k) min(M + 1, N)}.

Theorem 4 If the true regression function is not contained in the learning model and a positive prior is applied, then

F(n) ≤ min_{0≤k≤K} [ (n/(2σ²)) E(k) + λ(k) log n ] + O(1).

(Outline of Proof) This theorem can be shown by the same procedure as eq.(6) in the preceding theorem. (Q.E.D.)
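The bound (4) and the trade-off in Theorem 4 are easy to tabulate. In the sketch below, lambda_k implements the right-hand side of (4) with K_0 = k; everything else is a hypothetical illustration, not from the paper: the network sizes, σ² = 1, and in particular the decreasing approximation error E(k) = 1/(k+1)², which merely stands in for the true minimum square error sequence.

```python
import math

def lambda_k(k, K, M, N):
    # λ(k) = (1/2){k(M+N+1) + (K-k) min(M+1, N)}: bound (4) with K0 = k.
    return 0.5 * (k * (M + N + 1) + (K - k) * min(M + 1, N))

K, M, N, sigma2 = 20, 5, 3, 1.0   # hypothetical sizes for illustration

def E(k):
    # Hypothetical approximation error of the best network using only
    # k effective hidden units (an assumption, not derived in the paper).
    return 1.0 / (k + 1) ** 2

def best_k(n):
    # k minimizing (1/n) times the Theorem 4 bound:
    # E(k)/(2σ²) + λ(k) log n / n.
    return min(range(K + 1),
               key=lambda k: E(k) / (2 * sigma2)
               + lambda_k(k, K, M, N) * math.log(n) / n)

for n in (10, 1000, 100000):
    print(n, best_k(n))   # the minimizing k grows with the sample size
```

Note that λ(K) = K(M + N + 1)/2 = d/2 recovers the regular value, while every k < K trades a smaller log n coefficient against a nonzero approximation error; with the assumed E(k), small samples favor effectively smaller networks, as remarked in the text that follows.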
If G(n) has an asymptotic expansion G(n) = Σ_{q=1}^{Q} a_q f_q(n), where f_q(n) is a decreasing function of n that satisfies f_{q+1}(n) = o(f_q(n)) and f_Q(n) = 1/n, then

G(n) ≤ min_{0≤k≤K} [ E(k)/(2σ²) + λ(k)/n ],

which shows that the generalization error of the layered network is smaller than that of the regular statistical models even when the true distribution is not contained in the learning model. It should be emphasized that the optimal k that minimizes this bound is smaller than K when n is not so large, and it becomes larger as n increases. This fact shows that a positive prior is useful for generalization but not appropriate for model selection. Under the condition that the true distribution is contained in the parametric model, Jeffreys' prior may enable us to find the true model with higher probability.

Theorem 5 If the true regression function is contained in the three-layer perceptron and Jeffreys' prior is applied, then λ = d/2 and m = 1, even if the Fisher metric is degenerate at the true parameter.

(Outline of Proof) For simplicity, we prove the theorem for the case g(x) = 0. The general cases can be proven by the same method. By direct calculation of the Fisher information matrix, there exists an analytic function D(b, c) ≥ 0 such that

det I(w) = ∏_{k=1}^{K} (Σ_{p=1}^{N} a_{kp}²)^{M+1} D(b, c).

By using a blowing-up as in the proof of Theorem 3, we obtain H(w) = α² H_1(a', b', c'), det I(w) ∝ α^{2(M+1)K}, and da db dc = α^{NK-1} dα da' db' dc'. The integral

J(z) = ∫ α^{2z} α^{(M+1)K + NK - 1} dα

has a pole at z = -(M + N + 1)K/2. By combining this result with Theorem 3, we obtain Theorem 5. (Q.E.D.)

5 Discussion

In many applications of neural networks, rather complex machines are employed compared with the number of training samples. In such cases, the set of optimal parameters is not one point but an analytic set with singularities, and the set of almost optimal parameters {w ; H(w) < ε} is not an 'ellipsoid'.
Hence the Kullback distance can neither be approximated by any quadratic form, nor can the saddle point approximation be used in integration over the parameter space. The zeta function of the Kullback distance clarifies the behavior of the stochastic complexity, and resolution of singularities enables us to calculate the learning efficiency.

6 Conclusion

The relation between algebraic geometry and learning theory is clarified, and two different facts are proven. (1) If the true distribution is not contained in a hierarchical learning model, then by using a positive prior, the generalization error is made smaller than that of the regular statistical models. (2) If the true distribution is contained in the learning model and if Jeffreys' prior is used, then the average Bayesian factor has the same form as BIC.

Acknowledgments

This research was partially supported by the Ministry of Education, Science, Sports and Culture in Japan, Grant-in-Aid for Scientific Research 12680370.

References

[1] Akaike, H. (1980) Likelihood and Bayes procedure. Bayesian Statistics (Bernardo, J. M., ed.), University Press, Valencia, Spain, 143-166.
[2] Amari, S. (1985) Differential-Geometrical Methods in Statistics. Lecture Notes in Statistics, Springer.
[3] Atiyah, M. F. (1970) Resolution of singularities and division of distributions. Comm. Pure and Appl. Math., 23, 145-150.
[4] Dacunha-Castelle, D., & Gassiat, E. (1997) Testing in locally conic models, and application to mixture models. Probability and Statistics, 1, 285-317.
[5] Hironaka, H. (1964) Resolution of singularities of an algebraic variety over a field of characteristic zero. Annals of Math., 79, 109-326.
[6] Kashiwara, M. (1976) B-functions and holonomic systems. Invent. Math., 38, 33-53.
[7] Schwarz, G. (1978) Estimating the dimension of a model. Ann. of Stat., 6 (2), 461-464.
[8] Watanabe, S. (1998) On the generalization error by a layered statistical model with Bayesian estimation. IEICE Transactions, J81-A (10), 1442-1452.
English version: (2000) Electronics and Communications in Japan, Part 3, 83 (6), 95-104.
[9] Watanabe, S. (2000) Algebraic analysis for non-regular learning machines. Advances in Neural Information Processing Systems, 12, 356-362.
[10] Watanabe, S. (2001) Algebraic analysis for non-identifiable learning machines. Neural Computation, to appear.